\section{Introduction}
\label{sec:intro}
\emph{Gender diversity}, or more often its lack thereof, among participants in
software development activities has been thoroughly studied in recent years. In
particular, the presence of, effects of, and countermeasures for \emph{gender
bias} in Free/Open Source Software (FOSS) have received a lot of attention
over the past decade~\cite{david2008fossdevs, qiu2010kdewomen,
nafus2012patches, kuechler2012genderfoss, vasilescu2014gender,
oneil2016debiansurvey, robles2016womeninfoss, terrell2017gender,
zacchiroli2021gender}. \emph{Geographic diversity}, on the other hand, is the
kind of diversity that stems from participants in a global activity coming
from different world regions and cultures.
Geographic diversity in FOSS has received relatively little attention in scholarly
works. In particular, while seminal survey-based and
point-in-time medium-scale studies of the geographic origins of FOSS
contributors exist~\cite{ghosh2005understanding, david2008fossdevs,
barahona2008geodiversity, takhteyev2010ossgeography, robles2014surveydataset,
wachs2021ossgeography}, large-scale longitudinal studies of the geographic
origin of FOSS contributors are still lacking. Such a quantitative
characterization would be useful to inform decisions related to global
development teams~\cite{herbsleb2007globalsweng} and hiring strategies in the
information technology (IT) market, as well as contribute factual information
to the debates on the economic impact and sociology of FOSS around the world.
\paragraph{Contributions}
With this work we contribute to closing this gap by conducting \textbf{the first
longitudinal study of the geographic origin of contributors to public code
over 50 years.} Specifically, we provide a preliminary answer to the
following research question:
\begin{researchquestion}
From which world regions do authors of publicly available commits come, and
how has this changed over the past 50 years?
\label{rq:geodiversity}
\end{researchquestion}
We use as dataset the \SWH/ archive~\cite{swhipres2017} and analyze from it
2.2 billion\xspace commits archived from 160 million\xspace projects and authored by
43 million\xspace authors during the 1971--2021 time period.
We geolocate developers to
\DATAWorldRegions/ world regions, using as signals email country code top-level domains (ccTLDs) and
author (first/last) names compared with name distributions around the world, and UTC offsets
mined from commit metadata.
We find evidence of the early dominance of North America in open source
software, later joined by Europe. After that period, the geographic diversity
in public code has been constantly increasing.
We also identify relevant historical shifts
related to the end of the UNIX wars and the increase of coding literacy in
Central and South Asia, as well as of broader phenomena like colonialism and
people movement across countries (immigration/emigration).
\paragraph{Data availability.}
A replication package for this paper is available from Zenodo at
\url{https://doi.org/10.5281/zenodo.6390355}~\cite{replication-package}.
\section{Related Work}
\label{sec:related}
Both early and recent works~\cite{ghosh2005understanding, david2008fossdevs,
robles2014surveydataset, oneil2016debiansurvey} have characterized the
geography of Free/Open Source Software (FOSS) using \emph{developer surveys},
which provide high-quality answers but are limited in size (2--5\,K developers)
and can be biased by participant sampling.
In 2008 Barahona et al.~\cite{barahona2008geodiversity} conducted a seminal
large-scale (for the time) study on FOSS \emph{geography using mining software
repositories (MSR) techniques}. They analyzed the origin of 1\,M contributors
using the SourceForge user database and mailing list archives over the
1999--2005 period, using as signals information similar to ours: email domains
and UTC offsets.
The period studied in~\cite{barahona2008geodiversity} (7 years) is shorter than
the one studied in the present paper (50 years) and the data sources are
largely different; with that in mind, our results show a slightly larger share of
European v.~North American contributions.
Another empirical work from 2010 by Takhteyev and
Hilts~\cite{takhteyev2010ossgeography} harvested self-declared geographic
locations of GitHub accounts by recursively following their connections,
collecting information for $\approx$\,70\,K GitHub users. A very recent
work by Wachs et al.~\cite{wachs2021ossgeography} geolocated half a million
GitHub users, each having contributed at least 100 commits and self-declaring
a location on their GitHub profile. While the study is
point-in-time as of 2021, the authors compare their findings
against~\cite{barahona2008geodiversity, takhteyev2010ossgeography} to
characterize the evolution of FOSS geography over the time snapshots taken by
the three studies.
Compared with previous empirical works, our study is much larger scale---having
analyzed 43 million\xspace authors of 2.2 billion\xspace commits from 160 million\xspace
projects---longitudinal over 50 years of public code contributions rather than
point in time, and also more fine-grained (with year-by-year granularity over
the observed period). Methodologically, our study relies on Version Control
System (VCS) commit data rather than platform-declared location information.
Other works---in particular the work by Daniel~\cite{daniel2013ossdiversity}
and, more recently, Rastogi et al.~\cite{rastogi2016geobias,
rastogi2018geobias, prana2021geogenderdiversity}---have studied geographic
\emph{diversity and bias}, i.e., the extent to which the origin of FOSS
developers affects their collaborative coding activities.
In this work we characterize geographic diversity in public code for the first
time at this scale, both in terms of contributors and observation period. We do
not tackle the bias angle, but provide empirical data and findings that can be
leveraged to that end as future work.
\emph{Global software engineering}~\cite{herbsleb2007globalsweng} is the
sub-field of software engineering that has analyzed the challenges of scaling
developer collaboration globally, including the specific concern of how to deal
with geographic diversity~\cite{holmstrom2006globaldev, fraser2014eastwest}.
Decades later the present study provides evidence that can be used, in the
specific case of public code and at a very large scale, to verify which
promises of global software engineering have borne fruit.
\section{Methodology}
\label{sec:method}
\newif\ifgrowthfig \growthfigtrue
\ifgrowthfig
\begin{figure}
\includegraphics[width=\columnwidth]{yearly-commits}
\caption{Yearly public commits over time (log scale).
}
\label{fig:growth}
\end{figure}
\fi
\paragraph{Dataset}
We retrieved from \SWH/~\cite{swh-msr2019-dataset} all commits archived until \DATALastCommitDate/.
They amount to \DATACommitsRaw/ commits, unique by SHA1 identifier, harvested from \DATATotalCommitsInSH/ public projects coming from major development forges (GitHub, GitLab, etc.) and package repositories (Debian, PyPI, NPM, etc.).
Commits in the dataset are by \DATAAuthorsRaw/ authors, unique by $\langle$name, email$\rangle$ pairs.
The dataset came as two relational tables, one for commits and one for authors, with the former referencing the latter via a foreign key.
\iflong
Each row in the commit table contains the following fields: commit SHA1 identifier, author and committer timestamps, author and committer identifiers (referencing the author table).
The distinction between commit authors and committers comes from Git, which allows one to commit a change authored by someone else.
For this study we focused on authors and ignored committers, as the difference between the two is not relevant for our research question and the number of commits with a committer other than their author is negligible.
\fi
For each entry in the author table we have author full name and email as two separate strings of raw bytes.
We removed implausible or unusable names that: are not decodable as UTF-8 (\DATAAuthorsRmNondecodable/ author names removed), are email addresses instead of names (\DATAAuthorsRmEmail/ ``names''), consist of only blank characters (\DATAAuthorsRmBlank/), contain more than 10\% non-letters (\DATAAuthorsRmNonletter/), are longer than 100 characters (\DATAAuthorsRmToolong/).
After filtering, about \DATAAuthorsPlausibleApprox/ authors (\DATAAuthorsPlausiblePct/ of the initial dataset) remained for further analysis.
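For illustration, the filtering rules above can be sketched as follows (a minimal Python sketch; the function name and the email check are ours, while the 10\% and 100-character thresholds are those stated above):
\begin{verbatim}
MAX_LEN = 100          # maximum plausible name length
MAX_NONLETTER = 0.10   # maximum fraction of non-letter characters

def plausible_name(raw: bytes) -> bool:
    """Heuristic author-name filter (sketch of the rules above)."""
    try:
        name = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False                 # not decodable as UTF-8
    name = name.strip()
    if not name:
        return False                 # blank-only name
    if "@" in name:
        return False                 # looks like an email, not a name
    if len(name) > MAX_LEN:
        return False                 # implausibly long
    letters = sum(c.isalpha() for c in name)
    return 1 - letters / len(name) <= MAX_NONLETTER
\end{verbatim}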
Note that the amount of public code commits (and authors) contained in the
initial dataset grows exponentially over
time~\cite{swh-provenance-emse}\ifgrowthfig, as shown for commits in
\Cref{fig:growth}\else: from $10^4$ commits in 1971, to $10^6$ in 1998, to
almost $10^9$ in 2020\fi. As a consequence the observed trends tend to be more
stable in recent decades than in 40+ year-old ones, due to statistics taken on
exponentially larger populations.
\paragraph{Geolocation}
\begin{figure}
\centering
\includegraphics[clip,trim=6cm 6cm 0 0,width=\linewidth]{subregions-ours}
\caption{The \DATAWorldRegions/ world regions used as geolocation targets.}
\label{fig:worldmap}
\end{figure}
As geolocation targets we use macro world regions derived from the United Nations geoscheme~\cite{un1999geoscheme}.
To avoid domination by large countries (e.g., China or Russia) within macro regions, we merged and split some regions based on geographic proximity and the sharing of preeminent cultural identification features, such as spoken language.
\Cref{fig:worldmap} shows the final list of \DATAWorldRegions/ world regions used as geolocation targets in this study.
Geolocation of commit authors to world regions uses the two complementary techniques introduced in~\cite{icse-seis-2022-gender}, briefly recalled below.
The first one relies on the country code top-level domain (ccTLD) of email addresses extracted from commit metadata, e.g., \texttt{.fr}, \texttt{.ru}, \texttt{.cn}, etc.
We started from the IANA list of Latin character ccTLDs~\cite{wikipedia-cctld} and manually mapped each corresponding territory to a target world region.
The second geolocation technique uses the UTC offset of commit timestamps (e.g., UTC-05:00) and author names to determine the most likely world region of the commit author.
For each UTC offset we determine a list of compatible places (country, state, or dependent territory) in the world that, at the time of that commit, had that UTC offset; commit time is key here, as country UTC offsets vary over time due to timezone changes.
To make this determination we use the IANA time zone database~\cite{tzdata}.
Then we assign to each place a score that captures the likelihood that a given author name is characteristic of it.
To this end we use the Forebears dataset of the frequencies of the most common first and family names which, quoting from~\cite{forebear-names}: {\itshape ``provides the approximate incidence of forenames and surnames produced from a database of \num{4 044 546 938} people (55.5\% of living people in 2014). As of September 2019 it covers \num{27 662 801} forenames and \num{27 206 821} surnames in 236 jurisdictions.''}
As in our dataset authors are full name strings (rather than split by first/family name), we first tokenize names (by blanks and case changes) and then look up individual tokens in both the first and family name frequency lists.
For each element found in name lists we multiply the place population\footnotemark{} by the name frequency to obtain a measure that is proportional to the number of persons bearing that name (token) in the specific place.
\footnotetext{To obtain population totals---as the notion of ``place'' is heterogeneous: full countries v.~slices of large countries spanning multiple timezones---we use a mixture of primary sources (e.g., government websites), and non-primary ones (e.g., Wikipedia articles).}
We sum this figure for all elements to obtain a place score, ending up with a list of $\langle$place, score$\rangle$ pairs.
We then partition this list by the world region that a place belongs to and sum the score for all the places in each region to obtain an overall score, corresponding to the likelihood that the commit belongs to a given world region.
We then assign the commit to the world region with the highest score.
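For illustration, the scoring just described can be sketched as follows (a minimal Python sketch; the helpers \texttt{places\_for\_offset}, \texttt{forename\_freq}, \texttt{surname\_freq}, \texttt{population}, and \texttt{region\_of} are placeholders standing in for the IANA time zone database, the Forebears frequency lists, and our population and region tables, not actual APIs):
\begin{verbatim}
from collections import defaultdict

def offset_name_region(name_tokens, utc_offset, commit_date,
                       places_for_offset, forename_freq, surname_freq,
                       population, region_of):
    """Offset/name-based geolocation (sketch): score places compatible
    with the commit's UTC offset by how common the author's name is."""
    scores = defaultdict(float)
    for place in places_for_offset(utc_offset, commit_date):
        score = 0.0
        for token in name_tokens:
            freq = forename_freq.get((token, place), 0.0) \
                 + surname_freq.get((token, place), 0.0)
            # proportional to the number of people bearing the name there
            score += freq * population[place]
        scores[region_of(place)] += score
    return max(scores, key=scores.get) if scores else None
\end{verbatim}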
The email-based technique suffers from the limited and unbalanced use of ccTLDs: most developers use generic TLDs such as \texttt{.com}, \texttt{.org}, or \texttt{.net}.
Moreover this does not happen uniformly across zones: US-based developers, for example, use the \texttt{.us} ccTLD much more rarely than their European counterparts use theirs.
On the other hand, the offset/name-based technique relies on the UTC offset of the commit timestamps.
Due to tool configurations on developer setups, a large number of commits in the dataset have a UTC offset equal to zero.
This affects recent commits less (\DATACommitsTZZTwoThousandTwenty/ of commits from 2020 have a zero offset) than older ones (\DATACommitsTZZTwoThousand/ in 2000).
As a result the offset/name-based technique could end up detecting a large share of older commits as authored by African developers and, to a lesser extent, European ones.
To counter these issues we combine the two geolocation techniques by applying the offset/name-based technique to all commits with a non-zero UTC offset, and the email-based one to all other commits.
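The resulting per-commit dispatch can be sketched as follows (again a Python sketch with placeholder names; \texttt{tokenize} splits names by blanks and case changes as described above, and \texttt{REGION\_OF\_CCTLD} is the manually curated ccTLD-to-region mapping):
\begin{verbatim}
def geolocate_commit(commit):
    """Combined geolocation (sketch): offset/name-based for commits with
    a non-zero UTC offset, email ccTLD-based for all other commits."""
    if commit.utc_offset != 0:
        return offset_name_region(
            tokenize(commit.author_name), commit.utc_offset, commit.date,
            places_for_offset, forename_freq, surname_freq,
            population, region_of)
    tld = commit.author_email.rsplit(".", 1)[-1].lower()
    # generic TLDs (.com, .org, .net, ...) are absent from the mapping
    return REGION_OF_CCTLD.get(tld)
\end{verbatim}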
\section{Results and Discussion}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{stacked.pdf}
\caption{Ratio of commits (above) and active authors (below) by world zone over the 1971--2020 period.}
\Description[Chart]{Stacked bar chart showing the world zone ratios for commits and authors over the 1971--2020 period.}
\label{fig:results}
\end{figure*}
To answer \cref{rq:geodiversity} we gathered the number of commits and distinct authors per year and per world zone.
We present the obtained results in \Cref{fig:results} as two stacked bar charts, showing yearly breakdowns for commits and authors respectively.
Every bar represents a year and is partitioned in slices showing the commit/author ratio for each of the world regions of \Cref{fig:worldmap} in that year.
To avoid outliers due to sporadic contributors, in the author chart we only consider authors having contributed at least 5 commits in a given year.
While observing trends in the charts, remember that the total numbers of commits and authors grow exponentially over time.
Hence, for the first years in the charts, the number of data points in some world regions can be extremely small, with negative consequences for the stability of trends.
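The aggregation behind both charts can be sketched as follows (a Python/pandas sketch assuming a flattened table with \texttt{year}, \texttt{region}, and \texttt{author\_id} columns; the actual analysis runs on the relational tables of \Cref{sec:method}):
\begin{verbatim}
import pandas as pd

def yearly_ratios(commits: pd.DataFrame, min_commits: int = 5):
    """Per-year, per-region ratios of commits and of active authors
    (authors with at least min_commits commits in a given year)."""
    by_region = commits.groupby(["year", "region"]).size()
    totals = by_region.groupby(level="year").transform("sum")
    commit_ratios = by_region / totals

    per_author = commits.groupby(["year", "region", "author_id"]).size()
    active = per_author[per_author >= min_commits]
    authors = active.groupby(level=["year", "region"]).size()
    author_totals = authors.groupby(level="year").transform("sum")
    author_ratios = authors / author_totals
    return commit_ratios, author_ratios
\end{verbatim}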
\paragraph{Geographic diversity over time}
Overall, the general trend appears to be that the \textbf{geographic diversity in public code is increasing}: North America and Europe alternated their ``dominance'' until the middle of the 90s; from that moment on most other world regions show a slow but steady increment.
This trend of increased participation in public code development includes Central and South Asia (comprising India), Russia, Africa, and Central and South America.
Note that zones that do not seem to follow this trend, such as Australia and New Zealand, are also increasing their participation, just at a lower pace than other zones.
For example, Australia and New Zealand increased the absolute number of their commits by about three orders of magnitude from 2000 to the present day.
Another interesting phenomenon that can be appreciated in both charts is the sudden contraction of contributions from North America in 1995; since the charts depict ratios, this corresponds to other zones, and Europe in particular, increasing their share.
An analysis of the top contributors in the years right before the contraction shows that nine out of ten have \texttt{ucbvax.Berkeley.EDU} as their author email domain; the tenth is Keith Bostic, one of the leading BSD Unix developers, appearing with the email \texttt{bostic}.
No developer with the same email domain appears anymore within the first hundred contributors in 1996.
This shows the relevance that BSD Unix and the Computer Systems Research Group at the University of California at Berkeley had in the history of open source software.
The group was disbanded in 1995, partially as a consequence of the so-called UNIX wars~\cite{kernighan2019unixhistory}, and this contributes significantly---also because of the relatively low amount of public code circulating at the time---to the sudden drop of contributions from North America in subsequent years.
Descendant UNIX operating systems based on BSD, such as OpenBSD, FreeBSD, and NetBSD, had smaller relevance to world trends due to (i) the increasing amount of open source code coming from elsewhere and (ii) their more geographically diverse developer communities.
Another time frame in which the ratios for Europe and North America are subject to large, sudden changes is 1975--79.
A preliminary analysis shows that these ratios are erratic due to the very limited number of commits in that time period, but we were unable to detect a specific root cause.
Trends for those years should be subject to further studies, in collaboration with software historians.
\paragraph{Colonialism}
Another trend that stands out from the charts is that Africa appears to be well represented.
To assess if this results from a methodological bias, we double-checked the commits detected as originating from Africa, for UTC offsets in the $[0, 3]$ range, using both the email-based and the offset/name-based methods.
The results show that the offset/name-based approach assigns 22.7\% of the commits to Africa whereas the email-based one only assigns 2.7\% of them.
While a deeper investigation is in order, it is our opinion that the phenomenon we are witnessing here is a consequence of colonialism, specifically the adoption of European names in African countries.
For example, the name Eric, derived from Old Norse, is more popular in Ghana than in France or the UK.
This challenges the ability of the offset/name-based method to correctly differentiate between candidate places.
Together with the fact that several African countries are very populous, this can lead the offset/name-based method to detect European names as originating from Africa.
While this cuts both ways, the likelihood of a random person contributing to public code is very different between European countries, which all have a well-developed software industry, and African countries, which do not all share this trait.
\paragraph{Immigration/emigration}
Another area where a similar phenomenon could be at play is the evolution of Central and South America.
Contribution from this macro region appears to be growing steadily.
To assess if this is the result of a bias introduced by the name-based detection we analyzed the evolution of offset/name-based assignment over time for authors whose email domain is among the top-ten US-based entities in terms of overall contributions (estimated in turn by analyzing the most frequent email domains and manually selecting those belonging to US-based entities).
In 1971 no author with an email from top US-based entities is detected as belonging to Central and South America, whereas in 2019 the ratio is 12\%.
Nowadays more than one tenth of the people email-associated to top US-based entities have popular Central and South American names, which we posit is a likely consequence of immigration into the US (emigration from Central and South America).
Since immigration has a much longer history than the period we are studying here, what we are witnessing probably includes its long-term consequences, such as second- and third-generation immigrants employed in white-collar jobs like software development.
\section{Limitations and Future Work}
\label{sec:conclusion}
We have performed an exploratory, yet very large scale, empirical study of the geographic diversity in public code commits over time.
We have analyzed 2.2 billion\xspace public commits covering the \DATAYearRange/ time period.
We have geolocated developers to \DATAWorldRegions/ world regions using as signals email domains, timezone offsets, and author names.
Our findings show that the geographic diversity in public code is increasing over time, and markedly so over the past 20--25 years.
Observed trends also co-occur with historical events and macro phenomena like the end of the UNIX wars, increase of coding literacy around the world, colonialism, and immigration.
\medskip
\emph{Limitations.}
This study relies on a combination of two geolocation methods: one based on email domains, another based on commit UTC offsets and author names.
We discussed some of the limitations of either method in \Cref{sec:method}, motivating our decision of restricting the use of the email-based method to commits with a zero UTC offset.
As a consequence, for most commits in the dataset the offset/name-based method is used.
With such method, the frequencies of forenames and surnames are used to rank candidate zones that have a compatible UTC offset at commit time.
A practical consequence of this is that for commits with, say, offset UTC+09:00 the candidate places can be Russia, Japan and Australia, depending on the specific date due to daylight saving time.
Popular forenames and surnames in these regions tend to be quite different so the likelihood of the method to provide a reliable detection is high.
For other offsets the set of popular forenames and surnames from candidate zones can exhibit more substantial overlaps, negatively impacting detection accuracy.
We have discussed some of these cases in \Cref{sec:results}, but others might be lingering in the results, impacting observed trends.
The choice of using the email-based method for commits with zero UTC offset, and the offset/name-based method elsewhere, has allowed us to study all developers not having a country-specific email domain (ccTLD), but comes with the risk of under-representing the world zones that have (in part and at some times of the year) an actual UTC offset of zero.
A potential bias in this study could be introduced by the fact that the name database used for offset/name-based geolocation only contains names formed using Latin alphabet characters.
We looked for names containing Chinese, Japanese, and Korean characters in the original dataset, finding only a negligible amount of authors who use non-Latin characters in their VCS names, which leads us to believe that the impact of this issue is minimal.
We did not apply identity merging (e.g., using state-of-the-art tools like SortingHat~\cite{moreno2019sortinghat}), but we do not expect this to be a significant issue because: (a) to introduce bias in author trends the distribution of identity merges around the world should be uneven, which seems unlikely; and (b) the observed commit trends (which would be unaffected by identity merging) are very similar to observed author trends.
We did not systematically remove known bot accounts~\cite{lebeuf2018swbots} from the author dataset, but we did check for the presence of software bots among the top committers of each year. We only found limited traces of continuous integration (CI) bots, used primarily to automate merge commits. After removing CI bots from the dataset the observed global trends were unchanged, therefore this paper presents unfiltered data.
\medskip
\emph{Future work.}
To some extent the above limitations are the price to pay to study such a large dataset: there exists a trade-off between large-scale analysis and accuracy.
We plan nonetheless to further investigate and mitigate them in future work.
Multi-method approaches, merging data mining with social science methods, could be applied to address some of the questions raised in this exploratory study.
While they do not scale to the whole dataset, multi-methods can be adopted to dig deeper into specific aspects, specifically those related to social phenomena.
Software is a social artifact; it is no wonder that aspects related to sociocultural evolution emerge when analyzing its evolution at this scale.
\clearpage
\section{Introduction}
One of the fundamental ingredients in the theory of non-commutative or
quantum geometry is the notion of a differential calculus.
In the framework of quantum groups the natural notion
is that of a
bicovariant differential calculus as introduced by Woronowicz
\cite{Wor_calculi}. Since non-commutativity is allowed,
the uniqueness of a canonical calculus is lost.
It is therefore desirable to classify the possible choices.
The most important piece is the space of one-forms or ``first
order differential calculus'' to which we will restrict our attention
in the following. (From this point on we will use the term
``differential calculus'' to denote a
bicovariant first order differential calculus).
Much attention has been devoted to the investigation of differential
calculi on quantum groups $C_q(G)$ of function algebra type for
$G$ a simple Lie group.
Natural differential calculi on matrix quantum groups were obtained by
Jurco \cite{Jur} and
Carow-Watamura et al.\
\cite{CaScWaWe}. A partial classification of calculi of the same
dimension as the natural ones
was obtained by
Schm\"udgen and Sch\"uler \cite{ScSc2}.
More recently, a classification theorem for factorisable
cosemisimple quantum groups was obtained by Majid \cite{Majid_calculi},
covering the general $C_q(G)$ case. A similar result was
obtained later by Baumann and Schmitt \cite{BaSc}.
Also, Heckenberger and Schm\"udgen \cite{HeSc} gave a
complete classification on $C_q(SL(N))$ and $C_q(Sp(N))$.
In contrast, for $G$ not simple or semisimple the differential calculi
on $C_q(G)$
are largely unknown. A particularly basic case is the Lie group $B_+$
associated with the Lie algebra $\lalg{b_+}$ generated by two elements
$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra
\ensuremath{U_q(\lalg{b_+})}{}
is self-dual, i.e.\ is non-degenerately paired with itself \cite{Drinfeld}.
This has an interesting consequence: \ensuremath{U_q(\lalg{b_+})}{} may be identified with (a
certain algebraic model of) \ensuremath{C_q(B_+)}. The differential calculi on this
quantum group and on its ``classical limits'' \ensuremath{C(B_+)}{} and \ensuremath{U(\lalg{b_+})}{}
will be the main concern of this paper. We pay hereby equal attention
to the dual notion of ``quantum tangent space''.
In section \ref{sec:q} we obtain the complete classification of differential
calculi on \ensuremath{C_q(B_+)}{}. It turns out that (finite
dimensional) differential
calculi are characterised by finite subsets $I\subset\mathbb{N}$.
These
sets determine the decomposition into coirreducible (i.e.\ not
admitting proper quotients) differential calculi
characterised by single integers. For the coirreducible calculi the
explicit formulas for the commutation relations and braided
derivations are given.
In section \ref{sec:class} we give the complete classification for the
classical function algebra \ensuremath{C(B_+)}{}. It is essentially the same as in the
$q$-deformed setting and we stress this by giving an almost
one-to-one correspondence of differential calculi to those obtained in
the previous section. In contrast, however, the decomposition and
coirreducibility properties do not hold at all. (One may even say that
they are maximally violated). We give the explicit formulas for those
calculi corresponding to coirreducible ones.
More interesting perhaps is the ``dual'' classical limit. I.e.\ we
view \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra with quantum enveloping
algebra \ensuremath{C(B_+)}{}. This is investigated in section \ref{sec:dual}. It
turns out that in this setting we have considerably more freedom in
choosing a
differential calculus since the bicovariance condition becomes much
weaker. This shows that this dual classical limit is in a sense
``unnatural'' as compared to the ordinary classical limit of section
\ref{sec:class}.
However, we can still establish a correspondence of certain
differential calculi to those of section \ref{sec:q}. The
decomposition properties are conserved while the coirreducibility
properties are not.
We give the
formulas for the calculi corresponding to coirreducible ones.
Another interesting aspect of viewing \ensuremath{U(\lalg{b_+})}{} as a quantum function
algebra is the connection to quantum deformed models of space-time and
its symmetries. In particular, the $\kappa$-deformed Minkowski space
coming from the $\kappa$-deformed Poincar\'e algebra
\cite{LuNoRu}\cite{MaRu} is just a simple generalisation of \ensuremath{U(\lalg{b_+})}.
We use this in section \ref{sec:kappa} to give
a natural $4$-dimensional differential calculus. Then we show (in a
formal context) that integration is given by
the usual Lebesgue integral on $\mathbb{R}^n$ after normal ordering.
This is obtained in an intrinsic context different from the standard
$\kappa$-Poincar\'e approach.
A further important motivation for the investigation of differential
calculi on
\ensuremath{U(\lalg{b_+})}{} and \ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale
Hopf algebra \cite{Majid_Planck}\cite{Majid_book}. This shall be
developed elsewhere.
In the remaining parts of this introduction we will specify our
conventions and provide preliminaries on the quantum group \ensuremath{U_q(\lalg{b_+})}, its
deformations, and differential calculi.
\subsection{Conventions}
Throughout, $\k$ denotes a field of characteristic 0 and
$\k(q)$ denotes the field of rational
functions in one parameter $q$ over $\k$.
$\k(q)$ is our ground field in
the $q$-deformed setting, while $\k$ is the
ground field in the ``classical'' settings.
Within section \ref{sec:q} one could equally well view $\k$ as the ground
field with $q\in\k^*$ not a root of unity. This point of view is
problematic, however, when obtaining ``classical limits'' as
in sections \ref{sec:class} and \ref{sec:dual}.
The positive integers are denoted by $\mathbb{N}$ while the non-negative
integers are denoted by $\mathbb{N}_0$.
We define $q$-integers, $q$-factorials and
$q$-binomials as follows:
\begin{gather*}
[n]_q=\sum_{i=0}^{n-1} q^i\qquad
[n]_q!=[1]_q [2]_q\cdots [n]_q\qquad
\binomq{n}{m}=\frac{[n]_q!}{[m]_q! [n-m]_q!}
\end{gather*}
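For example,
\begin{gather*}
[3]_q=1+q+q^2\qquad [3]_q!=(1+q)(1+q+q^2)\qquad
\binomq{3}{2}=\frac{[3]_q!}{[2]_q!\,[1]_q!}=1+q+q^2
\end{gather*}
and more generally $[n]_q=\frac{1-q^n}{1-q}$ for $q\neq 1$, so that setting
$q=1$ recovers the ordinary integers, factorials and binomial coefficients.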
For a function of several variables (among
them $x$) over $\k$ we define
\begin{gather*}
(T_{a,x} f)(x) = f(x+a)\\
(\fdiff_{a,x} f)(x) = \frac{f(x+a)-f(x)}{a}
\end{gather*}
with $a\in\k$ and similarly over $\k(q)$
\begin{gather*}
(Q_{m,x} f)(x) = f(q^m x)\\
(\partial_{q,x} f)(x) = \frac{f(x)-f(qx)}{x(1-q)}\\
\end{gather*}
with $m\in\mathbb{Z}$.
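For example, on monomials these operators act as
\begin{gather*}
(Q_{m,x}\, x^n)=q^{mn} x^n\qquad
(\partial_{q,x}\, x^n)=\frac{x^n-q^n x^n}{x(1-q)}=[n]_q\, x^{n-1}
\end{gather*}
so that $\partial_{q,x}$ reduces to the ordinary derivative in the limit
$q\to 1$, just as $\fdiff_{a,x}$ does for $a\to 0$.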
We frequently use the notion of a polynomial in an extended
sense. Namely, if we have an algebra with an element $g$ and its
inverse $g^{-1}$ (as
in \ensuremath{U_q(\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power
series in $g$ with exponents in $\mathbb{Z}$. The length of such a polynomial
is the difference between highest and lowest degree.
If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra
with the opposite product.
\subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits}
\label{sec:intro_limits}
We recall that,
in the framework of quantum groups, the duality between enveloping algebra
$U(\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie
group carries over to $q$-deformations.
In the case of
$\lalg{b_+}$, the
$q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} defined over $\k(q)$ as
\begin{gather*}
U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad
\text{with relations} \\
g g^{-1}=1 \qquad Xg=qgX \\
\cop X=X\otimes 1 + g\otimes X \qquad
\cop g=g\otimes g \\
\cou (X)=0 \qquad \cou (g)=1 \qquad
\antip X=-g^{-1}X \qquad \antip g=g^{-1}
\end{gather*}
is self-dual. Consequently, it
may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of
functions on the Lie group $B_+$ associated with $\lalg{b_+}$.
It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{}
and the function algebra $C(B_+)$.
The transition to the classical enveloping algebra is achieved by
replacing $q$
by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in
$t$, introducing a new generator $H$. Now, all expressions are written in
the form $\sum_j a_j t^j$ and only the lowest order in $t$ is kept.
The transition to the classical function algebra on the other hand is
achieved by setting $q=1$.
This may be depicted as follows:
\[\begin{array}{c @{} c @{} c @{} c}
& \ensuremath{U_q(\lalg{b_+})} \cong \ensuremath{C_q(B_+)} && \\
& \diagup \hspace{\stretch{1}} \diagdown && \\
\begin{array}{l} q=e^{-t} \\ g=e^{tH} \end{array} \Big| _{t\to 0}
&& q=1 &\\
\swarrow &&& \searrow \\
\ensuremath{U(\lalg{b_+})} & <\cdots\textrm{dual}\cdots> && \ensuremath{C(B_+)}
\end{array}\]
The self-duality of \ensuremath{U_q(\lalg{b_+})}{} is expressed as a pairing
$\ensuremath{U_q(\lalg{b_+})}\times\ensuremath{U_q(\lalg{b_+})}\to\k$
with
itself:
\[\langle X^n g^m, X^r g^s\rangle =
\delta_{n,r} [n]_q!\, q^{-n(n-1)/2} q^{-ms}
\qquad\forall n,r\in\mathbb{N}_0\: m,s\in\mathbb{Z}\]
In the classical limit this becomes the pairing $\ensuremath{U(\lalg{b_+})}\times\ensuremath{C(B_+)}\to\k$
\begin{equation}
\langle X^n H^m, X^r g^s\rangle =
\delta_{n,r} n!\, s^m\qquad \forall n,m,r\in\mathbb{N}_0\: s\in\mathbb{Z}
\label{eq:pair_class}
\end{equation}
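As a quick consistency check of (\ref{eq:pair_class}), duality of product and
coproduct gives for instance
\begin{gather*}
\langle X H, X g^s\rangle
=\langle X, X g^s\rangle\langle H, g^s\rangle
+\langle X, g^{s+1}\rangle\langle H, X g^s\rangle
=1\cdot s+0=s
\end{gather*}
using $\cop(X g^s)=X g^s\otimes g^s+g^{s+1}\otimes X g^s$, in agreement with
(\ref{eq:pair_class}) evaluated directly at $n=r=m=1$.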
\subsection{Differential Calculi and Quantum Tangent Spaces}
In this section we recall some facts about differential calculi
along the lines of Majid's treatment in \cite{Majid_calculi}.
Following Woronowicz \cite{Wor_calculi}, first order bicovariant differential
calculi on a quantum group $A$ (of
function algebra type) are in one-to-one correspondence to submodules
$M$ of $\ker\cou\subset A$ in the category $^A_A\cal{M}$ of (say) left
crossed modules of $A$ via left multiplication and left adjoint
coaction:
\[
a\triangleright v = av \qquad \mathrm{Ad_L}(v)
=v_{(1)}\antip v_{(3)}\otimes v_{(2)}
\qquad \forall a\in A, v\in A
\]
More precisely, given a crossed submodule $M$, the corresponding
calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a =
\pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection).
The right action and coaction on $\Gamma$ are given by
the right multiplication and coproduct on $A$, the left action and
coaction by the tensor product ones with $\ker\cou/M$ as a left
crossed module. In all of what follows, ``differential calculus'' will
mean ``bicovariant first order differential calculus''.
Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$
dually paired with $A$
(which we might think of as being of enveloping algebra type), we can
express the coaction of $A$ on
itself as an action of $H^{op}$ using the pairing:
\[
h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)}
\qquad \forall h\in H^{op}, v\in A
\]
Thereby we change from the category of (left) crossed $A$-modules to
the category of left modules of the quantum double $A\!\bowtie\! H^{op}$.
In this picture the pairing between $A$ and $H$ descends to a pairing
between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and
$\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in
$A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou\subset H$
that annihilates $M$. $L$ is called a ``quantum tangent space''
and is dual to the differential calculus $\Gamma$ generated by $M$ in
the sense that $\Gamma\cong \Lin(L,A)$ via
\begin{equation}
A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad
v\otimes a \mapsto \langle \cdot, v\rangle a
\label{eq:eval}
\end{equation}
if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate.
The quantum tangent spaces are obtained directly by dualising the
(left) action of the quantum double on $A$ to a (right) action on
$H$. Explicitly, this is the adjoint action and the coregular action
\[
h \triangleright x = h_{(1)} x \antip h_{(2)} \qquad
a \triangleright x = \langle x_{(1)}, a \rangle x_{(2)}\qquad
\forall h\in H, a\in A^{op},x\in H
\]
where we have converted the right action to a left action by going
from \mbox{$A\!\bowtie\! H^{op}$}-modules to \mbox{$H\!\bowtie\! A^{op}$}-modules.
Quantum tangent spaces are subspaces of $\ker\cou\subset H$ invariant
under the projection of this action to $\ker\cou$ via \mbox{$x\mapsto
x-\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be
converted to a left coaction of $H$ being the comultiplication (with
subsequent projection onto $H\otimes\ker\cou$).
We can use the evaluation map (\ref{eq:eval})
to define a ``braided derivation'' on elements of the quantum tangent
space via
\[\partial_x:A\to A\qquad \partial_x(a)={\diff a}(x)=\langle
x,a_{(1)}\rangle a_{(2)}\qquad\forall x\in L, a\in A\]
This obeys the braided derivation rule
\[\partial_x(a b)=(\partial_x a) b
+ a_{(2)} \partial_{a_{(1)}\triangleright x}b\qquad\forall x\in L, a\in A\]
Given a right invariant basis $\{\eta_i\}_{i\in I}$ of $\Gamma$ with a
dual basis $\{\phi_i\}_{i\in I}$ of $L$ we have
\[{\diff a}=\sum_{i\in I} \eta_i\cdot \partial_i(a)\qquad\forall a\in A\]
where we denote $\partial_i=\partial_{\phi_i}$. (This can be easily
seen to hold by evaluation against $\phi_i\ \forall i$.)
\section{Classification on \ensuremath{C_q(B_+)}{} and \ensuremath{U_q(\lalg{b_+})}{}}
\label{sec:q}
In this section we completely classify differential calculi on \ensuremath{C_q(B_+)}{}
and, dually, quantum tangent spaces on \ensuremath{U_q(\lalg{b_+})}{}. We start by
classifying the relevant crossed modules and then proceed to a
detailed description of the calculi.
\begin{lem}
\label{lem:cqbp_class}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ensuremath{C_q(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs $(1,\{n\})$ with $n$ the
codimension. The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{n\})$.
\end{lem}
\begin{proof}
(a) Let $M\subseteq\ensuremath{C_q(B_+)}$ be a crossed \ensuremath{C_q(B_+)}-submodule by left
multiplication and left adjoint coaction and let
$\sum_n X^n P_n(g) \in M$, where $P_n$ are polynomials in $g,g^{-1}$
(every element of \ensuremath{C_q(B_+)}{} can be expressed in
this form). From the formula for the coaction ((\ref{eq:adl}), see appendix)
we observe that for all $n$ and for all $t\le n$ the element
\[X^t P_n(g) \prod_{s=1}^{n-t} (1-q^{s-n}g)\]
lies in $M$.
In particular
this is true for $t=n$, meaning that elements of constant degree in $X$
lie separately in $M$. It is therefore enough to consider such
elements.
Let now $X^n P(g) \in M$.
By left multiplication $X^n P(g)$ generates any element of the form
$X^k P(g) Q(g)$, where $k\ge n$ and $Q$ is any polynomial in
$g,g^{-1}$. (Note that $Q(q^kg) X^k=X^k Q(g)$.)
We see that $M$ contains the following elements:
\[\begin{array}{ll}
\vdots & \\
X^{n+2} & P(g) \\
X^{n+1} & P(g) \\
X^n & P(g) \\
X^{n-1} & P(g) (1-q^{1-n}g) \\
X^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\
\vdots & \\
X & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g) \\
& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g)(1-g)
\end{array}
\]
Moreover, if $M$ is generated by $X^n P(g)$ as a module
then these elements generate a basis for $M$ as a vector
space by left
multiplication with polynomials in $g,g^{-1}$. (Observe that the
application of the coaction to any of the elements shown does not
generate elements of new type.)
Now, let $M$ be a given crossed submodule. We pick, among the
elements in $M$ of the form $X^n P(g)$ with $P$ of minimal
length,
one
with lowest degree in $X$. Then certainly the elements listed above are
in $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must
contain $P$ as a factor and for $k<n$, $Q$ must contain $P(g) (1-q^{1-n}g)$
as a factor. We continue by picking the smallest $n_2$, so that
$X^{n_2} P(g) (1-q^{1-n}g) \in M$. Certainly $n_2<n$. Again, for any
element $X^l Q(g)$ in $M$ with $l<n_2$, we have that
$P(g) (1-q^{1-n}g) (1-q^{1-n_2}g)$ divides $Q(g)$. We proceed by
induction, until we arrive at degree zero in $X$.
We obtain the following elements generating a basis for $M$ by left
multiplication with polynomials in $g,g^{-1}$ (rename $n_1=n$):
\[ \begin{array}{ll}
\vdots & \\
X^{n_1+1} & P(g) \\
X^{n_1} & P(g) \\
X^{n_1-1} & P(g) (1-q^{1-{n_1}}g) \\
\vdots & \\
X^{n_2} & P(g) (1-q^{1-{n_1}}g) \\
X^{n_2-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g)\\
\vdots & \\
X^{n_3} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) \\
X^{n_3-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) (1-q^{1-n_3}g)\\
\vdots & \\
& P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g) (1-q^{1-n_3}g) \ldots (1-q^{1-n_m}g)
\end{array}
\]
We see that the integers $n_1,\ldots,n_m$ uniquely determine the shape
of this picture. The polynomial $P(g)$ on the other hand can be
shifted (by $g$ and $g^{-1}$) or renormalised. To determine $M$
uniquely we shift and normalise $P$ in such a way that it contains no
negative powers
and has unit constant coefficient. $P$ can then be viewed as a
polynomial $\in\k(q)[g]$.
We see that the codimension of $M$ is the sum of the lengths of the
polynomials in $g$ over all degrees in $X$ in the above
picture. Finite codimension corresponds to $P=1$. In this
case the codimension is the sum
$n_1+\ldots +n_m$.
(b) We observe that polynomials of the form $1-q^{j}g$
have no common divisors for distinct $j$. Therefore,
finite codimensional crossed
submodules are maximal if and only if
there is just one integer ($m=1$). Thus, the maximal left
crossed submodule of
codimension $k$ is generated by $X^k$ and $1-q^{1-k}g$.
For an infinite codimensional crossed submodule we certainly need
$m=0$. Then, the maximality corresponds to irreducibility of
$P$.
(c) This is again due to the distinctness of factors $1-q^j g$.
\end{proof}
\begin{cor}
\label{cor:cqbp_eclass}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C_q(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in lemma \ref{lem:cqbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $1\in I$.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs
$(1,\{1,n\})$ with $n\ge 2$ the
codimension. The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{1,n\})$.
\end{cor}
\begin{proof}
First observe that $\sum_n X^n P_n(g)\in \ker\cou$ if and only if
$(1-g)$ divides $P_0(g)$. This is to say that $\ker\cou$
is the crossed submodule corresponding to the pair $(1,\{1\})$ in
lemma \ref{lem:cqbp_class}. We obtain the classification
from that of lemma \ref{lem:cqbp_class} by intersecting
everything with this crossed submodule. In particular, this reduces
the codimension by one in the finite codimensional case.
\end{proof}
\begin{lem}
\label{lem:uqbp_class}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint
action and left
regular coaction are in one-to-one correspondence to the set
$3^{\mathbb{N}_0}\times2^{\mathbb{N}}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets $I\subset\mathbb{N}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{lem}
\begin{proof}
(a) The action takes the explicit form
\[g\triangleright X^n g^k = q^{-n} X^n g^k\qquad
X\triangleright X^n g^k = X^{n+1}g^k(1-q^{-(n+k)})\]
while the coproduct is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binomq{n}{r}
q^{-r(n-r)} X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction here.
Let now $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ be a crossed \ensuremath{U_q(\lalg{b_+})}-submodule via this action
and coaction. For $\sum_n X^n P_n(g)\in L$ invariance under
the action by
$g$ clearly means that \mbox{$X^n P_n(g)\in L\ \forall n$}. Then from
invariance under the coaction we can conclude that
if $X^n \sum_j a_j g^j\in L$ we must have
$X^n g^j\in L\ \forall j$.
I.e.\ elements of the form $X^n g^j$ lie separately in $L$ and it is
sufficient to consider such elements. From the coaction we learn that
if $X^n g^j\in L$ we have $X^m g^j\in L\ \forall m\le n$.
The action
by $X$ leads to $X^n g^j\in L \Rightarrow X^{n+1} g^j\in
L$ except if
$n+j=0$. The classification is given by the possible choices we have
for each power in $g$. For every positive integer $j$ we can
choose whether or not to include the span of
$\{ X^n g^j|\forall n\}$ in $L$ and for
every non-positive
integer we can choose to include either the span of $\{ X^n
g^j|\forall n\}$
or just
$\{ X^n g^j|\forall n\le -j\}$ or neither. I.e.\ for positive
integers ($\mathbb{N}$) we have two choices while for non-positive (identified
with $\mathbb{N}_0$) ones we have three choices.
Clearly, the finite dimensional $L$ are those where we choose only to
include finitely many powers of $g$ and also only finitely many powers
of $X$. The latter is only possible for the non-positive powers
of $g$.
By identifying positive integers $n$ with powers $1-n$ of $g$, we
obtain a classification by finite subsets of $\mathbb{N}$.
(b) Irreducibility clearly corresponds to just including one power of $g$
in the finite dimensional case.
(c) The decomposition property is obvious from the discussion.
\end{proof}
\begin{cor}
\label{cor:uqbp_eclass}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ker\cou\subset\ensuremath{U_q(\lalg{b_+})}$ via
the left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
the set $3^{\mathbb{N}}\times2^{\mathbb{N}_0}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets
$I\subset\mathbb{N}\setminus\{1\}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n\ge 2$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{cor}
\begin{proof}
Only a small modification of lemma \ref{lem:uqbp_class} is
necessary. Elements of
the form $P(g)$ are replaced by elements of the form
$P(g)-P(1)$. Monomials with non-vanishing degree in $X$ are unchanged.
The choices for elements of degree $0$ in $g$ are reduced to either
including the span of
$\{ X^k |\forall k>0 \}$ in the crossed submodule or not. In
particular, the crossed submodule characterised by $\{1\}$ in lemma
\ref{lem:uqbp_class} is projected out.
\end{proof}
Differential calculi in the original sense of Woronowicz are
classified by corollary \ref{cor:cqbp_eclass} while from the quantum
tangent space
point of view the
classification is given by corollary \ref{cor:uqbp_eclass}.
In the finite dimensional case the duality is strict in the sense of a
one-to-one correspondence.
The infinite dimensional case on the other hand depends strongly on
the algebraic models we use for the function or enveloping
algebras. It is therefore not surprising that in the present purely
algebraic context the classifications are quite different in this
case. We will restrict ourselves to the finite dimensional
case in the following description of the differential calculi.
\begin{thm}
\label{thm:q_calc}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and
corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are
in one-to-one correspondence to
finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular
$\dim\Gamma=\dim L=\sum_{n\in I}n$.
(b) Coirreducible $\Gamma$ and irreducible $L$ correspond to
$\{n\}$ with $n\ge 2$ the dimension.
Such a $\Gamma$ has a
right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations
\begin{gather*}
\diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad
\diff g=(q^{n-1}-1)\eta_0 g\\
[a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\
[g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad
[X,\eta_i]_{q^{n-1-i}}=\begin{cases}
\eta_{i+1} & \text{if}\ i<n-1 \\
0 & \text{if}\ i=n-1
\end{cases}
\end{gather*}
hold, where $[a,b]_p := a b - p b a$. By choosing the dual basis on
the corresponding irreducible $L$ we obtain
the braided derivations
\begin{gather*}
\partial_i\no{f}=
\no{Q_{n-1-i,g} Q_{n-1-i,X} \frac{1}{[i]_q!} (\partial_{q,X})^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=
\no{Q_{n-1,g} Q_{n-1,X} f - f}
\end{gather*}
for $f\in \k(q)[X,g,g^{-1}]$ with normal ordering
$\k(q)[X,g,g^{-1}]\to \ensuremath{C_q(B_+)}$ given by \mbox{$g^n X^m\mapsto g^n X^m$}.
(c) Finite dimensional $\Gamma$ and $L$ decompose into direct sums of
coirreducible respectively irreducible ones.
In particular $\Gamma=\oplus_{n\in I}\Gamma^n$ and
$L=\oplus_{n\in I}L^n$ with $\Gamma^n$ and $L^n$ corresponding to $\{n\}$.
\end{thm}
\begin{proof}
(a) We observe that the classifications of lemma
\ref{lem:cqbp_class} and lemma \ref{lem:uqbp_class} or
corollary \ref{cor:cqbp_eclass} and corollary \ref{cor:uqbp_eclass}
are dual to each other in the finite (co){}dimensional case. More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in lemma \ref{lem:cqbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $I$ in lemma \ref{lem:uqbp_class}
and vice versa.
$\ensuremath{C_q(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For $I\subset\mathbb{N}\setminus\{1\}$ finite this descends to
$M$ corresponding to $(1,I\cup\{1\})$ in corollary
\ref{cor:cqbp_eclass} and $L$ corresponding to $I$ in corollary
\ref{cor:uqbp_eclass}.
For the dimension of $\Gamma$ observe
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) Coirreducibility (having no proper quotient) of $\Gamma$
clearly corresponds to maximality of $M$. The statement then follows
from parts (b) of corollaries
\ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass}. The formulas are
obtained by choosing the basis $\eta_0,\dots,\eta_{n-1}$ of
$\ker\cou/M$ as the equivalence classes of
\[(g-1)/(q^{n-1}-1),X,\dots,X^{n-1}\]
The dual basis of $L$ is then given by
\[g^{1-n}-1, X g^{1-n},\dots, q^{k(k-1)} \frac{1}{[k]_q!} X^k g^{1-n},
\dots,q^{(n-1)(n-2)} \frac{1}{[n-1]_q!} X^{n-1} g^{1-n}\]
(c) The statement follows from corollaries \ref{cor:cqbp_eclass} and
\ref{cor:uqbp_eclass} parts (c) with the observation
\[\ker\cou/M=\ker\cou/{\bigcap_{n\in I}}M^n
=\oplus_{n\in I}\ker\cou/M^n\]
\end{proof}
\begin{cor}
There is precisely one differential calculus on \ensuremath{C_q(B_+)}{} which is
natural in the sense that it
has dimension $2$.
It is coirreducible and obeys the relations
\begin{gather*}
[g,\diff X]=0\qquad [g,\diff g]_q=0\qquad
[X,\diff X]_q=0\qquad [X,\diff g]_q=(q-1)({\diff X}) g
\end{gather*}
with $[a,b]_q:=ab-qba$. In particular we have
\begin{gather*}
\diff\no{f} = {\diff g} \no{\partial_{q,g} f} + {\diff X}
\no{\partial_{q,X} f}\qquad\forall f\in \k(q)[X,g,g^{-1}]
\end{gather*}
\end{cor}
\begin{proof}
This is a special case of theorem \ref{thm:q_calc}.
The formulas follow from (b) with $n=2$.
\end{proof}
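For example, for $f=X^2$ the above formulas give $\partial_{q,X} X^2=(1+q)X$
and $\partial_{q,g} X^2=0$, hence $\diff(X^2)=(1+q)({\diff X})X$; the same
result follows from the Leibniz rule $\diff(X^2)=({\diff X})X+X\,\diff X$
together with the relation $[X,\diff X]_q=0$.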
\section{Classification in the Classical Limit}
\label{sec:class}
In this section we give the complete classification of differential
calculi and quantum tangent spaces in the classical case of \ensuremath{C(B_+)}{}
along the lines of the previous section.
We pay particular
attention to the relation to the $q$-deformed setting.
The classical limit \ensuremath{C(B_+)}{} of the quantum group \ensuremath{C_q(B_+)}{} is
simply obtained by substituting the parameter $q$ with $1$.
The
classification of left crossed submodules in part (a) of lemma
\ref{lem:cqbp_class} remains
unchanged, as one may check by going through the proof.
In particular, we get a correspondence of crossed modules in the
$q$-deformed setting with crossed modules in the
classical setting
as a map of
pairs $(P,I)\mapsto (P,I)$
that converts polynomials $\k(q)[g]$ to polynomials $\k[g]$ (if
defined) and leaves
sets $I$ unchanged. This is one-to-one in the finite
dimensional case.
However, we did use the distinctness of powers of $q$ in parts (b) and
(c) of lemma
\ref{lem:cqbp_class} and have to account for this change. The
only place where we used it was in observing that
factors $1-q^j g$ have no common divisors for distinct $j$. This was
crucial to conclude the maximality (b) of certain finite codimensional
crossed submodules and the intersection property (c).
Now, all those factors become $1-g$.
\begin{cor}
\label{cor:cbp_class}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ensuremath{C(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
In the restriction to $\ker\cou\subset\ensuremath{C(B_+)}$ corresponding to corollary
\ref{cor:cqbp_eclass} we observe another difference to the
$q$-deformed setting.
Since the condition for a crossed submodule to lie in $\ker\cou$ is exactly
to have a factor $1-g$ in the $X$-free monomials, this condition may now
be satisfied more easily. If the characterising polynomial does not
contain this factor, it is now sufficient for the characterising integer
set $I$ to be non-empty; it need not contain $1$. Consequently,
the map $(P,I)\mapsto (P,I)$ does not reach all crossed submodules now.
\begin{cor}
\label{cor:cbp_eclass}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in corollary \ref{cor:cbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $I$ non-empty.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
Let us now turn to quantum tangent spaces on \ensuremath{U(\lalg{b_+})}{}. Here, the process
to go from the $q$-deformed setting to the classical one is not quite
so straightforward.
\begin{lem}
\label{lem:ubp_class}
Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ensuremath{U(\lalg{b_+})}$ via the left
adjoint action
and left regular coaction are
in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and
$I\subset\mathbb{N}$ finite. $\dim L<\infty$ iff $l=0$. In particular $\dim
L=\sum_{n\in I}n$ if $l=0$.
\end{lem}
\begin{proof}
The left adjoint action takes the form
\[
X\triangleright X^n H^m = X^{n+1}(H^m-(H+1)^m) \qquad
H\triangleright X^n H^m = n X^n H^m
\]
while the coaction is
\[
\cop(X^n H^m) = \sum_{i=0}^n \sum_{j=0}^m \binom{n}{i} \binom{m}{j}
X^i H^j\otimes X^{n-i} H^{m-j}
\]
Let $L$ be a crossed submodule invariant under the action and coaction.
The (repeated) action of $H$ separates elements by degree in $X$. It is
therefore sufficient to consider elements of the form $X^n P(H)$, where
$P$ is a polynomial.
By acting with $X$ on an element $X^n P(H)$ we obtain
$X^{n+1}(P(H)-P(H+1))$. Subsequently applying the coaction and
projecting on the left hand side of the tensor product onto $X$ (in
the basis $X^i H^j$ of \ensuremath{U(\lalg{b_+})})
leads to the element $X^n (P(H)-P(H+1))$. Now the degree of
$P(H)-P(H+1)$ is exactly the degree of $P(H)$ minus $1$. Thus we have
polynomials $X^n P_i(H)$ of any degree $i=\deg(P_i)\le \deg(P)$ in $L$
by induction. In particular, $X^n H^m\in L$ for all
$m\le\deg(P)$. It is thus sufficient to consider elements of
the form $X^n H^m$. Given such an element, the coaction generates all
elements of the form $X^i H^j$ with $i\le n, j\le m$.
For given $n$, the characterising datum is the maximal $m$ so
that $X^n H^m\in L$. Due to the coaction this cannot decrease
with decreasing $n$ and due to the action of $X$ this can decrease at
most by $1$ when increasing $n$ by $1$. This leads to the
classification given. For $l\in\mathbb{N}_0$ and $I=\{n_1,\dots,n_m\}\subset\mathbb{N}$ finite, the
corresponding crossed submodule
is generated by
\begin{gather*}
X^{n_m-1} H^{l+m-1}, X^{n_m+n_{m-1}-1} H^{l+m-2},\dots,
X^{(\sum_i n_i)-1} H^{l}\\
\text{and}\qquad
X^{(\sum_i n_i)+k} H^{l-1}\quad \forall k\ge 0\quad\text{if}\quad l>0
\end{gather*}
as a crossed module.
\end{proof}
For the transition from the $q$-deformed (lemma
\ref{lem:uqbp_class}) to the classical case we
observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$
different integers $s_i\in\mathbb{Z}$ maps to the space spanned by
$1, H, \dots, H^{m-1}$ in the
prescription of the classical limit (as described in section
\ref{sec:intro_limits}). I.e.\ the classical crossed submodule
characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes
from a crossed submodule characterised by this same $I$ and additionally $l$
other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In
particular, we have a one-to-one correspondence in the finite
dimensional case.
To formulate the analogue of corollary \ref{cor:uqbp_eclass} for the
classical case is essentially straightforward now. However, as for
\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed
setting. This is due to the degeneracy introduced by forgetting the
powers of $g$ and just retaining the number of different powers.
\begin{cor}
\label{cor:ubp_eclass}
(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules
$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the
left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$
or $I\neq\emptyset$.
$\dim L<\infty$ iff $l=0$. In particular $\dim
L=(\sum_{n\in I}n)-1$ if $l=0$.
\end{cor}
As in the $q$-deformed setting, we give a description of the finite
dimensional differential calculi where we have a strict duality to
quantum tangent spaces.
\begin{prop}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and
finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are
in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.
In particular $\dim\Gamma=\dim L=(\sum_{n\in I} n)-1$.
The $\Gamma$ with $1\in I$ are in
one-to-one correspondence to the finite dimensional
calculi and quantum tangent spaces of the $q$-deformed setting
(theorem \ref{thm:q_calc}(a)).
(b) The differential calculus $\Gamma$ of dimension $n\ge 2$
corresponding to the
coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right
invariant
basis $\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad
\diff g=\eta_0 g\\
[g, \eta_i]=0\ \forall i \qquad
[X, \eta_i]=\begin{cases}
0 & \text{if}\ i=0\ \text{or}\ i=n-1\\
\eta_{i+1} & \text{if}\ 0<i<n-1
\end{cases}
\end{gather*}
hold. The braided derivations obtained from the dual basis of the
corresponding $L$ are
given by
\begin{gather*}
\partial_i f=\frac{1}{i!}
\left(\frac{\partial}{\partial X}\right)^i f\qquad
\forall i\ge 1\\
\partial_0 f=\left(X \frac{\partial}{\partial X}+
 g \frac{\partial}{\partial g}\right) f
\end{gather*}
for $f\in\ensuremath{C(B_+)}$.
(c) The differential calculus of dimension $n-1$
corresponding to the
one in (b) with $1$ removed from the characterising set is
the same as the one above, except that we set $\eta_0=0$ and
$\partial_0=0$.
\end{prop}
\begin{proof}
(a) We observe that the classifications of corollary
\ref{cor:cbp_class} and lemma \ref{lem:ubp_class} or
corollary \ref{cor:cbp_eclass} and corollary \ref{cor:ubp_eclass}
are dual to each other in the finite (co)dimensional case.
More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in corollary \ref{cor:cbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $(0,I)$ in lemma \ref{lem:ubp_class}
and vice versa.
$\ensuremath{C(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For non-empty $I$ this descends to
$M$ corresponding to $(1,I)$ in corollary
\ref{cor:cbp_eclass} and $L$ corresponding to $(0,I)$ in corollary
\ref{cor:ubp_eclass}.
For the dimension of $\Gamma$ note
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) For $I=\{1,n\}$ we choose in
$\ker\cou\subset\ensuremath{C(B_+)}$ the basis $\eta_0,\dots,\eta_{n-1}$ as the
equivalence classes of
$g-1,X,\dots,X^{n-1}$. The dual basis in $L$
is then $H,X,\dots,\frac{1}{k!}X^k,\dots,\frac{1}{(n-1)!}X^{n-1}$.
This leads to the
formulas given.
(c) For $I=\{n\}$ we get the same as in (b) except that $\eta_0$ and
$\partial_0$ disappear.
\end{proof}
The classical commutative calculus is the special case of (b) with
$n=2$. It is the only calculus of dimension $2$ with
$\diff g\neq 0$. Note that it is not coirreducible.
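Explicitly, setting $n=2$ in the formulas of (b) gives the relations
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad \diff g=\eta_0 g\\
[g,\eta_0]=[g,\eta_1]=[X,\eta_0]=[X,\eta_1]=0
\end{gather*}
together with the derivations $\partial_1 f=\frac{\partial}{\partial X} f$ and
$\partial_0 f=\left(X\frac{\partial}{\partial X}
+g\frac{\partial}{\partial g}\right) f$, i.e.\ all commutators between the
generators and the basis one-forms vanish, as expected for the classical
commutative calculus.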
\section{The Dual Classical Limit}
\label{sec:dual}
We proceed in this section to the more interesting point of view where
we consider the classical algebras, but with their roles
interchanged. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as the ``function algebra''
and \ensuremath{C(B_+)}{} as the ``enveloping algebra''. Due to the self-duality of
\ensuremath{U_q(\lalg{b_+})}{}, we can again view the differential calculi and quantum tangent
spaces as classical limits of the $q$-deformed setting investigated in
section \ref{sec:q}.
In this dual setting the bicovariance constraint for differential
calculi becomes much
weaker. In particular, the adjoint action on a classical function
algebra is trivial due to commutativity and the adjoint coaction on a
classical enveloping algebra is trivial due to cocommutativity.
In effect, the correspondence with the
$q$-deformed setting is much weaker than in the ordinary case of
section \ref{sec:class}.
There are many more differential
calculi and quantum tangent spaces than in the $q$-deformed setting.
We will not attempt to classify all of them in the following but
essentially
content ourselves with those objects coming from the $q$-deformed setting.
\begin{lem}
\label{lem:cbp_dual}
Left \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ via the left regular coaction are
$\mathbb{Z}$-graded subspaces of \ensuremath{C(B_+)}{} with $|X^n g^m|=n+m$,
stable under formal derivation in $X$.
By choosing any ordering in \ensuremath{C_q(B_+)}{}, left crossed submodules via left
regular action and adjoint coaction are in one-to-one correspondence
to certain subcomodules of \ensuremath{C(B_+)}{} by setting $q=1$. Direct sums
correspond to direct sums.
This descends to $\ker\cou\subset\ensuremath{C(B_+)}$ by the projection $x\mapsto
x-\cou(x) 1$.
\end{lem}
\begin{proof}
The coproduct on \ensuremath{C(B_+)}{} is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binom{n}{r}
X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction.
Projecting on the left hand side of the tensor product onto $g^l$ in a
basis $X^n g^k$, we
observe that coacting on an element
$\sum_{n,k} a_{n,k} X^n g^k$ we obtain elements
$\sum_n a_{n,l-n} X^n g^{l-n}$ for all $l$.
I.e.\ elements of the form
$\sum_n b_n X^n g^{l-n}$ lie
separately in a subcomodule and it is
sufficient to consider such elements. Writing the coaction
on such an element as
\[\sum_t \frac{1}{t!} X^t g^{l-t}\otimes \sum_n b_n
\frac{n!}{(n-t)!} X^{n-t} g^{l-n}\]
we see that the coaction generates all formal derivatives in $X$
of this element. This gives us the classification: \ensuremath{C(B_+)}-subcomodules
$\subseteq\ensuremath{C(B_+)}$ under the left regular coaction are $\mathbb{Z}$-graded
subspaces with $|X^n g^m|=n+m$, stable under formal derivation in
$X$ given by $X^n
g^m \mapsto n X^{n-1} g^m$.
The correspondence with the \ensuremath{C_q(B_+)} case follows from
the trivial observation
that the coproduct of \ensuremath{C(B_+)}{} is the same as that of \ensuremath{C_q(B_+)}{} with $q=1$.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
\begin{lem}
\label{lem:ubp_dual}
The process of obtaining the classical limit \ensuremath{U(\lalg{b_+})}{} from \ensuremath{U_q(\lalg{b_+})}{} is
well defined for subspaces and sends crossed \ensuremath{U_q(\lalg{b_+})}-submodules
$\subset\ensuremath{U_q(\lalg{b_+})}$ by
regular action and adjoint coaction to \ensuremath{U(\lalg{b_+})}-submodules $\subset\ensuremath{U(\lalg{b_+})}$
by regular
action. This map is injective in the finite codimensional
case. Intersections and codimensions are preserved in this case.
This descends to $\ker\cou$.
\end{lem}
\begin{proof}
To obtain the classical limit of a left ideal it is enough to
apply the limiting process (as described in section
\ref{sec:intro_limits}) to the
module generators (we can forget the additional comodule
structure). On the one hand,
any element generated by left multiplication with polynomials in
$g$ corresponds to some element generated by left multiplication with a
polynomial in $H$, that is, there will be no more generators in the
classical setting. On the other hand, left multiplication by a
polynomial in $H$ comes
from left multiplication by the same polynomial in $g-1$, that is,
there will be no fewer generators.
The maximal left crossed \ensuremath{U_q(\lalg{b_+})}-submodule $\subseteq\ensuremath{U_q(\lalg{b_+})}$
by left multiplication and adjoint coaction of
codimension $n$ ($n\ge 1$) is generated as a left ideal by
$\{1-q^{1-n}g,X^n\}$ (see lemma
\ref{lem:cqbp_class}). Applying the limiting process to this
leads to the
left ideal of \ensuremath{U(\lalg{b_+})}{} (which is not maximal for $n\neq 1$) generated by
$\{H+n-1,X^n\}$ having also codimension $n$.
More generally, the picture given for arbitrary finite codimensional left
crossed modules of \ensuremath{U_q(\lalg{b_+})}{} in terms of generators with respect to
polynomials in $g,g^{-1}$ in lemma \ref{lem:cqbp_class} carries over
by replacing factors
$1-q^{1-n}g$ with factors $H+n-1$ leading to generators with
respect to polynomials in $H$. In particular,
intersections go to intersections since the distinctness of
the factors for different $n$ is conserved.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
We are now in a position to give a detailed description of the
differential calculi induced from the $q$-deformed setting by the
limiting process.
\begin{prop}
(a) Certain finite dimensional
differential calculi $\Gamma$ on \ensuremath{U(\lalg{b_+})}{} and quantum tangent spaces $L$
on \ensuremath{C(B_+)}{}
are in one-to-one correspondence to finite dimensional differential
calculi on \ensuremath{U_q(\lalg{b_+})}{} and quantum
tangent spaces on \ensuremath{C_q(B_+)}{}. Intersections correspond to intersections.
(b) In particular,
$\Gamma$ and $L$ corresponding to coirreducible differential calculi
on \ensuremath{U_q(\lalg{b_+})}{} and
irreducible quantum tangent spaces on \ensuremath{C_q(B_+)}{} via the limiting process
are given as follows:
$\Gamma$ has a right invariant basis
$\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1 \qquad \diff H=(1-n)\eta_0 \\
[H, \eta_i]=(1-n+i)\eta_i\quad\forall i\qquad
[X, \eta_i]=\begin{cases}
\eta_{i+1} & \text{if}\ \ i<n-1\\
0 & \text{if}\ \ i=n-1
\end{cases}
\end{gather*}
hold. The braided derivations corresponding to the dual basis of
$L$ are given by
\begin{gather*}
\partial_i\no{f}=\no{T_{1-n+i,H}
\frac{1}{i!}\left(\frac{\partial}{\partial X}\right)^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=\no{T_{1-n,H} f - f}
\end{gather*}
for $f\in\k[X,H]$
with the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ via $H^n X^m\mapsto H^n X^m$.
\end{prop}
\begin{proof}
(a) The strict duality between \ensuremath{C(B_+)}-subcomodules $L\subseteq\ker\cou$
given by lemma \ref{lem:cbp_dual} and corollary \ref{cor:uqbp_eclass}
and \ensuremath{U(\lalg{b_+})}-modules $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$ with $M$ given by lemma
\ref{lem:ubp_dual} and
corollary \ref{cor:cqbp_eclass} can be checked explicitly.
It is essentially due to mutual annihilation of factors $H+k$ in
\ensuremath{U(\lalg{b_+})}{} with elements $g^k$ in \ensuremath{C(B_+)}{}.
(b) $L$ is generated by
$\{g^{1-n}-1,Xg^{1-n},\dots,
X^{n-1}g^{1-n}\}$ and
$M$ is generated by $\{H(H+n-1),X(H+n-1),X^n \}$.
The formulas are obtained by denoting with
$\eta_0,\dots,\eta_{n-1}$ the equivalence classes of
$H/(1-n),X,\dots,X^{n-1}$ in $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$.
The dual basis of $L$ is then
\[g^{1-n}-1,X g^{1-n},
\dots,\frac{1}{(n-1)!}X^{n-1}
g^{1-n}\]
\end{proof}
In contrast to the $q$-deformed setting and to the usual classical
setting the many freedoms in choosing a calculus leave us with many
$2$-dimensional calculi. It is not obvious which one we should
consider to be the ``natural'' one. Let us first look at the
$2$-dimensional calculus coming from the $q$-deformed
setting as described in (b). The relations become
\begin{gather*}
[\diff H, a]=\diff a\qquad [\diff X, a]=0\qquad\forall a\in\ensuremath{U(\lalg{b_+})}\\
\diff\no{f} =\diff H \no{\fdiff_{1,H} f}
+ \diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
for $f\in\k[X,H]$.
We might want to consider calculi which are closer to the classical
theory in the sense that derivatives are not finite differences but
usual derivatives. Let us therefore demand
\[\diff P(H)=\diff H \frac{\partial}{\partial H} P(H)\qquad
\text{and}\qquad
\diff P(X)=\diff X \frac{\partial}{\partial X} P(X)\]
for polynomials $P$ and ${\diff X}\neq 0$ and ${\diff H}\neq 0$.
\begin{prop}
\label{prop:nat_bp}
There is precisely one differential calculus of dimension $2$ meeting
these conditions. It obeys the relations
\begin{gather*}
[a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X\\
\diff \no{f} =\diff H \no{\frac{\partial}{\partial H} f}
+\diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
where the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ is given by
$X^n H^m\mapsto X^n H^m$.
\end{prop}
\begin{proof}
Let $M$ be the left ideal corresponding to the calculus. It is easy to
see that for a primitive element $a$ the classical derivation condition
corresponds to $a^2\in M$ and $a\notin M$. In our case $X^2,H^2\in
M$. If we take the
ideal generated by these two elements we obtain an ideal of
$\ker\cou$ of codimension $3$. Now, it is sufficient without loss of
generality to add a generator of the form $\alpha H+\beta X+\gamma
XH$. $\alpha$ and $\beta$ must then be zero in order not
to generate $X$ or $H$ in $M$.
I.e.\ $M$ is generated by $H^2,
XH, X^2$. The relations stated follow.
\end{proof}
\section{Remarks on $\kappa$-Minkowski Space and Integration}
\label{sec:kappa}
There is a straightforward generalisation of \ensuremath{U(\lalg{b_+})}.
Let us define the Lie algebra $\lalg b_{n+}$ as generated by
$x_0,\dots, x_{n-1}$ with relations
\[ [x_0,x_i]=x_i\qquad [x_i,x_j]=0\qquad\forall i,j\ge 1\]
Its enveloping algebra \ensuremath{U(\lalg{b}_{n+})}{} is nothing but (rescaled) $\kappa$-Minkowski
space as introduced in \cite{MaRu}. In this section we make some
remarks about its intrinsic geometry.
We have a surjective Lie algebra
homomorphism $b_{n+}\to b_+$ given by
$x_0\mapsto H$ and $x_i\mapsto X$.
This is an isomorphism for $n=2$. The surjective Lie algebra
homomorphism extends to a surjective homomorphism of enveloping
algebras $\ensuremath{U(\lalg{b}_{n+})}\to \ensuremath{U(\lalg{b_+})}$ in the obvious way. This gives rise
to an injective map from the set of submodules of \ensuremath{U(\lalg{b_+})}{} to the set of
submodules of \ensuremath{U(\lalg{b}_{n+})}{} by taking the pre-image. In
particular this induces an injective
map from the set of differential calculi on \ensuremath{U(\lalg{b_+})}{} to the set of
differential calculi on \ensuremath{U(\lalg{b}_{n+})}{} which are invariant under permutations
of the $x_i$, $i\ge 1$.
\begin{cor}
\label{cor:nat_bnp}
There is a natural $n$-dimensional differential calculus on \ensuremath{U(\lalg{b}_{n+})}{}
induced from the one considered in proposition
\ref{prop:nat_bp}.
It obeys the relations
\begin{gather*}
[a,\diff x_0]=0\quad\forall a\in \ensuremath{U(\lalg{b}_{n+})}\qquad [x_i,\diff x_j]=0
\quad [x_0,\diff x_i]=\diff x_i\qquad\forall i,j\ge 1\\
\diff \no{f} =\sum_{\mu=0}^{n-1}\diff x_{\mu}
\no{\frac{\partial}{\partial x_{\mu}} f}
\end{gather*}
where the normal ordering is given by
\[\k[x_0,\dots,x_{n-1}]\to \ensuremath{U(\lalg{b}_{n+})}\quad\text{via}\quad
x_{n-1}^{m_{n-1}}\cdots
x_0^{m_0}\mapsto x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\]
\end{cor}
\begin{proof}
The calculus is obtained from the ideal generated by
\[x_0^2,x_i x_j, x_i x_0\qquad\forall i,j\ge 1\]
being the pre-image of the ideal generated by
$H^2,X^2,XH$ in \ensuremath{U(\lalg{b_+})}{}.
\end{proof}
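For example, for $n=2$ (where $\lalg b_{2+}\cong\lalg b_+$) the relations of
corollary \ref{cor:nat_bnp} read, under the identification $x_0\leftrightarrow H$
and $x_1\leftrightarrow X$,
\[ [a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X \]
recovering precisely the calculus of proposition \ref{prop:nat_bp}.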
Let us try to push the analogy with the commutative case further and
take a look at the notion of integration. The natural way to encode
the condition of translation invariance from the classical context
in the quantum group context
is given by the condition
\[(\int\otimes\id)\circ\cop a=1 \int a\qquad\forall a\in A\]
which defines a right integral on a quantum group $A$
\cite{Sweedler}.
(Correspondingly, we have the notion of a left integral.)
Let us
formulate a slightly
weaker version of this equation
in the context of a Hopf algebra $H$ dually paired with
$A$. We write
\[\int (h-\cou(h))\triangleright a = 0\qquad \forall h\in H, a\in A\]
where the action of $H$ on $A$ is the coregular action
$h\triangleright a = a_{(1)}\langle a_{(2)}, h\rangle$
given by the pairing.
In the present context we set $A=\ensuremath{U(\lalg{b}_{n+})}$ and $H=\ensuremath{C(B_{n+})}$. We define the
latter as a generalisation of \ensuremath{C(B_+)}{} with commuting
generators $g,p_1,\dots,p_{n-1}$ and coproducts
\[\cop p_i=p_i\otimes 1+g\otimes p_i\qquad \cop g=g\otimes g\]
This can be identified (upon rescaling) as the momentum sector of the
full $\kappa$-Poincar\'e algebra (with $g=e^{p_0}$).
The pairing is the natural extension of (\ref{eq:pair_class}):
\[\langle x_{n-1}^{m_{n-1}}\cdots x_1^{m_1} x_0^{k},
p_{n-1}^{r_{n-1}}\cdots p_1^{r_1} g^s\rangle
= \delta_{m_{n-1},r_{n-1}}\cdots\delta_{m_1,r_1} m_{n-1}!\cdots m_1!
s^k\]
The resulting coregular
action is conveniently expressed as (see also \cite{MaRu})
\[p_i\triangleright\no{f}=\no{\frac{\partial}{\partial x_i} f}\qquad
g\triangleright\no{f}=\no{T_{1,x_0} f}\]
with $f\in\k[x_0,\dots,x_{n-1}]$.
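For example, reading $T_{1,x_0}$ as the shift operator $x_0\mapsto x_0+1$, one has
\[p_1\triangleright\no{x_1 x_0^2}=\no{x_0^2}\qquad
g\triangleright\no{x_1 x_0^2}=\no{x_1 (x_0+1)^2}\]
so that $g$ implements a finite translation by one unit in $x_0$ while the
$p_i$ act as infinitesimal translations in the $x_i$.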
Due to cocommutativity, the notions of left and right integral
coincide. The invariance conditions for integration become
\[\int \no{\frac{\partial}{\partial x_i} f}=0\quad
\forall i\in\{1,\dots,n-1\}
\qquad\text{and}\qquad \int \no{\fdiff_{1,x_0} f}=0\]
The condition on the left is familiar and states the invariance under
infinitesimal translations in the $x_i$. The condition on the right states the
invariance under integer translations in $x_0$. However, we should
remember that we use a certain algebraic model of \ensuremath{C(B_{n+})}{}. We might add,
for example, a generator $p_0$
to \ensuremath{C(B_{n+})}{}
that is dual to $x_0$ and behaves
as the ``logarithm'' of $g$, i.e.\ acts as an infinitesimal
translation in $x_0$. We then have the condition of infinitesimal
translation invariance
\[\int \no{\frac{\partial}{\partial x_{\mu}} f}=0\]
for all $\mu\in\{0,1,\dots,{n-1}\}$.
In the present purely algebraic context these conditions do not make
much sense. In fact they would force the integral to be zero on the
whole algebra. This is not surprising, since we are dealing only with
polynomial functions which would not be integrable in the classical
case either.
In contrast, if we had for example the algebra of smooth functions
in two real variables, the conditions just characterise the usual
Lebesgue integral (up to normalisation).
Let us assume $\k=\mathbb{R}$ and suppose that we have extended the normal
ordering vector
space isomorphism $\mathbb{R}[x_0,\dots,x_{n-1}]\cong \ensuremath{U(\lalg{b}_{n+})}$ to a vector space
isomorphism of some sufficiently large class of functions on $\mathbb{R}^n$ with a
suitable completion $\hat{U}(\lalg{b_{n+}})$ in a functional
analytic framework (embedding \ensuremath{U(\lalg{b}_{n+})}{} in some operator algebra on a
Hilbert space). It is then natural to define the integration on
$\hat{U}(\lalg{b_{n+}})$ by
\[\int \no{f}=\int_{\mathbb{R}^n} f\ dx_0\cdots dx_{n-1}\]
where the right hand side is just the usual Lebesgue integral in $n$
real variables $x_0,\dots,x_{n-1}$. This
integral is unique (up to normalisation) in
satisfying the invariance conditions since, as we have seen,
these correspond
just to the usual translation invariance in the classical case via normal
ordering, for which the Lebesgue integral is the unique solution.
It is also the $q\to 1$ limit of the translation invariant integral on
\ensuremath{U_q(\lalg{b_+})}{} obtained in \cite{Majid_qreg}.
We see that the natural differential calculus in corollary
\ref{cor:nat_bnp} is
compatible with this integration in that the appearing braided
derivations are exactly the actions of the translation generators
$p_{\mu}$. However, we should stress that this calculus is not
covariant under the full $\kappa$-Poincar\'e algebra, since it was
shown in \cite{GoKoMa} that in $n=4$ there is no such
calculus of dimension $4$. Our results therefore indicate a new
intrinsic approach to $\kappa$-Minkowski space that allows a
bicovariant
differential calculus of dimension $4$ and a unique translation
invariant integral by normal ordering and Lebesgue integration.
\section*{Acknowledgements}
I would like to thank S.~Majid for proposing this project,
and for fruitful discussions during the preparation of this paper.
\section{Introduction}
Continuous Engineering (CE) practices,
such as Continuous Integration (CI) and Continuous Deployment (CD),
are gaining prominence in software engineering,
as they help streamline and optimize the way software is built, tested and shipped.
The most salient advantage of CE is the tighter feedback loops:
CE practices help developers build and test their software more frequently,
and make software releases less brittle by enabling smaller, more incremental releases.
Nevertheless, a frequently reported barrier for success is the need to effectively analyze
the data that results from the numerous build and test
runs~\cite{Laukkanen2017,Hilton2017,Shahin2017,Debbiche2014,Olsson2012}.
One evident example of this is the handling and
analysis of results from complex end-to-end integration tests
which we focus on in this paper:
CE practices make it easier to run such end-to-end tests,
which include system integration and deployment to production hardware,
and they are critical for ensuring the quality of the end product.
However, since these end-to-end tests by their nature can fail for multiple
reasons, not least in the sense that new product code can make the tests
fail in new ways, it is critical to rapidly diagnose these failures.
In this paper we concern ourselves with how to rapidly analyze a set
of logs resulting from complex CE tasks\footnote{~For simplicity, and without loss of generality,
we will refer to these CE tasks as ``integration tests'' or ``tests'' throughout the paper,
though we acknowledge that they include more than just testing,
such as building the system and deploying it on hardware in a test or staging environment,
and failures can occur in any of these phases.
The proposed approach aims to cover all these situations,
and is evaluated on real-life logs capturing everything from building the system,
to deploying it on production hardware,
and running complex integration and interaction scenarios.}
where the overall outcome of the task (i.e. 'fail' or 'pass') is known,
but where analysts must consult the resulting logs to fully diagnose why the failures occurred.
Since these logs can get large and unwieldy, we
develop a tool that automatically suggests which segments in the logs
are most likely relevant for troubleshooting purposes.
Our method gives each event in the log an interestingness score based
on the overall event frequencies in the test result set: The log
events are in turn clustered based on these scores, and the event
clusters are presented to the user in decreasing order of overall
interestingness. The goal is to enable users to find all relevant
diagnostic information in the first presented event cluster, while having the
option of retrieving additional clusters if needed. An
additional benefit of our method is that the extracted events can help
identify commonly occurring patterns that are symptomatic for specific
errors. Future logs that exhibit the same characteristics can then be
automatically classified as having symptoms of that error.
\head{Contributions} We present Spectrum-Based Log Diagnosis (SBLD), a method for helping developers quickly find the
most relevant segments of a log. Using data from \CiscoNorway{an
industrial partner}, we empirically evaluate SBLD by investigating the following
three questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Overview}
The rest of the paper is structured as follows: Section~\ref{sec:approach}
explains SBLD and the methodology underlying its event ranking
procedures. Sections~\ref{sec:rqs} and~\ref{sec:expdesign} motivate our research questions
and empirical design. We report and discuss our results in
Section~\ref{sec:resdiscuss}. Section~\ref{sec:relwork} surveys related work,
and we discuss threats to validity in Section~\ref{sec:ttv} before concluding
in Section~\ref{sec:conclusion}.
%
\section{Approach}
\label{sec:approach}
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{overview.pdf}
\vspace*{-2ex}
\caption{A visual overview of our approach.}
\label{fig:approach}
\end{figure}
SBLD takes a set of log files from test failures, a set of log files from test successes, and a singular log file from a test failure called the \emph{target log} that the user wants analyzed and produces a list of segments from the target log file that are likely relevant for understanding why the corresponding test run failed.
In the following we explain the workings of SBLD in a stepwise
manner. At each step, we present the technical background needed to
understand how SBLD accomplishes its task. A visual overview of SBLD is
shown in Figure \ref{fig:approach}.
\head{Prerequisites}
First of all, SBLD requires access to a set of log files from failing test runs and a set of log files from successful test runs.
For brevity, we will refer to log files from failing test runs as 'failing logs',
and log files from successful test runs as 'passing logs'.%
\footnote{~Note that we explicitly assume that the outcome of each run is known;
This work is not concerned with determining whether the run was a failure or a success,
but rather with helping identify why the failing runs failed.}
We also require a programmatic way of segmenting each log file
into individually meaningful components. For the dataset used in this
paper these components are \emph{events} in the form of blocks of text
preceded by a date and a time-stamp in a predictable format. Lastly,
we require that run-time specific information such as timestamps,
dynamically generated IP addresses, check-sums and so on are removed
from the logs and replaced with standardized text. We refer to the process of
enforcing these requirements and delineating the log into events as
the \emph{abstraction} step. This enables SBLD to treat events
like ``2019-04-05 19:19:22.441 CEST: Alice calls Bob'' and ``2019-04-07
13:12:11.337 CEST: Alice calls Bob'' as two instances of the same
generic event "Alice calls Bob". The appropriate degree of abstraction
and how to meaningfully delineate a log will be context-dependent
and thus we require the user to perform these steps before using SBLD.
In the current paper we use an abstraction mechanism
and dataset generously provided by \CiscoNorway{our industrial partner}.
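For illustration, a minimal sketch of such an abstraction step might look as
follows; the regular expressions and helper names below are ours and purely
hypothetical, the actual mechanism used in this paper is the one supplied by
\CiscoNorway{our industrial partner}.
\begin{verbatim}
import re

# Events start at a timestamped line; run-specific data is replaced
# by standardized tokens. All patterns here are illustrative only.
TIMESTAMP = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \w+: ')
RUNTIME_NOISE = [
    (re.compile(r'\d{1,3}(\.\d{1,3}){3}'), '<IP>'),     # IP addresses
    (re.compile(r'\b[0-9a-f]{32,64}\b'), '<CHECKSUM>'), # hex checksums
]

def abstract(log_text):
    """Split a raw log into abstracted events, preserving their order."""
    events, current = [], []
    for line in log_text.splitlines():
        if TIMESTAMP.match(line):                # a new event starts here
            if current:
                events.append('\n'.join(current))
            current = [TIMESTAMP.sub('', line)]  # drop the timestamp
        else:
            current.append(line)
    if current:
        events.append('\n'.join(current))
    cleaned = []
    for event in events:
        for pattern, token in RUNTIME_NOISE:
            event = pattern.sub(token, event)
        cleaned.append(event)
    return cleaned
\end{verbatim}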
\renewcommand{\Ncf}{\ensuremath{\text{N}_\text{FI}}} %
\renewcommand{\Nuf}{\ensuremath{\text{N}_\text{FE}}} %
\renewcommand{\Ncs}{\ensuremath{\text{N}_\text{PI}}} %
\renewcommand{\Nus}{\ensuremath{\text{N}_\text{PE}}} %
\head{Computing coverage and event relevance} SBLD requires an assumption about what makes an event \emph{relevant}
and a method for computing this relevance. Our method takes inspiration
from Spectrum-Based Fault Localization (SBFL) in which the suspiciousness
or fault-proneness of a program statement is treated as a function of
the number of times the statement was activated in a failing test case,
combined with the number of times it is skipped in a passing test case~\cite{Jones2002,Abreu2007,Abreu2009}.
The four primitives that need to be computed are shown on the right-hand side in Table~\ref{table:measures}.
We treat each abstracted event as a statement and study their occurrences
in the logs like Fault Localization tracks the activation of statements in test cases.
We compute the analysis primitives by devising a binary
\emph{coverage matrix} whose columns represent every unique event
observed in the set of failing and successful logs while each row $r$
represents a log and tracks whether the event at column $c$ occurred in
log $r$ (1), or not (0), as shown in Figure~\ref{fig:approach}.
By computing these primitives, we can rank each event by using an
\emph{interestingness measure} (also referred to as ranking
metric, heuristic, or similarity coefficient~\cite{Wong2016}).
The choice of interestingness measure
is ultimately left to the user, as these are context dependent and
there is no generally optimal choice of interestingness measure~\cite{Yoo2014}.
In this paper we consider a
selection of nine interestingness measures prominent in the literature
and a simple metric that emphasizes the events that exclusively occur
in failing logs in the spirit of the \emph{union model} discussed
by Renieres et al.~\cite{renieres2003:fault}. We
report on the median performance of these interestingness measures with the intention of providing a
representative, yet unbiased, result. The ten measures considered are
precisely defined in Table~\ref{table:measures}.
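As an illustration, the coverage matrix, the four primitives and two of the
measures can be computed along the following lines; this is a simplified
sketch in which the function and variable names are ours, not part of any
existing tooling.
\begin{verbatim}
import numpy as np

def coverage_matrix(logs, vocabulary):
    """Binary matrix: one row per log, one column per unique event."""
    index = {event: col for col, event in enumerate(vocabulary)}
    matrix = np.zeros((len(logs), len(vocabulary)), dtype=int)
    for row, log in enumerate(logs):     # log = list of abstracted events
        for event in set(log):
            matrix[row, index[event]] = 1
    return matrix

def primitives(fail_matrix, pass_matrix):
    """Per-event counts N_FI, N_FE, N_PI, N_PE (cf. the notation table)."""
    n_fi = fail_matrix.sum(axis=0)
    n_fe = fail_matrix.shape[0] - n_fi
    n_pi = pass_matrix.sum(axis=0)
    n_pe = pass_matrix.shape[0] - n_pi
    return n_fi, n_fe, n_pi, n_pe

def tarantula(n_fi, n_fe, n_pi, n_pe):
    # assumes at least one failing and one passing log
    fail_ratio = n_fi / (n_fi + n_fe)
    pass_ratio = n_pi / (n_pi + n_pe)
    return fail_ratio / (fail_ratio + pass_ratio)

def failed_only(n_fi, n_fe, n_pi, n_pe):
    # 1 if the event never occurs in a passing log, 0 otherwise
    return (n_pi == 0).astype(float)
\end{verbatim}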
\begin{table*}
\centering
\begin{tabular}{c@{\hspace{10mm}}c}
{\renewcommand{\arraystretch}{1.7} %
\begin{tabular}{lc}
\toprule
measure & formula \\\midrule
Tarantula \cite{Jones2001,Jones2002} & %
\( \frac{ \frac{ \cef{} }{ \cef{} + \cnf{} } }{ \frac{ \cef{} }{ \cef{} + \cnf{} } + \frac{ \cep{} }{ \cep{} + \cnp{} } } \)
\\
Jaccard \cite{Jaccard1912,Chen2002} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs } \)
\\
Ochiai \cite{Ochiai1957,Abreu2006} & %
\( \frac{ \Ncf }{ \sqrt{ ( \cef + \cnf ) \times ( \cef + \cep ) } } \)
\\
Ochiai2 \cite{Ochiai1957, Naish2011} & %
\( \frac{ \Aef \times \Anp }{ \sqrt{ ( \Aef + \Aep ) \times ( \Anf + \Anp ) \times ( \Aef + \Anf) \times ( \Aep + \Anp ) } } \)
\\
Zoltar \cite{Gonzalez2007} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs + \frac { 10000 \times \Nuf \times \Ncs }{ \Ncf } } \)
\\
D$^\star$ \cite{Wong2014} (we use $\star = 2$) & %
\( \frac{ (\cef)^\star }{ \cnf + \cep } \)
\\
O$^p$ \cite{Naish2011} & %
\( \Aef - \frac{ \Aep }{ \Aep + \Anp + 1} \)
\\
Wong3 \cite{Wong2007,Wong2010} &
\( \Aef - h, \text{where~} h = \left\{
\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
\Aep & \text{if~} \Aep \leq 2 \\
2 + 0.1(\Aep - 2) & \text{if~} 2 < \Aep \leq 10 \\
2.8 + 0.001(\Aep - 10) & \text{if~} \Aep > 10 \\
\end{array}\)}
\right. \)
\\
Kulczynski2 \cite{Kulczynski1927,Naish2011} & %
\( \frac{ 1 }{ 2 } \times ( \frac{ \Aef }{ \Aef + \Anf } + \frac{ \Aef }{ \Aef + \Aep } ) \)
\\
Failed only & %
\( \left\{\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
1 & \text{if~} \Ncs = 0 \\
0 & \text{otherwise~} \\
\end{array}\)}
\right. \)
\\
\bottomrule
\end{tabular}} &
\begin{tabular}{lp{2.99cm}}
\toprule
\multicolumn{2}{l}{notation used} \\\midrule
\Ncf & number of \emph{failing} logs \\ & that \emph{include} the event \\
\Nuf & number of \emph{failing} logs \\ & that \emph{exclude} the event \\
\Ncs & number of \emph{passing} logs \\ & that \emph{include} the event \\
\Nus & number of \emph{passing} logs \\ & that \emph{exclude} the event \\
\bottomrule
\end{tabular}
\end{tabular}\vspace*{1ex}
\caption{\label{table:measures}The 10 interestingness measures under consideration in this paper.}
\vspace*{-3ex}
\end{table*}
\head{Analyzing a target log file} Using our database of event scores,
we first identify the events occurring in the target log file and the
interestingness scores associated with these events. Then, we group
similarly scored events together using a clustering algorithm. Finally,
we present the best performing cluster of events to the end user. The
clustering step helps us make a meaningful selection of events rather
than setting an often arbitrary window selection size. Among other
things, it prevents two identically scored events from falling at
opposite sides of the selection threshold. If the user suspects that
the best performing cluster did not report all relevant events, she can
inspect additional event clusters in order of decreasing
aggregate interestingness score. To perform the clustering step we use Hierarchical Agglomerative
Clustering (HAC) with Complete linkage~\cite{manning2008introduction}, where
sub-clusters are merged until the maximal distance between members of
each candidate cluster exceeds some specified threshold. In SBLD,
this threshold is the uncorrected sample standard deviation of the event
scores for the events being clustered.\footnote{~Specifically,
we use the \texttt{numpy.std} procedure from the SciPy framework~\cite{2020SciPy-NMeth},
in which the uncorrected sample standard deviation is given by
$ \sqrt{\frac{1}{N} \sum_{i=1}^{N}\lvert x_{i} - \bar{x} \rvert^2} $ where
$\bar{x}$ is the sample mean of the interestingness scores obtained for the
events in the log being analyzed and $N$ is the number of events in the log.}
This ensures that the ``interestingness-distance'' between two events
in a cluster never exceeds the uncorrected sample standard deviation observed in the set.
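A compact sketch of this clustering step, using routines from the SciPy
framework, is shown below; ranking clusters by their mean score is our
simplification of the aggregate interestingness score.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def best_cluster(events, scores):
    """Cluster similarly scored events; return the top-ranked cluster."""
    scores = np.asarray(scores, dtype=float)
    if len(scores) < 2:
        return list(events)
    threshold = np.std(scores)          # uncorrected sample std (ddof=0)
    if threshold == 0:                  # all events scored identically
        return list(events)
    Z = linkage(scores.reshape(-1, 1), method='complete')
    labels = fcluster(Z, t=threshold, criterion='distance')
    ranked = sorted(set(labels),
                    key=lambda c: scores[labels == c].mean(), reverse=True)
    return [e for e, c in zip(events, labels) if c == ranked[0]]
\end{verbatim}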
%
\section{Research Questions}
\label{sec:rqs}
The goal of this paper is to present SBLD and help practitioners make
an informed decision whether SBLD meets their needs. To this end, we have identified
three research questions that encompass several concerns practitioners
are likely to have and that are also of interest to the research community at
large:
\begin{enumerate}[\bfseries RQ1]
\item How well does SBLD reduce the effort needed to identify all
known-to-be relevant events (``does it work?'')?
\item How is the efficacy of SBLD impacted by increased evidence in the form of
additional failing and passing logs (``how much data do we need before
running the analysis?'')?
\item How does SBLD perform compared to a strategy based on searching for
common textual patterns with a tool like \texttt{grep} (``is it better than doing the obvious thing?'')?
\end{enumerate}
RQ1 looks at the aggregated performance of SBLD to assess its viability.
With RQ2 we assess how sensitive the performance is to the amount of
available data: How many logs should you have before you can expect the
analysis to yield good results? Is more data unequivocally a good thing?
What type of log is more informative: A passing log or a failing log?
Finally, we compare SBLD's performance to a more traditional method for
finding relevant segments in logs: Using a textual search for strings
one expects to occur near informative segments, like
"failure" and "error". The next section details the dataset used, our
chosen quality measures for assessment and our methodology for answering
each research question.
%
\section{Experimental Design}
\label{sec:expdesign}
\begin{table}
\centering
\caption{The key per-test attributes of our dataset. Two events are considered
distinct if they are treated as separate events after the abstraction
step. A "mixed" event is an event that occurs in logs of both failing and
passing runs.}
\vspace*{-1ex}
\label{table:descriptive}
\renewcommand{\tabcolsep}{0.11cm}\small
\begin{tabular}{rcrrrrrr}
\toprule
& & \# fail & \# pass & distinct & fail-only & mixed & pass-only \\
test & signature & logs & logs & events & events & events & events \\
\midrule
1 & C & 24 & 100 & 36391 & 21870 & 207 & 14314 \\
2 & E & 11 & 25 & 380 & 79 & 100 & 201 \\
3 & E & 11 & 25 & 679 & 174 & 43 & 462 \\
4 & E & 4 & 25 & 227 & 49 & 39 & 139 \\
5 & C & 2 & 100 & 33420 & 2034 & 82 & 31304 \\
6 & C & 19 & 100 & 49155 & 15684 & 893 & 32578 \\
7 & C & 21 & 100 & 37316 & 17881 & 154 & 19281 \\
8 & C & 4 & 100 & 26614 & 3976 & 67 & 22571 \\
9 & C & 21 & 100 & 36828 & 19240 & 228 & 17360 \\
10 & C & 22 & 100 & 110479 & 19134 & 1135 & 90210 \\
11 & E & 5 & 25 & 586 & 95 & 47 & 444 \\
12 & E & 7 & 25 & 532 & 66 & 18 & 448 \\
13 & C & 2 & 100 & 15351 & 2048 & 232 & 13071 \\
14 & C & 3 & 100 & 16318 & 2991 & 237 & 13090 \\
15 & C & 26 & 100 & 60362 & 20964 & 1395 & 38003 \\
16 & C & 12 & 100 & 2206 & 159 & 112 & 1935 \\
17 & E & 8 & 25 & 271 & 58 & 98 & 115 \\
18 & A & 23 & 75 & 3209 & 570 & 156 & 2483 \\
19 & C & 13 & 100 & 36268 & 13544 & 411 & 22313 \\
20 & B & 3 & 19 & 688 & 69 & 31 & 588 \\
21 & B & 22 & 25 & 540 & 187 & 94 & 259 \\
22 & E & 1 & 25 & 276 & 11 & 13 & 252 \\
23 & C & 13 & 100 & 28395 & 13629 & 114 & 14652 \\
24 & E & 7 & 26 & 655 & 117 & 56 & 482 \\
25 & C & 21 & 100 & 44693 & 18461 & 543 & 25689 \\
26 & C & 21 & 100 & 42259 & 19434 & 408 & 22417 \\
27 & C & 21 & 100 & 44229 & 18115 & 396 & 25718 \\
28 & C & 20 & 100 & 43862 & 16922 & 642 & 26298 \\
29 & C & 28 & 100 & 54003 & 24216 & 1226 & 28561 \\
30 & C & 31 & 100 & 53482 & 26997 & 1063 & 25422 \\
31 & C & 27 & 100 & 53092 & 23283 & 463 & 29346 \\
32 & C & 21 & 100 & 55195 & 19817 & 768 & 34610 \\
33 & E & 9 & 25 & 291 & 70 & 30 & 191 \\
34 & D & 2 & 13 & 697 & 76 & 92 & 529 \\
35 & E & 9 & 25 & 479 & 141 & 47 & 291 \\
36 & E & 10 & 75 & 1026 & 137 & 68 & 821 \\
37 & E & 7 & 25 & 7165 & 1804 & 94 & 5267 \\
38 & E & 4 & 25 & 647 & 67 & 49 & 531 \\
39 & G & 47 & 333 & 3350 & 428 & 144 & 2778 \\
40 & G & 26 & 333 & 3599 & 240 & 157 & 3202 \\
41 & G & 26 & 332 & 4918 & 239 & 145 & 4534 \\
42 & C & 17 & 100 & 30411 & 14844 & 348 & 15219 \\
43 & F & 267 & 477 & 10002 & 3204 & 1519 & 5279 \\
44 & C & 9 & 100 & 29906 & 8260 & 274 & 21372 \\
45 & E & 3 & 25 & 380 & 44 & 43 & 293 \\
\bottomrule
\end{tabular}
\vspace*{-2ex}
\end{table}
%
\begin{table}
\centering
\caption{Ground-truth signatures and their occurrences in distinct events.}
\label{table:signature}
\vspace*{-1ex}
\small
\begin{tabular}{ccrrrc}
\toprule
& sub- & fail-only & pass-only & fail \& & failure \\
signature & pattern & events & events & pass & strings* \\
\midrule
A & 1 & 1 & 0 & 0 & yes \\
A & 2 & 2 & 0 & 0 & no \\
B & 1 & 2 & 0 & 0 & yes \\
C & 1 & 21 & 0 & 0 & yes \\
C & 2 & 21 & 0 & 0 & yes \\
D & 1 & 4 & 0 & 0 & yes \\
\textbf{D$^{\#}$} & \textbf{2} & 69 & 267 & 115 & no \\
\textbf{D$^{\#}$} & \textbf{3} & 2 & 10 & 13 & no \\
\textbf{E$^{\#}$} & \textbf{1} & 24 & 239 & 171 & no \\
E & 1 & 1 & 0 & 0 & no \\
E & 2 & 9 & 0 & 0 & no \\
E & 3 & 9 & 0 & 0 & yes \\
E & 4 & 23 & 0 & 0 & yes \\
F & 1 & 19 & 0 & 0 & yes \\
F & 2 & 19 & 0 & 0 & no \\
F & 3 & 19 & 0 & 0 & yes \\
F & 4 & 14 & 0 & 0 & yes \\
G & 1 & 2 & 0 & 0 & yes \\
G & 2 & 1 & 0 & 0 & no \\
G & 3 & 1 & 0 & 0 & no \\
\bottomrule
\multicolumn{6}{l}{* signature contains the lexical patterns 'error', 'fault' or 'fail*'}\\
\multicolumn{6}{l}{$^{\#}$ sub-patterns that were removed to ensure a clean ground truth}
\end{tabular}
\vspace*{-3ex}
\end{table}
\subsection{Dataset and ground truth}
\label{sec:dataset}
Our dataset provided by \CiscoNorway{our industrial partner} consists
of failing and passing log files from 45 different end-to-end integration
tests. In addition to the log text we also have data on when a given
log file was produced. Most test-sets span a time-period of 38 days, while
the largest set (test 43 in Table~\ref{table:descriptive}) spans 112
days. Each failing log is known to exemplify symptoms of one of seven
known errors, and \CiscoNorway{our industrial partner} has given us a
set of regular expressions that help determine which events are relevant
for a given known error. We refer to the set of regular expressions
that identify a known error as a \emph{signature} for that error. These
signatures help us construct a ground truth for our investigation.
Moreover, an important motivation for developing SBLD is to help create
signatures for novel problems: The events highlighted by SBLD should be
characteristic of the observed failure, and the textual contents of the
events can be used in new signature expressions.
Descriptive facts about our dataset is listed in
Table~\ref{table:descriptive} while Table~\ref{table:signature}
summarizes key insights about the signatures used.
Ideally, our ground truth should highlight exactly and \emph{only} the
log events that an end user would find relevant for troubleshooting
an error. However, the signatures used in this investigation were
designed to find sufficient evidence that the \emph{entire log} in
question belongs to a certain error class: the log might contain other
events that a human user would find equally relevant for diagnosing
a problem, but the signature in question might not encompass these
events. Nevertheless, the events that constitute sufficient evidence
for assigning the log to a given error class are presumably relevant
and should be presented as soon as possible to the end user. However,
if our method cannot differentiate between these signature events and
other events we cannot say anything certain about the relevance of
those other events. This fact is reflected in our choice of quality
measures, specifically in how we assess the precision of the approach. This
is explained in detail in the next section.
When producing the ground truth, we first ensured that a log would only be
associated with a signature if the entire log taken as a whole satisfied all
the sub-patterns of that signature. If so, we then determined which events
the patterns were matching on. These events constitute the known-to-be relevant
set of events for a given log. However, we identified some problems with two of the provided
signatures that made them unsuitable for assessing SBLD. Signature \emph{E}
(see Table~\ref{table:signature}) had a sub-pattern that searched for a "starting test"-prefix that necessarily
matches on the first event in all logs due to the structure of the logs.
Similarly, signature \emph{D} contained two sub-patterns that necessarily
match all logs in the set--in this case by searching for whether the test
was run on a given machine, which was true for all logs for the corresponding
test. We therefore elected to remove these sub-patterns from the signatures
before conducting the analysis.
\subsection{Quality Measures}
As a measure of how well SBLD reports all known-to-be relevant log
events, we measure \emph{recall in best cluster}, which we for brevity refer to
as simply \emph{recall}.
This is an adaption of the classic recall measure used in information retrieval,
which tracks the proportion of all relevant events that were retrieved
by the system~\cite{manning2008introduction}.
As our method presents events to the user in a series of ranked clusters,
we ideally want all known-to-be relevant events to appear in the highest ranked cluster.
We therefore track the overall recall obtained as if the first cluster were the only events retrieved.
Note, however, that SBLD ranks all clusters, and a user can retrieve additional clusters if desired.
We explore whether this could improve SBLD's performance on a
specific problematic test-set in Section~\ref{sec:testfourtythree}.
It is trivial to obtain a perfect recall by simply retrieving all events
in the log, but such a method would obviously be of little help to a user
who wants to reduce the effort needed to diagnose failures.
We therefore also track the \emph{effort reduction} (ER), defined as
\[ \text{ER} = 1 - \frac{\text{number of events in first cluster}}{\text{number of events in log}} \]
Much like effective information retrieval systems aim for high recall and
precision, we want our method to score a perfect recall while obtaining the
highest effort reduction possible.
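In code, the two quality measures amount to the following (a sketch with
hypothetical names, where the first cluster and the ground-truth events are
given as collections of abstracted events):
\begin{verbatim}
def recall_in_best_cluster(first_cluster, relevant_events):
    """Fraction of known-to-be relevant events found in the first cluster."""
    retrieved = set(first_cluster) & set(relevant_events)
    return len(retrieved) / len(set(relevant_events))

def effort_reduction(first_cluster, all_events):
    """ER = 1 - |first cluster| / |log|."""
    return 1 - len(first_cluster) / len(all_events)
\end{verbatim}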
\subsection{Recording the impact of added data}
To study the impact of added data on SBLD's performance, we need to measure how
SBLD's performance on a target log $t$ is affected by adding an extra
failing log $f$ or a passing log $p$. There are several strategies
for accomplishing this. One way is to try all combinations in the
dataset i.e.\ compute the performance on any $t$ using any choice of
failing and passing logs to produce the interestingness scores. This
approach does not account for the fact that the logs in the data are
produced at different points in time and is also extremely expensive
computationally. We opted instead to order the logs chronologically and
simulate a step-wise increase in data as time progresses, as shown in
Algorithm~\ref{alg:time}.
\begin{algorithm}[b]
\caption{Pseudo-code illustrating how we simulate a step-wise increase in data
as time progresses and account for variability in choice of
interestingness measure.}
\label{alg:time}
\begin{algorithmic}\small
\STATE $F$ is the set of failing logs for a given test
\STATE $P$ is the set of passing logs for a given test
\STATE $M$ is the set of interestingness measures considered
\STATE sort $F$ chronologically
\STATE sort $P$ chronologically
\FOR{$i=0$ to $i=\lvert F \rvert$}
\FOR{$j=0$ to $j=\lvert P \rvert$}
\STATE $f = F[:i]$ \COMMENT{get the first $i$ elements of $F$}
\STATE $p = P[:j]$
\FORALL{$l$ in $f$}
\STATE initialize $er\_scores$ as an empty list
\STATE initialize $recall\_scores$ as an empty list
\FORALL{$m$ in $M$}
\STATE perform SBLD on $l$ using $m$ as measure \\ \hspace*{1.75cm} and $f$ and $p$ as spectrum data
\STATE append recorded effort reduction score to $er\_scores$
\STATE append recorded recall score to $recall\_scores$
\ENDFOR
\STATE record median of $er\_scores$
\STATE record median of $recall\_scores$
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Variability in interestingness measures}
\label{sec:imvars}
As mentioned in Section~\ref{sec:approach}, SBLD requires a
choice of interestingness measure for scoring the events,
which can have a considerable impact on SBLD's performance.
Considering that the best choice of interestingness measure is context-dependent
and that there is no global optimum,
it is up to the user to decide which interestingness measure best reflects their
notion of event relevance.
Consequently, we want to empirically study SBLD in a way
that captures the variability introduced by this decision.
To this end, we record the median score obtained by performing SBLD for every possible choice of
interestingness measure from those listed in Table~\ref{table:measures}.
Algorithm~\ref{alg:time} demonstrates the procedure in pseudo-code.
\subsection{Comparing alternatives}
\label{sec:comps}
To answer RQ2 and RQ3, we use pairwise comparisons of
different configurations of SBLD with a method that searches for regular expressions.
The alternatives are compared
on each individual failing log in the set in a paired fashion. An
important consequence of this is that the statistical comparisons have
no concept of which test the failing log belongs to, and thus the test
for which there is most data has the highest impact on the result of the
comparison.
The pairwise comparisons are conducted using paired Wilcoxon signed-rank
tests~\cite{wilcoxon1945} where the Pratt correction~\cite{Pratt1959}
is used to handle ties. We apply Holm's correction~\cite{Holm1979}
to the obtained p-values to account for the family-wise error
rate arising from multiple comparisons. We declare a comparison
\emph{statistically significant} if the Holm-adjusted p-value is below
$\alpha=0.05$. The Wilcoxon tests check the two-sided null hypothesis of
no difference between the alternatives. We report the Vargha-Delaney $A_{12}$ and
$A_{21}$~\cite{Vargha2000} measures of stochastic superiority to
indicate which alternative is the strongest. Conventionally, $A_{12}=0.56$ is
considered a small difference, $A_{12}=0.64$ a medium difference,
and $A_{12}=0.71$ or greater a large difference~\cite{Vargha2000}. Observe
also that $A_{21} = 1 - A_{12}$.
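For reference, the statistical machinery can be reproduced with standard
Python libraries roughly as follows; this is a sketch under the assumption
that the paired scores are available as equal-length arrays, and the helper
names are ours.
\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def a12(x, y):
    """Vargha-Delaney A_12: P(x > y), counting ties as one half."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

def compare(pairs, alpha=0.05):
    """pairs: list of (scores for variant 1, scores for variant 2)."""
    pvalues = []
    for x, y in pairs:
        _, p = wilcoxon(x, y, zero_method='pratt')  # Pratt handles ties
        pvalues.append(p)
    reject, p_holm, _, _ = multipletests(pvalues, alpha=alpha, method='holm')
    return p_holm, reject
\end{verbatim}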
\begin{figure*}
\includegraphics[width=0.8\textwidth]{rq1_boxplot.png}
%
\caption{The overall performance of SBLD in terms of effort reduction
and recall. On many tests, SBLD exhibited perfect recall for
all observations in the inter-quartile range and thus the box collapses to a single line on the $1.0$ mark.\label{fig:rq1boxplot}}
\end{figure*}
\subsection{Analysis procedures}
We implement the SBLD approach in a prototype tool
DAIM (Diagnosis and Analysis using Interestingness Measures),
and use DAIM to empirically evaluate the idea.
\head{RQ1 - overall performance} We investigate the overall performance
of SBLD by analyzing a boxplot for each test in our dataset. Every individual
datum that forms the basis of the plot is the median performance of SBLD over
all choices of interestingness measures for a given set of failing and passing
logs subject to the chronological ordering scheme outlined above.
\head{RQ2 - impact of data} We analyze the impact of added data by
producing and evaluating heatmaps that show the obtained performance
as a function of the number of failing logs (y-axis) and number of
passing logs (x-axis). The color intensity of each tile in the heatmaps
is calculated by taking the median of the scores obtained for each
failing log analyzed with the given number of failing and passing logs
as data for the spectrum inference, wherein the score for each log is
the median over all the interestingness measures considered as outlined in
Section~\ref{sec:imvars}.
Furthermore, we compare three variant configurations
of SBLD that give an overall impression of the influence of added
data. The three configurations considered are \emph{minimal evidence},
\emph{median evidence} and \emph{maximal evidence}, where minimal
evidence uses only events from the log being analyzed and one additional
passing log, median evidence uses the median number of failing and
passing logs available, while maximal evidence uses
all available data for a given test. The comparisons are conducted with the
statistical scheme described above in Section~\ref{sec:comps}.
\head{RQ3 - SBLD versus pattern-based search} To compare SBLD
against a pattern-based search, we record the effort reduction and
recall obtained when only selecting events in the log that match on the
case-insensitive regular expression \texttt{"error|fault|fail*"}, where
the $*$ denotes a wildcard-operator and the $\lvert$ denotes logical
$OR$. This simulates the results that a user would obtain by using
a tool like \texttt{grep} to search for words like 'error' and 'failure'.
Sometimes the ground-truth signature expressions contain words from this
pattern, and we indicate this in Table~\ref{table:signature}. If so, the
regular expression-based method is guaranteed to retrieve the event.
Similarly to RQ2, we compare the three configurations of SBLD described
above (minimal, median and maximal evidence) against the pattern-based
search using the statistical scheme described in Section~\ref{sec:comps}.
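The baseline itself is trivial to express; a sketch (where the prefix-style
wildcard in \texttt{fail*} reduces to a plain substring match) is:
\begin{verbatim}
import re

PATTERN = re.compile(r'error|fault|fail', re.IGNORECASE)

def pattern_based_search(events):
    """Return the events a grep-style search for the pattern would retrieve."""
    return [event for event in events if PATTERN.search(event)]
\end{verbatim}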
%
\section{Results and Discussion}
\label{sec:resdiscuss}
This section gradually dissects Figure~\ref{fig:rq1boxplot}, which shows a breakdown of SBLD's performance per test for both recall
and effort reduction; Figures \ref{fig:erheat} and \ref{fig:recallheat},
which show SBLD's performance as a function of the number of failing and passing
logs used; and Table~\ref{table:comparisons}, which shows the results
of the statistical comparisons we have performed.
\begin{figure*}
\includegraphics[width=\textwidth]{er_heatmap.pdf}
\caption{Effort reduction score obtained when SBLD is run on a given number of failing and passing logs. The tests not listed in this figure all obtained a lowest median effort reduction score of 90\% or greater and are thus not shown for space considerations. \label{fig:erheat}}
\vspace*{-2ex}
\end{figure*}
\begin{table*}
\caption{Statistical comparisons performed in this investigation. The
bold p-values are those for which no statistically significant difference under $\alpha=0.05$
could be established.}
\label{table:comparisons}
{\small%
\begin{tabular}{lllrrrr}
\toprule
variant 1 & variant 2 & quality measure & Wilcoxon statistic & $A_{12}$ & $A_{21}$ & Holm-adjusted p-value\\
\midrule
pattern-based search & minimal evidence & effort reduction & 29568.5 & 0.777 & 0.223 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & effort reduction & 202413.0 & 0.506 & 0.494 & \textbf{1.000} \\
pattern-based search & median evidence & effort reduction & 170870.5 & 0.496 & 0.504 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & effort reduction & 832.0 & 0.145 & 0.855 & $\ll$ 0.001 \\
minimal evidence & median evidence & effort reduction & 2666.0 & 0.125 & 0.875 & $\ll$ 0.001 \\
maximal evidence & median evidence & effort reduction & 164674.0 & 0.521 & 0.479 & \textbf{1.000} \\
pattern-based search & minimal evidence & recall & 57707.0 & 0.610 & 0.390 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & recall & 67296.0 & 0.599 & 0.401 & $\ll$ 0.001 \\
pattern-based search & median evidence & recall & 58663.5 & 0.609 & 0.391 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & recall & 867.5 & 0.481 & 0.519 & $\ll$ 0.001 \\
minimal evidence & median evidence & recall & 909.0 & 0.498 & 0.502 & 0.020 \\
maximal evidence & median evidence & recall & 0.0 & 0.518 & 0.482 & $\ll$ 0.001 \\
\bottomrule
\end{tabular}
%
}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{recall_heatmap.pdf}
\caption{Recall score obtained when SBLD is run on a given number of failing and passing logs. For space
considerations, we only show tests for which the minimum observed
median recall was smaller than 1 (SBLD attained perfect median recall for all configurations in the other tests). \label{fig:recallheat}}
\vspace*{-3ex}
\end{figure}
\subsection{RQ1: The overall performance of SBLD}
Figure~\ref{fig:rq1boxplot} suggests that SBLD's overall performance is strong,
since it obtains near-perfect recall while retaining a high degree of effort
reduction. In terms of recall, SBLD obtains a perfect performance on all except
four tests: 18, 34, 42 and 43, with the lower quartile stationed at perfect recall for all tests
except 43 (which we discuss in detail in Section~\ref{sec:testfourtythree}).
For test 18, only 75 out of 20700 observations ($0.36\%$) obtained a recall score
of $0.5$ while the rest obtained a perfect score. On test 34 (the smallest in our
dataset), 4 out of 39 observations obtained a score of zero recall while the
others obtained perfect recall.
For test 42, 700 out of 15300 ($4.6\%$) observations obtained a score of zero recall while the rest obtained perfect recall.
Hence with the exception of test 43 which is discussed later,
SBLD obtains very strong recall scores overall with only a few outliers.
The performance is also strong in terms of effort reduction, albeit
more varied. To a certain extent this is expected since the attainable
effort reduction on any log will vary with the length of the log and
the number of ground-truth relevant events in the log. As can be seen
in Figure~\ref{fig:rq1boxplot}, most of the observations fall well
over the 75\% mark, with the exceptions being tests 4 and 22. For test
4, Figure~\ref{fig:erheat} suggests that one or more of the latest
passing logs helped SBLD refine the interestingness scores. A similar
but less pronounced effect seems to have happened for test 22. However,
as reported in Table~\ref{table:descriptive}, test 22 consists only of
\emph{one} failing log. Manual inspection reveals that the log consists
of 30 events, of which 11 are fail-only events. Without additional
failing logs, most interestingness measures will give a high score to
all events that are unique to that singular failing log, which is likely
to include many events that are not ground-truth relevant. Reporting 11
out of 30 events to the user yields a meager effort reduction of around
63\%. Nevertheless, the general trend is that SBLD retrieves a compact
set of events to the user which yields a high effort reduction score.
In summary, the overall performance shows that SBLD
retrieves the majority of all known-to-be-relevant events
in compact clusters, which dramatically reduces the analysis burden for the
end user. The major exception is Test 43, which we return to in
Section~\ref{sec:testfourtythree}.
\subsection{RQ2: On the impact of evidence}
The heatmaps suggest that the effort reduction is generally not
adversely affected by adding more \emph{passing logs}. If the
assumptions underlying our interestingness measures are correct,
this is to be expected: Each additional passing log either gives us
reason to devalue certain events that co-occur in failing and passing
logs or contains passing-only events that are deemed uninteresting.
Most interestingness measures highly value events that
exclusively occur in failing logs, and additional passing logs help
reduce the number of events that satisfy this criterion. However, since
our method relies on clustering similarly scored events, it is
sensitive to \emph{ties} in interestingness scores. It is possible that
an additional passing log introduces ties where previously there were
none. This is likely to have an exaggerated effect in situations with
little data, where each additional log can have a dramatic impact on the
interestingness scores. This might explain the gradual dip in effort
reduction seen in Test 34, for which there are only two failing logs.
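To illustrate how a single extra passing log can introduce such ties, consider the
following sketch of a Tarantula-style score; the occurrence counts are invented for
illustration and do not correspond to any test in our dataset:
\begin{verbatim}
def tarantula(failed_with, total_failed, passed_with, total_passed):
    # 1.0 for events exclusive to failing logs, lower when the event also
    # occurs in passing logs
    fail_ratio = failed_with / total_failed
    pass_ratio = passed_with / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

# two failing and three passing logs: A occurs in no passing log, B in one
before = {"A": (2, 0), "B": (2, 1)}
print({e: tarantula(f, 2, p, 3) for e, (f, p) in before.items()})
# -> A: 1.0, B: 0.75 (no tie)

# one extra passing log that happens to contain A
after = {"A": (2, 1), "B": (2, 1)}
print({e: tarantula(f, 2, p, 4) for e, (f, p) in after.items()})
# -> both 0.8 (tied)
\end{verbatim}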
Adding more failing logs, on the other hand, draws a more nuanced
picture: When the number of failing logs (y-axis) is high relative
to the number of passing logs (x-axis), effort reduction seems to suffer.
Again, while most interestingness measures will prioritize events that
only occur in failing logs, this strategy only works if there is a
sufficient corpus of passing logs to weed out false positives. When
there are far fewer passing than failing logs, many events will be
unique to the failing logs even though they merely reflect a different
valid execution path that the test can take. This is especially true for
complex integration tests like the ones in our dataset, which might test
a system's ability to recover from an error, or in other ways have many
valid execution paths.
The statistical comparisons summarized in Table~\ref{table:comparisons}
suggest that the minimal evidence strategy performs poorly compared to the
median and maximal evidence strategies. This is especially
pronounced for effort reduction, where the Vargha-Delaney
metric scores well over 80\% in favor of the maximal and median
strategies. For recall, the difference between the minimal strategy and
the other variants is small, albeit statistically significant. Furthermore,
the jump from minimal evidence to median evidence is much more
pronounced than the jump from median evidence to maximal evidence.
For effort reduction, there is in fact no statistically discernible
difference between the median and maximal strategies. For recall, the maximal
strategy seems slightly better, but the $A_{12}$ measure suggests that the
magnitude of the difference is small.
Overall, SBLD seems to benefit from extra data, especially additional passing
logs. Failing logs also help, but SBLD only fully benefits from them when a
proportional number of passing logs is available.
The performance increase from going from minimal data to some data is more pronounced than going from some data to
maximal data. This suggests that there may be diminishing returns to
collecting extra logs, but our investigation cannot prove or disprove this.
\subsection{RQ3: SBLD versus simple pattern-search}
In terms of effort reduction, Table~\ref{table:comparisons} shows that
the pattern-based search clearly beats the minimal evidence variant of
SBLD. It does not, however, beat the median and maximal variants: The
comparison to median evidence suggests a statistically significant win
in favor of median evidence, but the effect reported by $A_{12}$ is
so small that it is unlikely to matter in practice. No statistically
significant difference could be established between the pattern-based
search and SBLD with maximal evidence.
In one sense, it is to be expected that the pattern-based search does
well on effort reduction assuming that events containing words like
``fault'' and ``error'' are rare. The fact that the pattern-based search
works so well could indicate that \CiscoNorway{our industrial partner}
has a well-designed logging infrastructure where such words are
rare and occur at relevant positions in the logs. On the other
hand, it is then notable that the median and maximum variants of SBLD perform
comparably on effort reduction without having any concept of the textual
content in the events.
In terms of recall, however, pattern-based search beats all variants of
SBLD in a statistically significant manner, where the effect size of the
differences is small to medium. One likely explanation for this better performance is that the
pattern-based search performs very well on Test 43, which SBLD generally
performs less well on. Since the comparisons are run per failing log and test
43 constitutes 29\% of the failing logs (specifically, 267 out of 910 logs), the
performance of test 43 has a massive impact. We return to test 43 and its
impact on our results in Section~\ref{sec:testfourtythree}.
On the whole, SBLD performs similarly to pattern-based search, obtaining
slightly poorer results on recall for reasons that are likely due
to a particular test we discuss below. At any rate, there is no
contradiction in combining SBLD with a traditional pattern-based search.
Analysts could start by issuing a set of pattern-based searches and
run SBLD afterward if the pattern search returned unhelpful results.
Indeed, an excellent and intended use of SBLD is to suggest candidate
signature patterns that, once proven reliable, can be incorporated in a
regular-expression based search to automatically identify known issues
in future runs.
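For reference, the pattern-based baseline can be approximated by a few lines of
Python; the regular expression below is illustrative only and not necessarily the
exact pattern set used in our comparison:
\begin{verbatim}
import re

# illustrative failure-related keywords
PATTERN = re.compile(r"\b(error|fault|fail(ed|ure)?)\b", re.IGNORECASE)

def pattern_search(events):
    """Return the events whose text matches the failure-related pattern."""
    return [e for e in events if PATTERN.search(e)]

log = [
    "link up on port 3",
    "ERROR: heartbeat lost for node 7",
    "retrying connection",
    "recovered from transient fault",
]
print(pattern_search(log))
\end{verbatim}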
\subsection{What happens in Test 43?}
\label{sec:testfourtythree}
SBLD's performance is much worse on Test 43 than the other tests, which
warrants a dedicated investigation. The first thing we observed in the
results for Test 43 is that all of the ground-truth-relevant events
occurred \emph{exclusively} in failing logs and were often singular
(11 out of the 33) or infrequent (30 out of 33 events occurred in 10\%
of the failing logs or fewer). Consequently, we observed a strong
performance from the \emph{Tarantula} and \emph{Failed only}-measures
that put a high premium on failure-exclusive events. Most of the
interestingness measures, on the other hand, will prefer an event that
is very frequent in the failing logs and sometimes occurs in passing logs
over a very rare event that only occurs in failing logs. This goes a
long way in explaining the poor performance on recall. The abundance of
singular events might also suggest that there is an error in the event
abstraction framework, where several events that should be treated as
instances of the same abstract event are treated as separate events. We
discuss this further in Section~\ref{sec:ttv}.
\begin{sloppypar}%
Another observation we made is that the failing logs contained only \emph{two}
ground-truth relevant events, which means that the recorded recall can quickly
fluctuate between $0$, $0.5$ and $1$.
\end{sloppypar}
Would the overall performance improve by retrieving an additional
cluster? A priori, retrieving an extra cluster would strictly improve
or not change recall since more events are retrieved without removing
the previously retrieved events. Furthermore, retrieving an additional
cluster necessarily decreases the effort reduction. We re-ran the
analysis on Test 43 and collected effort reduction and recall scores
for SBLD when retrieving \emph{two} clusters, and found that the added
cluster increased median recall from $0$ to $0.5$ while the median
effort reduction decreased from $0.97$ to $0.72$. While the proportional
increase in recall is larger than the decrease in effort reduction,
this should in our view not be seen as an improvement: As previously
mentioned, the failing logs in this set contain only two ground-truth
relevant events and thus recall is expected to fluctuate greatly.
Secondly, an effort reduction of $0.72$ implies that you still have to
manually inspect 28\% of the data, which in most information retrieval
contexts is unacceptable. An unfortunate aspect of our analysis in this
regard is that we do not account for event \emph{lengths}: An abstracted
event is treated as one atomic entity, but could in reality vary from a
single line to a stack trace that spans several pages. A better measure
of effort reduction should incorporate a notion of event length to
better reflect the real-world effect of retrieving more events.
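Such a length-aware measure could be sketched as follows, assuming each abstracted
event records how many raw log lines it covers; this is a possible refinement, not
the measure used in our evaluation:
\begin{verbatim}
def weighted_effort_reduction(retrieved, event_lengths):
    # event_lengths maps each abstracted event to the number of raw log
    # lines it covers; effort is measured in lines instead of events
    total = sum(event_lengths.values())
    shown = sum(event_lengths[e] for e in retrieved)
    return 1.0 - shown / total

# hypothetical log: one retrieved event is a 40-line stack trace
lengths = {"e1": 1, "e2": 40, "e3": 1, "e4": 2}
print(weighted_effort_reduction({"e2"}, lengths))  # ~0.09: little effort saved
print(weighted_effort_reduction({"e3"}, lengths))  # ~0.98
\end{verbatim}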
All in all, Test 43 exhibits a challenge that SBLD is not suited for:
It asks SBLD to prioritize rare events that are exclusive to failing
logs over events that frequently occur in failing logs but might
occasionally occur in passing logs. The majority of interestingness
measures supported by SBLD would prioritize the latter category of
events. In a way, this might suggest that SBLD is not suited for finding
\emph{outliers} and rare events: Rather, it is useful for finding
events that are \emph{characteristic} for failures that have occurred
several times: a ``recurring suspect'', if you will. An avenue for future
research is to explore ways of letting the user combine a search for
``recurring suspects'' with the search for outliers.
%
\section{Related Work}
\label{sec:relwork}
We distinguish two main lines of related work:
First, there is other work aimed at automated analysis of log files,
i.e., our problem domain,
and second, there is other work that shares similarities with our technical approach,
i.e., our solution domain.
\head{Automated log analysis}
Automated log analysis originates in \emph{system and network monitoring} for security and administration~\cite{lin1990:error,Oliner2007},
and saw a revival in recent years due to the needs of \emph{modern software development}, \emph{CE} and \emph{DevOps}~\cite{Hilton2017,Laukkanen2017,Debbiche2014,Olsson2012,Shahin2017,candido2019:contemporary}.
A considerable amount of research has focused on automated \emph{log parsing} or \emph{log abstraction},
which aims to reduce and organize log data by recognizing latent structures or templates in the events in a log~\cite{zhu2019:tools,el-masri2020:systematic}.
He et al. analyze the quality of these log parsers and conclude that many of them are not accurate or efficient enough for parsing the logs of modern software systems~\cite{he2018:automated}.
In contrast to these automated approaches,
our study uses a handcrafted log abstracter developed by \CiscoNorway{our industrial collaborator}.
\emph{Anomaly detection} has traditionally been used for intrusion detection and computer security~\cite{liao2013:intrusion,ramaki2016:survey,ramaki2018:systematic}.
Application-level anomaly detection has been investigated for troubleshooting~\cite{chen2004:failure,zhang2019:robust},
and to assess compliance with service-level agreements~\cite{banerjee2010:logbased,He2018,sauvanaud2018:anomaly}.
Gunter et al. present an infrastructure for troubleshooting of large distributed systems, %
by first (distributively) summarizing high volume event streams before submitting those summaries to a centralized anomaly detector.
This helps them achieve the fidelity needed for detailed troubleshooting,
without suffering from the overhead that such detailed instrumentation would bring~\cite{Gunter2007}.
Deeplog by Du et al. enables execution-path and performance anomaly detection in system logs by training a Long Short-Term Memory neural network of the system's expected behavior from the logs, and using that model to flag events and parameter values in the logs that deviate from the model's expectations~\cite{Du2017}.
Similarly, LogRobust by Zhang et al. performs anomaly detection using a bi-LSTM neural network but also detects events that are likely evolved versions of previously seen events, making the learned model more robust to updates in the target logging infrastructure~\cite{zhang2019:robust}.
In earlier work, we used \emph{log clustering} to reduce the effort needed to process a backlog of failing CE logs
by grouping those logs that failed for similar reasons~\cite{rosenberg2018:use,rosenberg:2018:improving}.
That work builds on earlier research that uses log clustering to identify problems in system logs~\cite{Lin2016,Shang2013}.
Common to these approaches is how the contrast between passing and failing logs is used to improve accuracy,
which is closely related to how SBLD highlights failure-relevant events.
Nagaraj et al.~\cite{nagaraj:2012} explore the use of dependency networks to exploit the contrast between two sets of logs,
one with good and one with bad performance,
to help developers understand which component(s) likely contain the root cause of performance issues.
An often-occurring challenge is the need to (re)construct an interpretable model of a system's execution.
To this end, several authors investigate the combination of log analysis with (static) source code analysis,
where they try to (partially) match events in logs to log statements in the code,
and then use these statements to reconstruct a path through the source code to help determine
what happened in a failed execution~\cite{Xu2009,yuan:2010:sherlog,zhao2014:lprof,schipper2019:tracing}.
Gadler et al. employ Hidden Markov Models to create a model of a system's usage patterns from logged events~\cite{gadler2017:mining}, while
Pettinato et al. model and analyze the behavior of a complex telescope system using Latent Dirichlet Allocation~\cite{pettinato2019:log}.
Other researchers have analyzed the logs for successful and failing builds,
to warn for anti-patterns and decay~\cite{vassallo2019:automated},
give build repair hints~\cite{Vassallo2018},
and automatically repair build scripts~\cite{hassan2018:hirebuild, tarlow2019:learning}.
Opposite to our work,
these techniques exploit the \emph{overlap} in build systems used by many projects to mine patterns that hint at decay or help repair a failing build,
whereas we exploit the \emph{contrast} with passing runs for the same project to highlight failure-relevant events.
\begin{sloppypar}
\head{Fault Localization}
As mentioned, our approach was inspired by Spectrum-Based Fault Localization (SBFL),
where the fault-proneness of a statement is computed as a function of
the number of times that the statement was executed in a failing test case, combined with
the number of times that the statement was skipped in a passing test case~\cite{Jones2002,Chen2002,Abreu2007,Abreu2009,Naish2011}.
This more or less directly translates to the inclusion or exclusion of events in failing, resp. passing logs,
where the difference is that SBLD adds clustering of the results to enable step-wise presentation of results to the user.
\end{sloppypar}
A recent survey of Software Fault Localization includes the SBFL literature up to 2014~\cite{Wong2016}.
De Souza et al. extend this with SBFL work up to 2017 and add an overview of seminal work on automated debugging from 1950 to 1977~\cite{deSouza2017}.
By reflecting on the information-theoretic foundations of fault localization, Perez proposes the DDU metric,
which can be used to evaluate test suites and predict their diagnostic performance when used in SBFL~\cite{Perez2018}.
One avenue for future work is exploring how a metric like this can be adapted to our context,
and to see whether it helps explain what happened with test 43.
A recent evaluation of \emph{pure} SBFL on large-scale software systems found that it under-performs in these situations
(only 33--40\% of the bugs are identified within the top 10 ranked results)~\cite{heiden2019:evaluation}.
The authors discuss several directions beyond pure SBFL, such as combining it with dynamic program analysis techniques,
including additional text analysis/IR techniques~\cite{Wang2015a}, mutation based fault localization,
and using SBFL in an interactive feedback-based process, such as whyline-debugging~\cite{ko2008:debugging}.
Pure SBFL is closely related to the Spectrum-Based Log Diagnosis proposed here,
so we may see similar challenges (in fact, test 43 may already show some of this).
Of the proposed directions to go beyond pure SBFL,
both the inclusion of additional text analysis/IR techniques,
and the application of Spectrum-Based Log Diagnosis in an interactive feedback-based process
are plausible avenues to extend our approach.
Closely related to the latter option,
de Souza et al.~\cite{deSouza2018b} assess guidance and filtering strategies to \emph{contextualize} the fault localization process.
Their results suggest that contextualization by guidance and filtering can improve the effectiveness of SBFL,
by classifying more actual bugs in the top ranked results.
%
\section{Threats to Validity}
\label{sec:ttv}
\head{Construct Validity} %
The signatures that provide our ground truth were devised to determine whether a given log \emph{in its entirety} showed symptoms of a known error.
As discussed in Section~\ref{sec:dataset}, we have used these signatures to detect events that give sufficient evidence for a symptom,
but there may be other events that could be useful to the user that are not part of our ground truth.
We also assume that the logs exhibit exactly the failures described by the signature expression.
In reality, the logs could contain symptoms of multiple failures beyond the ones described by the signature.
Furthermore, we currently do not distinguish between events that consist of a single line of text
and events that contain a multi-line stack trace, although these clearly represent different comprehension efforts.
This threat could be addressed by tracking the \emph{length} of the event contents,
and using it to further improve the accuracy of our effort reduction measure.
The choice of clustering algorithm and parameters affects the events retrieved,
but our investigation currently only considers HAC with complete linkage.
While we chose complete linkage to favor compact clusters,
outliers in the dataset could cause unfavorable clustering outcomes.
Furthermore, using the uncorrected sample standard deviation as threshold criterion
may be too lenient if the variance in the scores is high.
This threat could be addressed by investigating alternative clustering algorithms and parameter choices.
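For reference, the clustering configuration described above (HAC with complete linkage,
cut at the uncorrected sample standard deviation) can be expressed with off-the-shelf
tooling roughly as follows; this is a sketch using SciPy with illustrative scores, not
our actual implementation:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# interestingness scores for the events of one failing log (illustrative)
scores = np.array([0.97, 0.95, 0.94, 0.41, 0.39, 0.05, 0.04])

# HAC with complete linkage on the one-dimensional scores
Z = linkage(scores.reshape(-1, 1), method="complete")

# cut the dendrogram at the uncorrected sample standard deviation
labels = fcluster(Z, t=scores.std(), criterion="distance")

# retrieve the cluster whose members have the highest mean score first
best = max(set(labels), key=lambda c: scores[labels == c].mean())
print(np.where(labels == best)[0])  # indices of the retrieved events
\end{verbatim}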
Moreover, as for the majority of log analysis frameworks, the performance of SBLD strongly depends on the quality of log abstraction.
An error in the abstraction will directly propagate to SBLD:
For example, if abstraction fails to identify two concrete events as being instances of the same generic event,
their aggregated frequencies will be smaller and consequently treated as less interesting by SBLD.
Similarly, the accuracy will suffer if two events that represent distinct generic events are treated as instances of the same generic event.
Future work could investigate alternative log abstraction approaches.
\head{Internal Validity} %
While our heatmaps illustrate the interaction between additional data and SBLD performance,
they are not sufficient to prove a causal relationship between performance and added data.
Our statistical comparisons suggest that a strategy of maximizing data is generally preferable,
but they are not sufficient for discussing the respective contributions of failing and passing logs.
\head{External Validity} %
This investigation is concerned with a single dataset from one industrial partner.
Studies using additional datasets from other contexts are needed to assess the generalizability of SBLD to other domains.
Moreover, while SBLD is made to help users diagnose problems that are not already well understood,
we are assessing it on a dataset of \emph{known} problems.
It could be that these errors, being known, are of a kind that are generally easier to identify than most errors.
Studying SBLD in-situ over time and directly assessing whether end users found it helpful
in diagnosis would better indicate the generalizability of our approach.
%
\section{Concluding Remarks}
\label{sec:conclusion}
\head{Contributions}
This paper presents and evaluates Spectrum-Based Log Diagnosis (SBLD),
a method for automatically identifying segments of failing logs
that are likely to help users diagnose failures.
Our empirical investigation of SBLD addresses the following questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Results}
In response to (i),
we find that SBLD generally retrieves the failure-relevant events in a compact manner
that effectively reduces the effort needed to identify failure-relevant events.
In response to (ii),
we find that SBLD benefits from additional data, especially more logs from successful runs.
SBLD also benefits from additional logs from failing runs if there is a proportional amount of successful runs in the set.
We also find that the effect of added data is most pronounced when going from little data to \emph{some} data rather than from \emph{some} data to maximal data.
In response to (iii),
we find that SBLD achieves roughly the same effort reduction as traditional search-based methods but obtains slightly lower recall.
We trace the likely cause of this discrepancy in recall to a prominent part of our dataset, whose ground truth emphasizes rare events.
A lesson learned in this regard is that SBLD is not suited for finding statistical outliers but rather \emph{recurring suspects}
that characterize the observed failures.
Furthermore, the investigation highlights that traditional pattern-based search and SBLD can complement each other nicely:
Users can resort to SBLD if they are unhappy with what the pattern-based searches turn
up, and SBLD is an excellent method for finding characteristic textual patterns
that can form the basis of automated failure identification methods.
\head{Conclusions}
We conclude that SBLD shows promise as a method for diagnosing failing runs,
that its performance is positively affected by additional data,
but that it does not outperform textual search on the dataset considered.
\head{Future work}
We see the following directions for future work:
(a) investigate SBLD's performance on other datasets, to better assess generalizability,
(b) explore the impact of alternative log abstraction mechanisms,
(c) explore ways of combining SBLD with outlier detection, to accommodate different user needs,
(d) adapt Perez's DDU metric to our context and see if it can help predict diagnostic efficiency,
(e) experiment with extensions of \emph{pure SBLD} that include additional text analysis/IR techniques,
or apply it in an interactive feedback-based process,
and (f) rigorously assess (extensions of) SBLD in in-situ experiments.
\begin{acks}
We thank Marius Liaaen and Thomas Nornes of Cisco Systems Norway for help with obtaining and understanding the dataset, for developing the log abstraction
mechanisms and for extensive discussions.
This work is supported by the \grantsponsor{RCN}{Research Council of Norway}{https://www.rcn.no} through the
Certus SFI (\grantnum{RCN}{\#203461/030}).
The empirical evaluation was performed on resources provided by \textsc{uninett} Sigma2,
the national infrastructure for high-performance computing and data
storage in Norway.
\end{acks}
\printbibliography
\end{document}
|
\section{Introduction}
When granular material in a cubic container is shaken
horizontally one observes experimentally different types of
instabilities, i.e. spontaneous formation of ripples in shallow
beds~\cite{StrassburgerBetatSchererRehberg:1996},
liquefaction~\cite{RistowStrassburgerRehberg:1997,Ristow:1997}, convective
motion~\cite{TennakoonBehringer:1997,Jaeger} and recurrent swelling of
shaken material where the period of swelling decouples from the
forcing period~\cite{RosenkranzPoeschel:1996}. Other interesting experimental results concerning simultaneously vertically and horizontally vibrated granular systems~\cite{TennakoonBehringer:1998} and enhanced packing of spheres due to horizontal vibrations~\cite{PouliquenNicolasWeidman:1997} have been reported recently. Horizontally shaken
granular systems have been simulated numerically using cellular
automata~\cite{StrassburgerBetatSchererRehberg:1996} as well as
molecular dynamics
techniques~\cite{RistowStrassburgerRehberg:1997,Ristow:1997,IwashitaEtAl:1988,LiffmanMetcalfeCleary:1997,SaluenaEsipovPoeschel:1997,SPEpre99}.
Theoretical work on horizontal shaking can be found
in~\cite{SaluenaEsipovPoeschel:1997} and the dynamics of a single
particle in a horizontally shaken box has been discussed
in~\cite{DrosselPrellberg:1997}.
\begin{figure}[htbp]
\centerline{\psfig{file=sketch.eps,width=7cm,clip=}}
\caption{Sketch of the simulated system.}
\label{fig:sketch}
\end{figure}
Recently the effect of convection in a horizontally shaken box filled with
granular material attracted much attention and presently the effect is studied
experimentally by different
groups~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
Unlike the effect of convective motion in vertically shaken granular
material which has been studied intensively experimentally,
analytically and by means of computer simulations
(s.~e.g.~\cite{vertikalEX,JaegerVert,vertikalANA,vertikalMD}), there
exist only a few references on horizontal shaking. Different from the
vertical case, where the ``architecture'' of the convection pattern is
very simple~\cite{BizonEtAl:1998}, in horizontally shaken containers one observes a variety
of different patterns, convecting in different directions, in parallel
as well as perpendicular to the direction of
forcing~\cite{TennakoonBehringer:1997}. Under certain conditions one
observes several convection rolls on top of each other~\cite{Jaeger}.
An impression of the complicated convection can be found in the
internet~\cite{movies}.
Whereas the properties of convection in vertically sha\-ken systems
can be reproduced by two dimensional molecular dynamics simulations
with good reliability, for the case of horizontal motion the results
of simulations are inconsistent with the experimental results: in {\em
all} experimental investigations it was reported that the material
flows downwards close to the vertical
walls~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996,movies},
but reported numerical simulations systematically show surface rolls
in opposite direction accompanying the more realistic deeper rolls, or
even replacing them completely~\cite{LiffmanMetcalfeCleary:1997}.
Our investigation is thus concerned with the convection pattern, i.e. the
number and direction of the convection rolls in a two dimensional
molecular dynamics simulation. We will show that the choice of the
dissipative material parameters has crucial influence on the convection pattern
and, in particular, that the type of convection rolls observed experimentally
can be
reproduced by using sufficiently high dissipation constants.
\section{Numerical Model}
The system under consideration is sketched in Fig.~\ref{fig:sketch}:
we simulate a two-dimensional vertical cross section of a three-dimensional
container.
This rectangular section of width $L=100$ (all units in cgs system), and
infinite height, contains $N=1000$ spherical particles. The system is
periodically driven by an external oscillator $x(t) = A \sin (2\pi f
t)$ along a horizontal plane. For the effect we want to show, a
working frequency $f=10$ and an amplitude $A=4$ are
selected.
These values give an acceleration amplitude of approximately $16 g$.
Lower accelerations affect the intensity of the
convection but do not change the basic features of the convection
pattern which we want to discuss.
As has been shown in~\cite{SPEpre99},
past the fluidization point, a much better indicator of the convective
state is the dimensionless velocity $A 2\pi f/ \sqrt{Lg}$. This means
that in small containers the motion saturates earlier; hence, results for
different container lengths at the same acceleration amplitude
cannot be compared directly. Our acceleration amplitude of $\approx 16g$ corresponds to
$\approx 3g$ in a 10 cm container (provided that the frequency is the same
and particle sizes have been
scaled by the same amount).
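The driving parameters quoted above can be checked with a short Python sketch
(all values in cgs units, as in the simulation):
\begin{verbatim}
import numpy as np

A, f, L, g = 4.0, 10.0, 100.0, 981.0  # amplitude, frequency, box width, gravity

gamma  = A * (2 * np.pi * f) ** 2 / g        # dimensionless acceleration, ~16
v_star = A * 2 * np.pi * f / np.sqrt(L * g)  # dimensionless velocity, ~0.8

print(gamma, v_star)
\end{verbatim}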
The radii of the particles of density $2$ are homogeneously
distributed in the interval $[0.6, 1.4]$. The rough inner walls of the
container are simulated by attaching additional particles of the same
radii and material properties (this simulation technique is similar to ``real''
experiments, e.g.~\cite{JaegerVert}).
For the molecular dynamics simulations, we apply a modified
soft-particle model by Cundall and Strack~\cite{CundallStrack:1979}:
Two particles $i$ and $j$, with radii $R_i$ and $R_j$ and at positions
$\vec{r}_i$ and $\vec{r}_j$, interact if their compression $\xi_{ij}=
R_i+R_j-\left|\vec{r}_i -\vec{r}_j\right|$ is positive. In this case
the colliding spheres feel the force
$F_{ij}^{N} \vec{n}^N + F_{ij}^{S} \vec{n}^S$,
with $\vec{n}^N$ and $\vec{n}^S$ being the unit vectors in normal and shear
direction. The normal force acting between colliding spheres reads
\begin{equation}
F_{ij}^N = \frac{Y\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}}{1-\nu^2}
~\left(\frac{2}{3}\xi_{ij}^{3/2} + B \sqrt{\xi_{ij}}\,
\frac{d {\xi_{ij}}}{dt} \right)
\label{normal}
\end{equation}
where $Y$ is the Young modulus, $\nu$ is the Poisson ratio and $B$
is a material constant which characterizes the dissipative
character of the material~\cite{BSHP}.
\begin{equation}
R^{\,\mbox{\it\footnotesize\it
eff}}_{ij} = \left(R_i R_j\right)/\left(R_i + R_j\right)
\end{equation}
is the
effective radius. For a strict derivation of (\ref{normal})
see~\cite{BSHP,KuwabaraKono}.
For the shear force we apply the model by Haff and Werner~\cite{HaffWerner}
\begin{equation}
F_{ij}^S = \mbox{sign}\left({v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right)
\min \left\{\gamma_s m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
\left|{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right|~,~\mu
\left|F_{ij}^N\right| \right\}
\label{shear}
\end{equation}
with the effective mass $m_{ij}^{\,\mbox{\it\footnotesize\it eff}} =
\left(m_i m_j\right)/\left(m_i + m_j\right)$ and the relative velocity
at the point of contact
\begin{equation}
{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}} = \left(\dot{\vec{r}}_i -
\dot{\vec{r}}_j\right)\cdot \vec{n}^S + R_i {\Omega}_i + R_j {\Omega}_j ~.
\end{equation}
$\Omega_i$ and $\Omega_j$ are the angular velocities of the particles.
The resulting torques $M_i$ and $M_j$ acting upon the particles are
$M_i = F_{ij}^S R_i$ and $M_j = - F_{ij}^S R_j$. Eq.~(\ref{shear})
takes into account that the particles slide upon each other for the
case that the Coulomb condition $\mu \left| F_{ij}^N \right| < \left|
F_{ij}^S \right|$ holds, otherwise they feel some viscous friction.
By means of $\gamma _{n} \equiv BY/(1-\nu ^2)$ and $\gamma _{s}$,
normal and shear damping coefficients, energy loss during particle
contact is taken into account~\cite{restitution}.
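For illustration, the contact force model of Eqs.~(\ref{normal}) and (\ref{shear})
can be sketched in Python as follows; the example pair of particles, overlap and
velocities are illustrative only:
\begin{verbatim}
import numpy as np

Y_eff   = 1.0e8    # Y/(1-nu^2), cgs units
gamma_s = 1.0e3    # shear damping coefficient
gamma_n = 1.0e3    # normal damping coefficient
mu      = 0.5      # Coulomb friction coefficient
B       = gamma_n / Y_eff   # from gamma_n = B*Y/(1-nu^2)

def normal_force(xi, xi_dot, R_eff):
    """Eq. (normal): Hertzian elastic term plus viscoelastic damping."""
    if xi <= 0.0:
        return 0.0
    return Y_eff * np.sqrt(R_eff) * (2.0/3.0 * xi**1.5 + B * np.sqrt(xi) * xi_dot)

def shear_force(v_rel, F_n, m_eff):
    """Eq. (shear): viscous friction capped by the Coulomb condition."""
    return np.sign(v_rel) * min(gamma_s * m_eff * abs(v_rel), mu * abs(F_n))

# example: two particles of radius 1 cm and density 2 g/cm^3, slightly overlapping
R_i = R_j = 1.0
m   = 2.0 * (4.0 / 3.0) * np.pi * R_i**3
R_eff, m_eff = R_i * R_j / (R_i + R_j), m * m / (m + m)
Fn = normal_force(xi=1e-3, xi_dot=-5.0, R_eff=R_eff)
Fs = shear_force(v_rel=2.0, F_n=Fn, m_eff=m_eff)
print(Fn, Fs)
\end{verbatim}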
The equations of motion for translation and rotation have been solved
using a Gear predictor-corrector scheme of sixth order
(e.g.~\cite{AllenTildesley:1987}).
The values of the coefficients used in simulations are $Y/(1-\nu
^2)=1\times 10^{8}$, $\gamma _{s}=1\times 10^{3}$, $ \mu =0.5$. For
the effect we want to show, the coefficient $\gamma _{n}$ takes values within the range
$\left[10^2,10^4\right]$.
\section{Results}
The mechanisms for convection under horizontal shaking have been
discussed in \cite{LiffmanMetcalfeCleary:1997}. Now we can show that
these mechanisms can be better understood by taking into account the
particular role of dissipation in this problem. The most striking
consequence of varying the normal damping coefficient is the change
in organization of the convective pattern, i.e. the direction and
number of rolls in the stationary regime. This is shown in
Fig.~\ref{fig1}, which has been obtained after averaging particle
displacements over 200 cycles
(2 snapshots per cycle).
The asymmetry of compression and expansion of particles close to
the walls (where the material is highly compressible) explains
the large transverse velocities shown in the figure.
Note, however, that the upward and downward motion at the walls cannot be altered
by this particular averaging procedure.
The first frame shows a convection pattern with only two rolls, where
the arrows indicate that the grains slide down the walls, with at most
a slight expansion of the material at the surface.
There are no surface rolls.
This is very
similar to what has been observed in
experiments~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
In this case, dissipation is high enough to damp most of the sloshing
induced by the vertical walls, and not even the grains just below the
surface can overcome the pressure gradient directed downwards.
For lower damping, we see the development of surface rolls,
which
coexist with the inner rolls circulating in the opposite way. Some
energy is now available for upward motion when the walls compress the
material fluidized during the opening of the wall ``gap'' (empty space
which is created alternatively during the shaking motion). This is the
case reported in \cite{LiffmanMetcalfeCleary:1997}. The last frames
demonstrate how the original rolls vanish at the same time that the
surface rolls grow, occupying a significant part of the system.
Another feature shown in the figure is the thin layer of material,
about three particle rows close to the bottom, which performs a different kind
of motion. This effect, which can be seen in all frames,
is due to the presence of the constraining boundaries
but has not been analyzed separately.
\onecolumn
\begin{figure}
\centerline{\psfig{file=fric1nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric2nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric3nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric4nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric5nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric6nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric7nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric8nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric9nn.eps,width=5.7cm,clip=}}
\vspace{0.3cm}
\caption{Velocity field obtained after cycle averaging of
particle displacements, for different values of the normal damping
coefficient, $\gamma_n$. The first one is $1\times 10^4$, and for
obtaining each subsequent frame the coefficient has been divided by
two. The frames are ordered from left to right and from top to
bottom. The cell size for averaging is approximately one particle diameter.}
\label{fig1}
\vspace*{-0.2cm}
\end{figure}
\twocolumn
With decreasing normal damping $\gamma_n$ there are two transitions
observable in Fig.~\ref{fig1}, meaning that the convection pattern changes
qualitatively at these two particular values of $\gamma_n$:
The first transition leads to the appearance of two surface rolls
laying on top of the bulk cells and circulating in opposite direction.
The second transition eliminates the bulk rolls. A more detailed analysis of
the displacement fields (Fig.~\ref{fig2})
allows us to locate the transitions much more precisely.
In Fig.~\ref{fig2} we have represented in grey-scale the horizontal and
vertical components of the displacement vectors pictured in
Fig.~\ref{fig1}, but with a denser sampling, analyzing data from 30 simulations
corresponding to
values of the normal damping coefficient within the interval [50,10000].
For horizontal displacements, we have chosen vertical sections
at some representative position in horizontal direction
($x=30$). For the vertical displacements, vertical sections of the
leftmost part of the container were selected ($x=10$); see
Fig.~\ref{fig2}, lower part.
\begin{figure}
\centerline{\psfig{file=vx.eps,width=4.5cm,clip=}\hspace{-0.5cm}
\psfig{file=vy.eps,width=4.5cm,clip=}
\centerline{\psfig{file=sectionn.eps,height=4.2cm,bbllx=7pt,bblly=16pt,bburx=507pt,bbury=544pt,clip=}}
\vspace*{0.2cm}
\caption{Horizontal (left) and vertical (right) displacements at
selected positions of the frames in Fig.~\ref{fig1} (see the text
for details), for decreasing normal damping and as a function of
depth. White indicates strongest flow along positive axis directions
(up,right), and black the corresponding negative ones. The black region
at the bottom of the left picture corresponds to the complex boundary
effect observed in Fig.~\ref{fig1}, involving only two particle layers.
The
figure below shows a typical convection pattern together with the sections
at $x=10$ and $x=30$ at which the displacements were recorded.}
\label{fig2}
\vspace*{-0.1cm}
\end{figure}
The horizontal axis shows the values of the normal damping
coefficient scaled logarithmically in decreasing sequence. The
vertical axis represents the position in vertical direction, with the
free surface of the system located at $y \approx 60$. One observes first
that white surface shades, complemented by subsurface black ones,
appear quite clearly at about $\gamma_n \approx 2000$ in Fig.~\ref{fig2}
(left), indicating the appearance of surface rolls. On the other
hand, Fig.~\ref{fig2} (right) shows a black area (indicative of
downward flow along the vertical wall) that vanishes at
$\gamma_n \approx 200$ (at this point the grey shade represents vanishing vertical velocity).
The dashed lines in Fig.~\ref{fig2} lead the eye to identify the transition values.
In the interval $ 200 \lesssim \gamma_n
\lesssim 2000$ surface and inner rolls coexist, rotating in opposite
directions.
One can analyze the situation in terms of the restitution coefficient.
\ From Eq. (\ref{normal}), the equation of motion for the displacement
$\xi_{ij}$ can be integrated and the relative energy loss in a
collision $\eta=(E_0-E)/E_0$ (with $E$ and $E_0$ being the energy of
the relative motion of the particles) can be evaluated approximately.
Up to the lowest order in the expansion parameter, one
finds~\cite{Thomas-Thorsten}
\begin{equation}
\eta = 1.78 \left( \frac{\tau}{\ell} v_0\right)^{1/5}\;,
\label{energyloss}
\end{equation}
where $v_0$ is the relative initial velocity in normal direction, and
$\tau$, $\ell$, time and length scales associated with the problem
(see~\cite{Thomas-Thorsten} for details),
\begin{equation}
\tau = \frac{3}{2} B\; ,~~~~~~~~~
\ell = \left(\frac{1}{3} \frac{m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
}{\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}
B \gamma_{n}}\right)^{2}.
\end{equation}
For $\gamma_n = 10^4$ (the highest value analyzed) and the values of
the parameters specified above ($v_0 \approx A 2\pi f$ for collisions
with the incoming wall), $B= 10^{-4}$ and $\eta$ is typically
50\%. This means that after three more collisions the particle is left
with an energy insufficient to overcome the height of a single
particle in the gravity field. For $\gamma_n = 10^3$ and the other
parameters kept constant, $B=10^{-5}$ and $\eta$ is
reduced to 5\%, so that the number of collisions needed to reduce the
particle's kinetic energy to the same residual fraction increases
roughly by an order of magnitude. On the other
hand, given the weak dependence of Eq. (\ref{energyloss}) on the
velocity, one expects that the transitions shown in Fig.~\ref{fig2}
will also depend only weakly on the amplitude of the shaking velocity. The reduction of the
inelasticity $\eta$ by an order of magnitude seems enough for
particles to ``climb'' the walls and develop the characteristic
surface rolls observed in numerical simulations.
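The estimate above can be reproduced along the following lines; the effective mass and
radius of the colliding pair are assumed representative values rather than averages over
the size distribution, so the resulting $\eta$ should only be read as an
order-of-magnitude check:
\begin{verbatim}
import numpy as np

Y_eff = 1.0e8              # Y/(1-nu^2), cgs units
A, f  = 4.0, 10.0
v0    = A * 2 * np.pi * f  # typical impact velocity at the driven wall [cm/s]

# assumed representative pair: radii 1 cm, material density 2 g/cm^3
R_eff = 0.5
m_eff = 0.5 * 2.0 * (4.0 / 3.0) * np.pi

def eta(gamma_n):
    B   = gamma_n / Y_eff                          # gamma_n = B*Y/(1-nu^2)
    tau = 1.5 * B                                  # tau = (3/2) B
    ell = (m_eff / (3.0 * np.sqrt(R_eff) * B * gamma_n)) ** 2
    return 1.78 * (tau * v0 / ell) ** 0.2          # Eq. (energyloss)

for gn in (1.0e4, 1.0e3, 1.0e2):
    print(gn, eta(gn))   # eta decreases roughly linearly with gamma_n
\end{verbatim}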
\section{Discussion}
We have shown that the value of the normal damping coefficient
influences the convective pattern of horizontally shaken granular
materials. By means of molecular dynamics simulations in two
dimensions we can reproduce the pattern observed in real experiments,
which corresponds to a situation of comparatively high damping,
characterized by inelasticity parameters $\eta$ larger than 5\%. For
lower damping, the upper layers of the material develop additional
surface rolls as has been reported previously. As normal damping
decreases, the lower rolls descend and finally disappear completely at
inelasticities of the order of 1\%.
\begin{acknowledgement}
The authors want to thank R. P. Behringer, H. M. Jaeger, M. Medved,
and D. Rosenkranz for providing experimental results prior to
publication and V. Buchholtz, S. E. Esipov, and L. Schimansky-Geier
for discussion. The calculations have been done on the parallel
machine {\it KATJA} (http://summa.physik.hu-berlin.de/KATJA/) of the
medical department {\em Charit\'e} of the Humboldt University Berlin.
The work was supported by Deut\-sche Forschungsgemeinschaft through
grant Po 472/3-2.
\end{acknowledgement}
|
\section{\label{sec:intro}Introduction}
Demonstration of non-abelian exchange statistics is one of the most active areas of condensed matter research, and yet experimental realization of braiding of Majorana modes remains elusive~\cite{RevModPhys.80.1083,zhang2019next}. Most efforts so far have focused on superconductor/semiconductor nanowire hybrids, where Majorana bound states (MBS) are expected to form at the ends of a wire or at boundaries between topologically trivial and non-trivial regions~\cite{rokhinson2012fractional, deng2012anomalous, mourik2012signatures, LutchynReview}. Recently, it became clear that abrupt interfaces may also host topologically trivial Andreev states with experimental signatures similar to MBS \cite{pan2020generic,Yu2021}, which makes demonstrating braiding in nanowire-based platforms challenging. Phase-controlled long Josephson junctions (JJs) open a much wider phase space in which to realize MBS, with the promise of solving some problems of the nanowire platform, such as enabling zero-field operation to avoid detrimental flux focusing for in-plane fields \cite{pientka2017topological, ren2019topological}. However, MBS in long JJs suffer from the same problems as in the original Fu--Kane proposal for topological insulator/superconductor JJs, such as poor control of flux motion along the junction and the presence of sharp interfaces in the vicinity of MBS-carrying vortices, which may host Andreev states and trap quasiparticles. For instance, MBS spectroscopy in both HgTe- and InAs-based JJs shows a soft gap \cite{fornieri2019evidence}, despite a hard SC gap in the underlying InAs/Al heterostructure.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.95\textwidth}
\includegraphics[width=1\textwidth]{Schematic.pdf}
\caption{\label{fig:schematic}}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=1\textwidth]{stack_2.pdf}
\caption{\label{fig:layers}}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=1\textwidth]{Flow_2.pdf}
\caption{\label{fig:flow}}
\end{subfigure}
\caption{\label{fig:one} (a) Schematic of the Majorana braiding platform. The magnetic multilayer (MML) is patterned into a track and is separated from the TSC by a thin insulating layer. Green lines represent on-chip microwave resonators for a dispersive parity readout setup. The left inset shows a magnified view of a SVP and the right inset shows the role of each layer. (b) Expanded view of the composition of the MML. (c) Process flow diagram for our Majorana braiding scheme. Here, $T_c$ is the superconducting transition temperature and $T_{BKT}$ is the Berezinskii--Kosterlitz--Thouless transition temperature of the TSC.}
\end{figure*}
In the search for alternate platforms to realize Majorana braiding, spectroscopic signatures of MBS have been recently reported in STM studies of vortex cores in iron-based topological superconductors (TSC) \cite{wang2018evidence}. Notably, a hard gap surrounding the zero-bias peak at a relatively high temperature of $0.55$ K, and a $5$ K separation gap from trivial Caroli-de Gennes-Matricon (CdGM) states were observed \cite{chen2020observation, chen2018discrete}. Moreover, vortices in a TSC can be field-coupled to a skyrmion in an electrically-separated magnetic multilayer (MML) \cite{volkov,petrovic2021skyrmion}, which can be used to manipulate the vortex. This allows for physical separation of the manipulation layer from the layer wherein MBS reside, eliminating the problem of abrupt interfaces faced by nanowire hybrids and JJs. Finally, recent advances in the field of spintronics provide a flexible toolbox to design MML in which skyrmions of various sizes can be stabilized in zero external magnetic field and at low temperatures \cite{petrovic2021skyrmion, buttner2018theory, dupe2016engineering}. Under the right conditions, stray fields from these skyrmions alone can nucleate vortices in the adjacent superconducting layer. In this paper, we propose TSC--MML heterostructures hosting skyrmion-vortex pairs (SVP) as a viable platform to realize Majorana braiding. By patterning the MML into a track and by driving skyrmions in the MML with local spin-orbit torques (SOT), we show that the SVPs can be effectively moved along the track, thereby facilitating braiding of MBS bound to vortices.
The notion of coupling skyrmions (Sk) and superconducting vortices (Vx) through magnetic fields has been studied before \cite{volkov, baumard2019generation, zhou_fusion_2022, PhysRevLett.117.077002, PhysRevB.105.224509, PhysRevB.100.064504, PhysRevB.93.224505, PhysRevB.99.134505, PhysRevApplied.12.034048}. Menezes et al. \cite{menezes2019manipulation} performed numerical simulations to study the motion of a skyrmion--vortex pair when the vortex is dragged via supercurrents and Hals et al. \cite{hals2016composite} proposed an analytical model for the motion of such a pair where a skyrmion and a vortex are coupled via exchange fields. However, the dynamics of a SVP in the context of Majorana braiding remains largely unexplored. Furthermore, no \textit{in-situ} non-demolition experimental technique has been proposed to measure MBS in these TSC--MML heterostructures. In this paper, through micromagnetic simulations and analytical calculations within London and Thiele formalisms, we study the dynamics of a SVP subjected to external spin torques. We demonstrate that the SVP moves without dissociation up to speeds necessary to complete Majorana braiding within estimated quasiparticle poisoning time. We further eliminate the problem of \textit{in-situ} MBS measurements by proposing a novel on-chip microwave readout technique. By coupling the electric field of the microwave cavity to dipole-moments of transitions from Majorana modes to CdGM modes, we show that a topological non-demolition dispersive readout of the MBS parity can be realized. Moreover, we show that our platform can be used to make the first experimental observations of quasiparticle poisoning times in topological superconducting vortices.
The paper is organized as follows: in Section~\ref{sec:plat} we present a schematic and describe our platform. In Section~\ref{sec:initial} we present the conditions for initializing a skyrmion--vortex pair and discuss its equilibrium properties. In particular, we characterize the skyrmion--vortex binding strength. In Section~\ref{sec:braid} we discuss the dynamics of a SVP in the context of braiding. Then in Section~\ref{sec:read}, we present details of our microwave readout technique. Finally, we discuss the scope of our platform in Section~\ref{sec:summ}.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{energies.jpg}
\caption{\label{fig:energies}}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{forces.jpg}
\caption{\label{fig:forces}}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{fvav.jpg}
\caption{\label{fig:fvav}}
\end{subfigure}
\caption{\label{fig:onenew} (a -- b) Normalized energies and forces for Sk--Vx interaction between a Pearl vortex and a N\'eel skyrmion of varying thickness. (c) Attractive $F_{Vx-Avx}$ and repulsive $F_{Sk-Avx}$ (colored lines) for the example materials in Appendix~\ref{app:A}: $M_{0}=1450$ emu/cc, $r_{sk}=35$ nm, $d_s = 50$ nm, $\Lambda = 5$ $\mu$m and $\xi=15$ nm.}
\end{figure*}
\section{\label{sec:plat}Platform Description}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.59\textwidth}
\includegraphics[width=1\textwidth]{Braiding.jpg}
\caption{\label{fig:braiding}}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics[width=1\textwidth]{t0.jpg}
\caption{\label{fig:t0}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t1.jpg}
\caption{\label{fig:t1}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t2.jpg}
\caption{\label{fig:t2}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t3.jpg}
\caption{\label{fig:t3}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t4.jpg}
\caption{\label{fig:t4}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t55.jpg}
\caption{\label{fig:t5}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t6.jpg}
\caption{\label{fig:t6}}
\end{subfigure}
\caption{\label{fig:two} (a) Schematic of our braiding process: manipulations of four skyrmions in the MML track are shown. MBS at the centers of vortices bound to each of these skyrmions are labeled $\gamma_1$--$\gamma_4$. Ohmic contacts in HM layers of the MML are shown in brown and rf readout lines are shown in green. II--VI show the steps involved in braiding $\gamma_2$ and $\gamma_4$. In step II, $\gamma_1$ and $\gamma_2$ are brought close to rf lines by applying charge currents from C to A and D to B, respectively. $\gamma_1$ and $\gamma_2$ are then initialized by performing a dispersive readout of their parity (see Section~\ref{sec:read}). Similarly, $\gamma_3$ and $\gamma_4$ are initialized after applying charge currents along P to R and Q to S, respectively. In step III, $\gamma_2$ is moved aside to make room for $\gamma_4$ by applying currents from B to X followed by applying currents from X to C. In step IV, $\gamma_4$ is braided with $\gamma_2$ by applying currents along S to X and X to B. Finally, in step V, the braiding process is completed by bringing $\gamma_2$ to S by applying currents from A to X and from X to S. Parities (i.e., fusion outcomes) of $\gamma_1$ and $\gamma_4$, and $\gamma_3$ and $\gamma_2$ are then measured in step VI. Fusion outcomes in each pair of MBS indicate the presence or absence of a fermion corresponding to a parity of $\pm1$ \cite{PhysRevApplied.12.054035, PhysRevX.6.031016}. (b) Initial position of the skyrmions labeled A and B in the micromagnetic simulation for skyrmion braiding (see Appendix.~\ref{app:A}) (c--h) Positions of the two skyrmions at the given times as the braiding progresses. Charge current $j = 2\times 10^{12}$ A/m$^2$ was applied.}
\end{figure*}
Our setup consists of a thin TSC layer that hosts vortices grown on top of a MML that hosts skyrmions as shown in Fig.~\ref{fig:schematic}. A thin insulating layer separates the magnetic and superconducting layers ensuring electrical separation between the two. Vortices in a TSC are expected to host MBS at their cores \cite{wang2018evidence,chen2020observation, chen2018discrete}. Stray fields from a skyrmion in the MML nucleate such a vortex in the TSC, forming a bound skyrmion--vortex pair under favorable energy conditions (see Sec.~\ref{sec:initial}). This phenomenon has been recently experimentally demonstrated in Ref.~\cite{petrovic2021skyrmion}, where stray fields from N\'eel skyrmions in Ir/Fe/Co/Ni magnetic multilayers nucleated vortices in a bare Niobium superconducting film.
The MML consists of alternating magnetic and heavy metal (HM) layers, as shown in Fig.~\ref{fig:layers}. The size of a skyrmion in a MML is determined by a delicate balance between exchange, magnetostatic, anisotropy and Dzyaloshinskii–Moriya interaction (DMI) energies \cite{wang2018theory, romming2015field} -- and the balance is highly tunable, thanks to advances in spintronics \cite{buttner2018theory, dupe2016engineering, soumyanarayanan2017tunable}. Given a TSC, this tunability allows us to find a variety of magnetic materials and skyrmion sizes that can satisfy the vortex nucleation condition [to be detailed in Eq.~(\ref{eqn:nuc})]. In Appendix~\ref{app:A}, we provide a specific example of FeTeSe topological superconductor coupled with Ir/Fe/Co/Ni magnetic multilayers.
Due to large intrinsic spin-orbit coupling, a charge current through the heavy metal layers of a MML exerts spin-orbit torques (SOT) on the magnetic moments in the MML, which have been shown to drive skyrmions along magnetic tracks \cite{fert2013skyrmions, woo2017spin}. In our platform, to realize Majorana braiding we propose to pattern the MML into a track as shown in Fig.~\ref{fig:schematic} and use local spin-orbit torques to move skyrmions along each leg of the track. If skyrmions are braided on the MML track, and if skyrmion-vortex binding force is stronger than total pinning force on the SVPs, then the MBS hosting vortices in TSC will closely follow the motion of skyrmions, resulting in the braiding of MBS. We note here that there is an upper threshold speed with which a SVP can be moved as detailed in Sec.~\ref{sec:braid}. By using experimentally-relevant parameters for TSC and MML in Appendix~\ref{app:A}, we show that our Majorana braiding scheme can be realized with existing materials.
We propose a non-demolition microwave measurement technique for the readout of the quantum information encoded in a pair of vortex Majorana bound states (MBS). A similar method has been proposed for the parity readout in topological Josephson junctions~\cite{PhysRevB.92.245432,Vayrynen2015,Yavilberg2015,PhysRevB.99.235420,PRXQuantum.1.020313} and in Coulomb blockaded Majorana islands~\cite{PhysRevB.95.235305}. Dipole moments of transitions from MBS to CdGM levels couple dispersively to electric fields in a microwave cavity, producing a parity-dependent dispersive shift in the cavity resonator frequency. Thus by probing the change in the resonator's natural frequency, the state of the Majorana modes can be inferred. Virtual transitions from Majorana subspace to excited CdGM subspace induced due to coupling to the cavity electric field are truly parity conserving, making our readout scheme a so-called topological quantum non-demolition technique \cite{PRXQuantum.1.020313, PhysRevB.99.235420}. The readout scheme is explained in greater detail in Sec.~\ref{sec:read}.
As discussed above, in our platform we consider coupling between a thin superconducting layer and magnetic multilayers. We note that in thin superconducting films, vortices are characterized by the Pearl penetration depth, given by $\Lambda \ =\ \lambda ^{2} /d_{s}$, where $\lambda$ is the London penetration depth and $d_{s}$ is the thickness of the TSC film. Typically, these penetration depths $\Lambda$ are much larger than skyrmion radii $r_{sk}$ in MMLs of interest. Further, interfacial DMI in MML stabilizes a N\'eel skyrmion as opposed to a Bloch skyrmion. So hereon, we only study coupling between a N\'eel skyrmion and a Pearl vortex in the limit $\Lambda\gg r_{sk}$.
\section{\label{sec:initial}Initialization and SVP in Equilibrium}
Fig.~\ref{fig:flow} illustrates the process flow of our initialization scheme. Skyrmions can be generated individually in MML by locally modifying magnetic anisotropy through an artificially created defect center and applying a current through adjacent heavy metal layers \cite{zhang2020skyrmion}. Such defect centers have been experimentally observed to act as skyrmion creation sites \cite{buttner2017field}. When the TSC--MML heterostructure is cooled below the superconducting transition temperature (SC $T_{C}$), stray fields from a skyrmion in the MML will nucleate a vortex and an antivortex in the superconducting layer if the nucleation leads to a lowering in overall free energy of the system \cite{volkov}. An analytical expression has been obtained for the nucleation condition in Ref.~\cite{NeelInteraction} ignoring contributions of dipolar and Zeeman energies to total magnetic energy: a N\'eel skyrmion nucleates a vortex directly on top of it if
\begin{equation}
d_{m}\left[ \alpha _{K}\frac{Kr_{sk}^{2}}{2} -\alpha _{A} A-M_{0} \phi _{0}\right] \geq \frac{{\phi _{0}}^2}{8 \pi^2 \lambda} \ln\left(\frac{\Lambda }{\xi }\right).
\label{eqn:nuc}
\end{equation}
\noindent Here, $d_{m}$ is the effective thickness, $M_{0}$ is the saturation magnetization, $A$ is the exchange stiffness and $K$ is the perpendicular anisotropy constant of the MML; $\alpha_K$ and $\alpha_A$ are positive constants that depend on the skyrmion's spatial profile (see Appendix~\ref{app:A}), $r_{sk}$ is the radius of the skyrmion in the presence of a Pearl vortex \footnote{The radius of a skyrmion is not expected to change significantly in the presence of a vortex \cite{NeelInteraction}. We verified this claim with micromagnetic simulations: for the materials in Appendix~\ref{app:A}, when vortex fields are applied to a bare skyrmion, its radius increases by less than $10\%$. Therefore, for the numerical calculations in this paper, we use the bare skyrmion radius for $r_{sk}$.}, $\phi _{0}$ is the magnetic flux quantum, and $\Lambda$ ($\xi$) is the Pearl depth (coherence length) of the TSC. Although a complete solution of the nucleation condition must include the contributions of the dipolar and Zeeman energies to the total energy of the MML, such a calculation can only be done numerically, and Eq.~(\ref{eqn:nuc}) can still be used as an approximate estimate. For the choice of materials listed in Appendix~\ref{app:A}, the left side of the inequality exceeds the right side by $400\%$, strongly suggesting the nucleation of a vortex for every skyrmion in the MML. Furthermore, skyrmions in Ir/Fe/Co/Ni heterostructures have been experimentally shown to nucleate vortices in niobium superconducting films \cite{petrovic2021skyrmion}.
We proceed to characterize the strength of the skyrmion (Sk)--vortex (Vx) binding force, as it plays a crucial role in determining the feasibility of moving the skyrmion and the vortex as a single object. The spatial magnetization profile of a N\'eel skyrmion is given by $\boldsymbol{M}_{sk} =M_{0}[\zeta \sin\theta(r) \boldsymbol{\hat{r}}+ \cos\theta(r) \boldsymbol{\hat{z}}]$, where $\zeta=\pm 1$ is the chirality and $\theta(r)$ is the polar angle of the magnetization at distance $r$ from the skyrmion center. For $\Lambda\gg r_{sk}$, the interaction energy between a vortex and a skyrmion is given by \cite{NeelInteraction}:
\begin{equation}
E_{Sk-Vx} =\frac{M_{0} \phi _{0} r_{sk}^{2}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q^2}(e^{-q\tilde{d}}-1) J_{0}(qR) m_{z,\theta}(q) \,dq,
\label{eqn:energy}
\end{equation}
\noindent where $\tilde{d} = d_m / r_{sk}$, $J_{n}$ is the $n$th-order Bessel function of the first kind, and $R=r/r_{sk}$ is the normalized horizontal displacement $r$ between the centers of the skyrmion and the vortex. $m_{z,\theta}(q)$ encodes the skyrmion's spatial profile and is given by \cite{NeelInteraction} $m_{z,\theta}(q) = \int_{0}^{\infty} x [\zeta q + \theta^\prime ( x )] J_{1}( qx) \sin\theta(x) \,dx$, where $\theta ( x )$ is determined by the skyrmion ansatz.
We now derive an expression for the skyrmion--vortex restoring force by differentiating Eq.~(\ref{eqn:energy}) with respect to $r$:
\begin{equation}
F_{Sk-Vx} =-\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(qR) m_{z,\theta}(q) \,dq.
\label{eqn:force}
\end{equation}
For small horizontal displacements $r\ll r_{sk}$ between the centers of the skyrmion and the vortex, we can approximate the Sk--Vx energy as:
\begin{equation}
E_{Sk-Vx} =\frac{1}{2} kr^{2},
\label{eqn:springconstant}
\end{equation}
\noindent with an effective spring constant
\begin{equation}
k =-\frac{M_{0} \phi _{0}}{4\Lambda }\int_{0}^{\infty} (1- e^{-q\tilde{d}}) m_{z,\theta}(q) \,dq.
\label{eqn:spring}
\end{equation}
Figs.~\ref{fig:energies}--\ref{fig:forces} show the binding energy and the restoring force between a vortex and skyrmions of varying thickness for the materials listed in Appendix~\ref{app:A}. Here we used the domain-wall ansatz for the skyrmion, $\theta(x) = 2\tan^{-1}[\frac{\sinh(r_{sk}/\delta)}{\sinh(r_{sk}x/\delta)}]$, where $r_{sk}/\delta$ is the ratio of the skyrmion radius to its domain-wall width and $x$ is the distance from the center of the skyrmion normalized by $r_{sk}$. As seen in Fig.~\ref{fig:forces}, the restoring force between a skyrmion and a vortex increases with increasing separation between their centers until it reaches a maximum value, $F_{max}$, and then decreases with further increase in separation. We note that $F_{max}$ occurs when the Sk--Vx separation equals the skyrmion radius, i.e.\ when $R=1$ in Eq.~(\ref{eqn:force}):
\begin{equation}
F_{max} = -\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(q) m_{z,\theta}(q) \,dq.
\label{eqn:fmax}
\end{equation}
\noindent As the size of the skyrmion increases, the maximum binding force $F_{max}$ of the SVP increases. For a given skyrmion size, increasing the skyrmion thickness increases the attractive force until the thickness reaches the size of the skyrmion. Further increase in MML thickness does not lead to an appreciable increase in stray fields outside the MML layer and, as a result, the Sk--Vx force saturates.
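For illustration, the radial profile of the restoring force can be evaluated numerically as in the following sketch (Python), which computes only the dimensionless integral of Eq.~(\ref{eqn:force}) for the domain-wall ansatz; the chirality, aspect ratio $r_{sk}/\delta$ and reduced thickness $\tilde{d}$ are placeholder values rather than the Appendix~\ref{app:A} parameters, and the overall prefactor $M_{0}\phi_{0}r_{sk}/2\Lambda$ is omitted.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

zeta  = 1.0    # skyrmion chirality (assumed +1)
ratio = 2.0    # r_sk / delta, radius over domain-wall width (assumed)
d_til = 0.5    # d_m / r_sk, reduced thickness (assumed)

def theta(x):
    # Domain-wall ansatz for the skyrmion profile (x in units of r_sk).
    return 2.0 * np.arctan(np.sinh(ratio) / np.sinh(ratio * x))

def dtheta(x, h=1e-6):
    # Numerical derivative theta'(x).
    return (theta(x + h) - theta(x - h)) / (2.0 * h)

def m_z_theta(q):
    # m_{z,theta}(q) for the domain-wall ansatz.
    f = lambda x: x * (zeta * q + dtheta(x)) * j1(q * x) * np.sin(theta(x))
    val, _ = quad(f, 1e-3, 20.0, limit=200)
    return val

def force_shape(R):
    # Dimensionless radial profile of Eq. (force); the full force is
    # F_{Sk-Vx} = -[M0 phi0 r_sk / (2 Lambda)] * force_shape(R).
    g = lambda q: (1.0 - np.exp(-q * d_til)) / q * j1(q * R) * m_z_theta(q)
    val, _ = quad(g, 1e-4, 30.0, limit=200)
    return val

for R in (0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"R = {R:4.2f}   dimensionless force = {force_shape(R): .4f}")
# Per the discussion above, the magnitude is expected to peak near R = 1.
\end{verbatim}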
It is important to note that the stray fields from a skyrmion nucleate both a vortex and an antivortex (Avx) in the superconducting layer \cite{volkov, PhysRevLett.88.017001, milosevic_guided_2010, PhysRevLett.93.267006}. While the skyrmion attracts the vortex, it repels the antivortex. Eqs.~(\ref{eqn:energy}) and (\ref{eqn:force}) remain valid for the Sk--Avx interaction, but with opposite sign. The equilibrium position of the antivortex is the location where the repulsive skyrmion--antivortex force, $F_{Sk-Avx}$, is balanced by the attractive vortex--antivortex force, $F_{Vx-Avx}$~\cite{lemberger2013theory, ge2017controlled}. Fig.~\ref{fig:fvav} shows $F_{Vx-Avx}$ against $F_{Sk-Avx}$ for the platform in Appendix~\ref{app:A}. We see that for thicker magnets the antivortex sits far away from the vortex, where it can be pinned with artificially implanted pinning centers \cite{aichner2019ultradense, gonzalez2018vortex}. For thin magnetic films, where the antivortex is expected to be nucleated right outside the skyrmion radius, we can leverage the Berezinskii–Kosterlitz–Thouless (BKT) transition to negate $F_{Vx-Avx}$ for Vx--Avx distances $r<\Lambda$ \cite{PhysRevB.104.024509, schneider_excess_2014, goldman2013berezinskii, zhao2013evidence}. Namely, when a Pearl superconducting film is cooled to a temperature below $T_C$ but above $T_{BKT}$, vortices and antivortices dissociate to gain entropy, which minimizes the overall free energy of the system \cite{beasley1979possibility}. While the attractive force between a vortex and an antivortex is nullified, a skyrmion in the MML still attracts the vortex and pushes the antivortex towards the edge of the sample, where it can be pinned. Therefore, we assume that the antivortices are located far away and neglect their presence in our braiding and readout schemes.
\section{\label{sec:braid}Braiding}
Majorana braiding statistics can be probed by braiding a pair of MBS \cite{RevModPhys.80.1083}, which involves swapping the positions of the two vortices hosting the MBS. We propose to pattern the MML into interconnected Y-junctions, as shown in Fig.~\ref{fig:two}, to enable that swapping. Ohmic contacts in the HM layers across each leg of the Y-junctions allow charge currents to be applied independently along each leg of the track. These charge currents in turn apply spin-orbit torques on the adjacent magnetic layers and enable skyrmions to be moved independently along each leg of the track. As long as the skyrmion and the vortex move as a collective object, braiding of skyrmions in the MML leads to braiding of the MBS-hosting vortices in the superconducting layer. Below we study the dynamics of a SVP subjected to spin torques during braiding. We calculate all external forces acting on the SVP in the process and discuss the limits in which the skyrmion and the vortex move as a collective object.
For a charge current $\bm{J}$ in the HM layer, the dynamics in the magnetic layer is given by the modified Landau–Lifshitz–Gilbert (LLG) equation \cite{hayashi2014quantitative, slonczewski1996current}:
\begin{equation}
\partial _{t}\bm{m} =-\gamma (\bm{m} \times {{\bm H}_{eff}} +\eta J\ \bm{m} \times \bm{m} \times \bm{p}) +\alpha \bm{m} \times \partial _{t}\bm{m}
\label{eqn:llg}
\end{equation}
\noindent where we have included the damping-like term from the SOT and neglected the field-like term, as it does not induce motion of N\'eel skyrmions for our geometry \cite{jiang_blowing_2015}. Here, $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping parameter, and ${{\bm H}_{eff}}$ is the effective field from the dipole, exchange, anisotropy and DMI interactions. $\bm{p}=\mathrm{sgn}(\Theta _{SH})\,\bm{\hat{J}} \times \hat{\bm{n}}$ is the polarization direction of the spin current, where $\Theta _{SH}$ is the spin Hall angle, $\bm{\hat{J}}$ is the direction of the charge current in the HM layer and $\hat{\bm{n}}$ is the unit vector normal to the MML. $\eta=\hbar \Theta _{SH}/(2eM_{0} d_{m})$ quantifies the strength of the torque, $\hbar$ is the reduced Planck constant and $e$ is the electron charge.
Assuming skyrmion and vortex move as a collective object, semiclassical equations of motion for the centers of mass of the skyrmion and the vortex can be written using collective coordinate approach as done in Ref.~\cite{hals2016composite}:
\begin{eqnarray}
m_{sk}\ddot{\bm{R}}_{sk}= {\bf{F}}_{SOT} - \frac{\partial U_{sk,\ pin}}{\partial \bm{R}_{sk}} - & {\bm{G}}_{sk}\times \dot{\bm{R}}_{sk} - 4\pi s \alpha \dot{\bm{R}}_{sk} \nonumber \\
&- k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
\label{eqn:skmotion}
\end{eqnarray}
and
\begin{eqnarray}
m_{vx}\ddot{\bm{R}}_{vx} = - \frac{\partial U_{vx,\ pin}}{\partial \bm{R}_{vx}} - &{\bm{G}}_{vx}\times \dot{\bm{R}}_{vx} - {\alpha}_{vx} \dot{\bm{R}}_{vx} \nonumber \\
& + k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
\label{eqn:vxmotion}
\end{eqnarray}
\noindent where ${\bm{R}}_{sk}$ (${\bm{R}}_{vx}$), $m_{sk}$ ($m_{vx}$) and $q_{sk}$ ($q_{vx}$) are the position, mass and chirality of the skyrmion (vortex), and $k$ is the effective spring constant of the Sk--Vx system, given in Eq.~(\ref{eqn:spring}). ${\bm{F}}_{SOT}=\pi ^{2} \gamma \eta r_{sk} s\bm{{J}} \times \hat{\bm{n}}$ is the force on a skyrmion due to spin torques in the Thiele formalism, where $s=M_0 d_m/\gamma$ is the spin density \cite{upadhyaya2015electric, thiele1970theory}. The third term on the right side of Eq.~(\ref{eqn:skmotion}) gives the Magnus force on the skyrmion, with ${\bm{G}}_{sk} = 4\pi s q_{sk}\hat{\bm{z}}$, and the fourth term characterizes the dissipative force due to Gilbert damping. Similarly, the second term on the right side of Eq.~(\ref{eqn:vxmotion}) gives the Magnus force on the vortex, with ${\bm{G}}_{vx} = 2\pi s n_{vx} q_{vx} \hat{\bm{z}}$, where $n_{vx}$ is the superfluid density of the TSC, and the third term characterizes the viscous force with friction coefficient ${\alpha}_{vx}$. $U_{sk,\ pin}$ ($U_{vx,\ pin}$) is the pinning potential landscape for the skyrmion (vortex). The last term in Eq.~(\ref{eqn:vxmotion}) represents the restoring force on the vortex due to its separation from the skyrmion and is valid when $|{\bm{R}}_{sk}-{\bm{R}}_{vx}| <r_{sk}$.
We consider steady-state solutions of the equations of motion assuming that the skyrmion and the vortex are bound. We discuss conditions for the dissociation of a SVP later. For a given external current $\bm{J}$, velocity $v$ of a SVP in steady state is obtained by setting $\ddot{\bm{R}}_{sk} = \ddot{\bm{R}}_{vx} = 0$ and $\dot{\bm{R}}_{sk} = \dot{\bm{R}}_{vx} = \dot{\bm{R}}$ in Eqs.~(\ref{eqn:skmotion}) and (\ref{eqn:vxmotion}):
\begin{equation}
v = |\dot{\bm{R}}| = \frac{\pi ^{2} \gamma \eta r_{sk} sJ}{\sqrt{(G_{sk}+G_{vx})^{2} +(4\pi s \alpha + \alpha_{vx})^{2}}}.
\label{eqn:vgivenj}
\end{equation}
\noindent In general, the SVP moves at an angle $\varphi$ relative to $\bm{{F}}_{SOT}$ due to Magnus forces on the skyrmion and the vortex, with:
\begin{eqnarray}
\tan \varphi = \frac{G_{sk}+G_{vx}}{4\pi s \alpha + \alpha_{vx}}.
\label{eqn:svpangle}
\end{eqnarray}
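As a concrete illustration of Eqs.~(\ref{eqn:vgivenj}) and (\ref{eqn:svpangle}), the following minimal sketch (Python) evaluates the steady-state SVP speed and deflection angle; every numerical input is a placeholder chosen for illustration, not an Appendix~\ref{app:A} value, and the vortex gyrocoupling is entered directly as an assumed number rather than computed from $n_{vx}$.
\begin{verbatim}
import numpy as np

hbar  = 1.054571817e-34   # reduced Planck constant [J s]
e     = 1.602176634e-19   # elementary charge [C]
gamma = 1.76e11           # gyromagnetic ratio [rad s^-1 T^-1] (assumed)
theta_SH = 0.1            # spin Hall angle of the HM layer (assumed)
M0    = 5e5               # saturation magnetization [A/m] (assumed)
d_m   = 10e-9             # effective MML thickness [m] (assumed)
r_sk  = 35e-9             # skyrmion radius [m] (assumed)
alpha = 0.1               # Gilbert damping (assumed)
alpha_vx = 1e-15          # vortex friction coefficient [kg/s] (assumed)
G_vx  = 1.8e-13           # vortex gyrocoupling [kg/s] (assumed)
J     = 1e10              # charge current density in the HM layer [A/m^2]

s    = M0 * d_m / gamma                      # spin density (Thiele)
eta  = hbar * theta_SH / (2 * e * M0 * d_m)  # SOT strength
G_sk = 4 * np.pi * s                         # skyrmion gyrocoupling (|q_sk|=1)

F_SOT = np.pi**2 * gamma * eta * r_sk * s * J      # SOT force on the skyrmion
drag  = 4 * np.pi * s * alpha + alpha_vx           # total drag coefficient
v     = F_SOT / np.hypot(G_sk + G_vx, drag)        # Eq. (vgivenj)
phi   = np.degrees(np.arctan2(G_sk + G_vx, drag))  # Eq. (svpangle)

print(f"SVP speed        v   = {v:.2f} m/s")
print(f"deflection angle phi = {phi:.1f} deg")
\end{verbatim}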
Armed with the above equations, we extract some key parameters that determine the feasibility of our braiding scheme. First, if $\bm{{F}}_{SOT}$ from the external currents is unable to overcome the maximum pinning force on either the skyrmion ($F_{pin, sk}$) or the vortex ($F_{pin, vx}$), the SVP remains stationary. This gives a lower threshold $J^-$ on the external current, obtained by balancing $\bm{{F}}_{SOT}$ against the pinning forces:
\begin{equation}
J^{-} = \frac{\max(F_{pin, sk}, F_{pin, vx})}{\pi ^{2} \gamma \eta r_{sk} s}.
\label{eqn:jminus}
\end{equation}
Second, once the SVP is in motion, the drag and Magnus forces acting on the skyrmion and the vortex are proportional to their velocity. If the net external force on a moving vortex exceeds the maximum force with which a skyrmion can pull it ($F_{max}$), the skyrmion and the vortex dissociate and no longer move as a collective object. This sets an upper bound $v^+$ on the SVP speed, obtained by balancing $F_{max}$ against the combined Magnus and drag forces on the vortex; this maximum speed plays a key role in determining whether our braiding and readout scheme can be completed within the quasiparticle poisoning time:
\begin{equation}
v^{+} = \frac{F_{max}}{\sqrt{(\alpha_{vx})^2+(G_{vx})^2}}.
\label{eqn:vplus}
\end{equation}
An upper bound on the SVP speed implies an upper bound $J^+$ on the external current, which is obtained by substituting $v^+$ into Eq.~(\ref{eqn:vgivenj}):
\begin{equation}
J^{+} = \frac{v^{+} \sqrt{(G_{sk}+G_{vx})^{2} +(4\pi s \alpha + \alpha_{vx})^{2}}}{\pi ^{2} \gamma \eta r_{sk} s}.
\label{eqn:jplus}
\end{equation}
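The resulting operating window can be sketched in the same spirit; the pinning forces, maximum binding force, gyrocouplings and damping coefficients below are again placeholders, not the Appendix~\ref{app:A} values.
\begin{verbatim}
import numpy as np

F_pin_sk = 2e-14   # maximum skyrmion pinning force [N] (assumed)
F_pin_vx = 1e-14   # maximum vortex pinning force [N] (assumed)
F_max    = 2e-11   # maximum Sk-Vx binding force [N] (assumed)
G_sk, G_vx = 3.6e-13, 1.8e-13        # gyrocouplings [kg/s] (assumed)
drag_sk, alpha_vx = 3.6e-14, 1e-15   # damping coefficients [kg/s] (assumed)
f_sot_per_J = 1.1e-23   # F_SOT/J = pi^2 gamma eta r_sk s [N m^2/A] (assumed)

J_minus = max(F_pin_sk, F_pin_vx) / f_sot_per_J              # Eq. (jminus)
v_plus  = F_max / np.hypot(alpha_vx, G_vx)                   # Eq. (vplus)
J_plus  = (v_plus * np.hypot(G_sk + G_vx, drag_sk + alpha_vx)
           / f_sot_per_J)                                    # Eq. (jplus)

print(f"J^- = {J_minus:.1e} A/m^2,  J^+ = {J_plus:.1e} A/m^2")
print(f"v^+ = {v_plus:.0f} m/s")
\end{verbatim}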
Another key parameter is the distance of closest approach between two skyrmion--vortex pairs, $r_{min}$, which controls the overlap of the MBS wavefunctions centered at the vortex cores; it is obtained by balancing the attractive Sk--Vx force against the repulsive Vx--Vx force:
\begin{equation}
r_{min} = \frac{\phi_0^2}{4\pi^2 \Lambda} \frac{1}{F_{max}}.
\label{eqn:rmin}
\end{equation}
Finally, the power $P$ dissipated in the heavy metal layers due to Joule heating from the charge currents must remain within the cooling power of the dilution refrigerator:
\begin{equation}
P = n_{hm} L W t_{hm} \rho_{hm} J^2,
\label{eqn:power}
\end{equation}
\noindent where $n_{hm}$ is the number of heavy metal layers, $L$ ($W$) is the length (width) of the active segment of the MML track, $t_{hm}$ is the thickness of each heavy metal layer and $\rho_{hm}$ is the resistivity of a heavy metal layer.
By applying a current $J^- < J < J^+$ locally in a desired section of the MML track, each SVP can be individually addressed. For the materials listed in Appendix~\ref{app:A}, the maximum speed $v^+$ with which a SVP can be moved is over $1000$ m/s. At this top speed, SVPs can cover the braiding distance (the sum of the lengths of the track in steps I--VI of Fig.~\ref{fig:braiding}) of $50 r_{sk}$ in about $0.15$ ns, but the process generates substantial Joule heating. At a reduced speed of $0.25$ m/s, SVPs cover that distance in $7~\mu$s while generating $30~\mu$W of heat, which is within the cooling power of modern dilution refrigerators. SVPs can be braided at faster speeds if the dilution refrigerators can provide higher cooling power or if the resistivity of the heavy metal layers in the MML can be lowered. Although quasiparticle poisoning times in superconducting vortices have not been measured yet, estimates in similar systems range from hundreds of microseconds to seconds \cite{higginbotham2015parity, PhysRevLett.126.057702, PhysRevB.85.174533}. Our braiding time falls well within such estimates for quasiparticle poisoning times, indicating the viability of our platform. Furthermore, the ability to easily tune the braiding time in our platform by varying the magnitude of the currents in the heavy metal layers can be used to investigate the effects of quasiparticle poisoning on the braiding protocol.
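The arithmetic behind such estimates is summarized in the following sketch; the track geometry, heavy-metal resistivity, skyrmion radius and current density are placeholders (not the Appendix~\ref{app:A} values), so the printed numbers only illustrate the scaling of Eq.~(\ref{eqn:power}) and of the braiding time.
\begin{verbatim}
# Back-of-the-envelope braiding time and Joule heating, Eq. (power).
n_hm   = 10          # number of heavy-metal layers (assumed)
L, W   = 2e-6, 2e-7  # active track length and width [m] (assumed)
t_hm   = 1e-9        # thickness of each heavy-metal layer [m] (assumed)
rho_hm = 2e-7        # heavy-metal resistivity [Ohm m] (assumed)
J      = 1e10        # applied current density [A/m^2] (assumed)

r_sk = 35e-9         # skyrmion radius [m] (assumed)
v    = 0.25          # SVP speed during braiding [m/s] (assumed)

braid_path = 50 * r_sk                     # path length of steps I-VI
t_braid    = braid_path / v                # time for one exchange
P = n_hm * L * W * t_hm * rho_hm * J**2    # dissipated power, Eq. (power)

print(f"braiding time    = {t_braid*1e6:.1f} us")
print(f"dissipated power = {P:.2e} W")
\end{verbatim}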
As will be shown in Section~\ref{sec:read}, Vx--Vx distances $<10\xi$ should be sufficient to perform a dispersive readout of the MBS parity in adjacent vortices. For the materials listed in Appendix~\ref{app:A}, the distance of closest approach between two vortices is $r_{min}=40$ nm. The shape of the MML track further limits how close two vortices can be brought together (see step II in Fig.~\ref{fig:braiding}). With the geometry of the track taken into account, a Vx--Vx distance of less than $10\xi$ can still be easily achieved, enough to induce a detectable shift in the cavity's resonance frequency during the dispersive readout.
Figs.~\ref{fig:t0}--\ref{fig:t6} show the results of micromagnetic simulations of skyrmion braiding in a reduced section of the MML (chosen for computational reasons) for the example platform. The details of the simulation are given in Appendix~\ref{app:A}. The simulation results demonstrate the effectiveness of using local SOT to move individual skyrmions and realize braiding. Finally, as discussed in this section, due to the strong skyrmion--vortex binding force, the MBS-hosting vortices in the TSC braid alongside the skyrmions.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{MW_combined.pdf}
\caption{\label{fig:readout} (a) Schematic of our readout process. When two vortices are brought close, microwave transitions can be dispersively driven from the MBS to the excited hybridized CdGM levels (only level $1$ is shown). Parity of the Majorana mode can be inferred from the difference in the cavity frequency shift produced by $\omega_{-\mathcal{M},1}$ and $\omega_{\mathcal{M},1}$ transitions (see Eq.~(\ref{eq:chi})).
The allowed fermion parity conserving transitions are shown in both single-particle and many-particle representations. In the latter, dashed and solid lines denote states in the two fermion parity sectors.
The transition of frequency $\omega_{-\mathcal{M},1}$ (blue arrows) corresponds to breaking a Cooper pair and exciting the MZM and CdGM levels (MZM being initially unoccupied).
When the MZM is occupied, the transition of frequency $\omega_{\mathcal{M},1}$ (red arrow) excites the MZM quasiparticle into the CdGM level.
The dipole transition matrix elements are different for the two processes, enabling parity readout.
(b) MZM-parity-sensitive dipole transition strength versus vortex pair separation. We denote by $g^2_n = |\mathbf{E}_{0}\cdot\mathbf{d}_{n,- \mathcal{M} }|^{2}-|\mathbf{E}_{0}\cdot\mathbf{d}_{n, \mathcal{M} }|^{2}$ the dipole transition strength between the Majorana level and the $n$th CdGM level. We plot the dimensionless strength normalized by $U^2$, with $U = e |\mathbf{E}_{0}| l$.
As expected from MZM hybridization, $g_n$ decays approximately exponentially in the distance between the two vortices. Oscillations in $g^2_n$ represent oscillations in the wave functions of a clean system. In a disordered (real) system the oscillations are expected to be smeared out.
The inset shows the probability density for the MZM hosted by a vortex pair 400 nm apart.
The simulation was done for an effective 2D model (a $1000\times 600~\mathrm{nm}^2$ rectangle) of a 3D topological insulator surface; see Refs.~\cite{PhysRevB.86.155146,PhysRevX.7.031006,MW_inprep}. We used $\xi = 15~\mathrm{nm}$, vortex radius $r = \xi$, and $E_F = 1.125 \Delta$ in the vortex.
}
\end{figure*}
\section{\label{sec:read}Readout}
Quantum information is encoded in the charge parity of the MBS hosted in a pair of vortices, which we propose to read out with a dispersive measurement technique. Fig.~\ref{fig:readout}a summarizes our readout scheme: the top panel shows the single-particle energy levels and the bottom panel shows the many-body energy levels of a pair of vortices brought close to each other. In the dispersive limit, the microwave cavity electric field can drive virtual transitions from the ground-state Majorana manifold to the excited CdGM manifold (only one CdGM level, labeled 1, is considered in the figure). The transitions allowed by selection rules, labeled $\omega_{-\mathcal{M},1}$ and $\omega_{\mathcal{M},1}$, are shown in the many-body spectrum. Each of these virtual transitions causes a state-dependent dispersive shift of the cavity's natural frequency, and the parity of the vortex pair can be inferred from the relative change in the cavity frequency. Note that each of the allowed transitions is truly parity conserving, since microwaves cannot change the number of fermions.
Since the parity states are true eigenstates (as opposed to approximate eigenstates) of the measurement operation, our readout scheme can be dubbed a topological quantum non-demolition technique \cite{PRXQuantum.1.020313, PhysRevB.99.235420}. We now proceed to calculate the dipole coupling strengths of the allowed transitions to the cavity electric field and the corresponding dispersive shift.
In BCS mean-field theory, the coupling to electric field can be described by the Hamiltonian
\begin{equation}
\delta H=-\mathbf{E}(t)\cdot\hat{\mathbf{d}}\,,\quad\hat{\mathbf{d}}=\frac{e}{2}\int d^{2}\mathbf{r}\mathbf{r}\hat{\Psi}^{\dagger}\tau_{z}\hat{\Psi}\,, \label{eq:deltaH}
\end{equation}
where $\mathbf{E}(t) = \mathbf{E}_0 \cos \omega t $ is the microwave-induced time-dependent electric field
which is approximately uniform over the scale of the vortices~\footnote{In Eq.~(\ref{eq:deltaH}), we assume a thin film superconductor that can be approximated by a 2D system. This model can also describe a 3D superconductor when the electric field $\mathbf{E}$ does not penetrate deep into its bulk. }.
The electric field couples to the dipole operator $\hat{\mathbf{d}}$ of the electronic states in the vortices.
We have written it in terms of the electron field operator in
Nambu spinor notation, $\hat{\Psi}=(\psi_{\uparrow},\psi_{\downarrow},\psi_{\downarrow}^{\dagger},-\psi_{\uparrow}^{\dagger})^{T}$; the Pauli matrix $\tau_z$ acts on the particle-hole indices.
At low energies, we expand the field operators in terms of eigenstates as
\begin{equation}
\hat{\Psi}(\mathbf{r})= \phi_{1}(\mathbf{r})\hat{\gamma}_{1}+\phi_{2}(\mathbf{r})\hat{\gamma}_{2} +\Phi_{1}(\mathbf{r})\hat{\Gamma}_{1}+\Phi_{-1}(\mathbf{r})\hat{\Gamma}_{1}^{\dagger}+\dots \,, \label{eq:Psi} \end{equation}
where $\hat{\gamma}_{1,2}$ are the Majorana operators for vortices 1 and 2, and $\hat{\Gamma}_{1}^{(\dagger)}$ is the annihilation (creation) operator for the lowest CdGM state. The corresponding wave functions multiply the operators in Eq.~(\ref{eq:Psi}).
At frequencies much below the level spacing $\delta E$ of the vortex quasiparticle bound states, $\omega \ll \delta E /\hbar$, the microwave field does not excite the quasiparticle states of the vortices.
We shall also assume that these quasiparticle states are not occupied (as could otherwise happen, for example, due to quasiparticle poisoning).
Under these conditions, the vortex pair stays in its ground state manifold consisting of the two states of unoccupied/occupied non-local MBS.
With sufficiently weak microwave driving we can use dispersive readout to measure the charge parity $\sigma_{z} = i\hat{\gamma}_{1}\hat{\gamma}_{2} $~\cite{RevModPhys.93.025005,PRXQuantum.1.020313}. The dispersive Hamiltonian of the resonator-vortex pair system reads~\cite{PRXQuantum.1.020313},
\begin{equation}
H_\text{resonator} + \delta H
= \hat{a}^\dagger \hat{a} (\hbar \omega + \sigma_{z} \hbar \chi) \,, \label{eq:MW+MZM}
\end{equation}
where $\hat{a},\hat{a}^\dagger$ are the harmonic oscillator annihilation and creation operators for the resonator. The MBS parity-dependent dispersive frequency shift is
\begin{equation}
\hbar \chi= \frac{g_1^2}{ \delta E} \left[\frac{\delta E^2}{\delta E^2 - (\hbar \omega)^2} \right] \,, \label{eq:chi}
\end{equation}
where we denote $g_1^2 = |\mathbf{E}_{0}\cdot\mathbf{d}_{1,- \mathcal{M} }|^{2}-|\mathbf{E}_{0}\cdot\mathbf{d}_{1, \mathcal{M}}|^{2}$ and $\omega$ is the resonator bare frequency, $\mathbf{E}_0$ is the electric field amplitude, and $\delta E $ is the energy gap separating the MBS from the first excited CdGM mode. We ignore here the exponentially small energy splitting between the MBS, which would give subleading corrections to $\chi$; we will see that $\chi$ itself will be exponentially small in the vortex separation (due to the parity-sensitive transition dipole matrix elements $\mathbf{d}_{1,- \mathcal{M} }$ and $\mathbf{d}_{1, \mathcal{M} }$ being almost equal).
We denote here $\mathbf{d}_{1, \mathcal{M} } = \langle 1 | \hat{ \mathbf{d}} | \mathcal{M} \rangle $ and $\mathbf{d}_{1, - \mathcal{M} } = \langle \mathcal{M},1 | \hat{ \mathbf{d}} | 0 \rangle $, where the relevant states are the ground state $| 0 \rangle$, the single-particle excited states $ | \mathcal{M} \rangle = \hat{\Gamma}_{\mathcal{M}}^{\dagger} | 0 \rangle$ and $ | 1 \rangle = \hat{\Gamma}_{1}^{\dagger} | 0 \rangle$, and the two-particle excited state $ | \mathcal{M}, 1 \rangle = \hat{\Gamma}_{1}^{\dagger} \hat{\Gamma}_{\mathcal{M}}^{\dagger} | 0 \rangle$; we introduced the annihilation operator $\hat{\Gamma}_{\mathcal{M}}=(\hat{\gamma}_{1}+i\hat{\gamma}_{2})/2 $ for the non-local MBS.
Evaluating the dipole transition matrix elements $ \mathbf{d}_{1, \pm \mathcal{M} }$ microscopically is somewhat involved since proper screening by the superconducting condensate needs to be carefully accounted for and is beyond the BCS mean-field theory~\cite{1996PhRvL..77..566B,2001PhRvL..86..312K,PhysRevB.91.045403,PhysRevB.97.125404,PhysRevX.8.031041}.
Nevertheless, to estimate $\mathbf{d}_{1, \pm \mathcal{M} }$ we can use Eq.~(\ref{eq:deltaH}) by replacing $\mathbf{r} \approx l \hat{\mathbf{z}}$ in it, with $l \approx a_B$ being the effective distance to the image charge in the superconductor and $\hat{\mathbf{z}}$ the surface normal vector~\cite{1996PhRvL..77..566B}. Here $a_B$ denotes the Bohr radius.
We evaluate the dimensionless matrix elements of the effective dipole ``charge'' $\mathbf{d}\cdot \hat{\mathbf{z}} / l$ by using a numerical simulation of the Majorana and CdGM states in a double vortex system depicted in Fig.~\ref{fig:readout}b. The numerical simulations will be detailed in a future publication~\cite{MW_inprep}.
In Fig.~\ref{fig:readout}b we plot the parity-sensitive term $g_n^2$ that largely determines the dispersive shift $\chi$, Eq.~(\ref{eq:chi}).
We find that even a relatively distant vortex pair can provide a parity-dependent coupling $g_n^2 \sim 10^{-2} (e l E_0)^2$.
Since the relevant dipole moment is normal to the superconductor surface, we can couple to the dipole by using a microwave resonator above the surface, producing a large perpendicular electric field.
With a resonator zero-point voltage $V_0 \sim 100\,\mu\mathrm{V}$ at a $\sim 10\,\mathrm{nm}$ distance from the vortices, we obtain $e l E_0 \approx 1\,\mu\mathrm{eV} \cdot (l / \text{\AA}) \approx 2.4 \times 10^2\, h\,\mathrm{MHz} \cdot (l / \text{\AA})$. (We estimate that such high zero-point voltages can be achieved in high-inductance resonators~\cite{PhysRevApplied.5.044004}.)
Taking a low-lying CdGM state with $\delta E \sim 10\,\mu\mathrm{eV}$, we obtain $\chi / 2\pi \sim 20\,\mathrm{MHz} \cdot (l / \text{\AA})^2$, where $l \sim a_B \gtrsim 1\,\text{\AA}$ is the typical dipole size~\cite{1996PhRvL..77..566B}.
We thus see that the MBS vortex parity measurement is well within standard circuit QED measurement capabilities~\cite{RevModPhys.93.025005}.
We note that the above estimate does not include the resonant enhancement, the second factor in Eq.~(\ref{eq:chi}), which may further substantially increase the frequency shift.
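These orders of magnitude can be cross-checked with a short numerical sketch that does include the resonant factor of Eq.~(\ref{eq:chi}); here the coupling is taken at its unsuppressed value $g_1 \approx e l E_0$ (appropriate for closely spaced vortices, whereas a distant pair is reduced by the factor shown in Fig.~\ref{fig:readout}b), and the bare resonator frequency is an assumed placeholder.
\begin{verbatim}
# Order-of-magnitude estimate of the dispersive shift, Eq. (chi).
h     = 4.135667696e-15   # Planck constant [eV s]
el_E0 = 1e-6              # e*l*E0 for l = 1 Angstrom [eV] (see text)
dE    = 10e-6             # Majorana-to-CdGM gap delta E [eV] (assumed)
f_res = 1.0e9             # bare resonator frequency omega/2pi [Hz] (assumed)

hbar_omega = h * f_res    # resonator photon energy [eV]
g1_sq = el_E0 ** 2        # unsuppressed coupling squared [eV^2] (assumed)

chi = (g1_sq / dE) * dE**2 / (dE**2 - hbar_omega**2)   # hbar*chi in eV
print(f"chi/2pi = {chi / h / 1e6:.0f} MHz per (l/Angstrom)^2")
\end{verbatim}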
Finally, we note that the dipole operator $\hat{\mathbf{d}}$ also has a non-zero diagonal matrix element $\mathbf{d}_{\mathcal{M}}$ in the Majorana state~\cite{PhysRevB.97.125404}, leading to a term $\mathbf{E}_0 \cdot\mathbf{d}_{\mathcal{M}} \sigma_z (\hat{a}+\hat{a}^\dagger)$ in Eq.~(\ref{eq:MW+MZM}). This term in principle allows one to perform longitudinal readout of the MBS parity. However, making longitudinal readout practical may require parametric modulation of the coupling, in our case $\mathbf{d}_{\mathcal{M}}$, which may be difficult~\cite{PhysRevB.99.235420,RevModPhys.93.025005}.
\section{\label{sec:summ}Summary}
Measuring braiding statistics is the ultimate method to conclusively verify the existence of non-abelian excitations. We proposed a unified platform to initialize, braid and read out Majorana modes, avoiding abrupt topological-trivial interfaces at each stage. We derived general expressions for the braiding speeds achievable with spin currents, for the distance of closest approach between two Majorana modes, and for the resulting dispersive shift of the cavity resonance frequency. We showed that our setup can be readily realized with existing options for TSC and MML materials.
\begin{acknowledgments}
We would like to thank Axel Hoffman and Mohammad Mushfiqur Rahman for helpful discussions. JIV thanks Dmitry Pikulin and Rafa\l{} Rechci\'{n}ski for helpful discussions on 3D TI simulations. JIV and LPR acknowledge support from the Office of the Under Secretary of Defense for Research and Engineering under award number FA9550-22-1-0354. YPC and PU acknowledge partial support of this work from the US Department of Energy (DOE) Office of Science through the Quantum Science Center (QSC, a National Quantum Information Science Research Center) and NSF ECCS-1944635. STK acknowledges support from the Purdue Research Foundation.
\end{acknowledgments}
\section{Introduction}
Over the last decade, imaging atmospheric Cherenkov telescopes
(IACTs) have emerged as the prime instrument for the detection
of cosmic $\gamma$-rays in the TeV energy regime \cite{review}. Both galactic
and extragalactic sources of such $\gamma$-rays have been firmly
established, and have been identified with pulsars, supernova
remnants, and active galactic nuclei. Going beyond the existence
proof for different classes of $\gamma$-ray sources, interest
is increasingly turning towards precise measurements of the flux
and of the energy spectra, and the search for a break or cutoff
in the spectra.
Precise measurements of flux and spectrum with the IACT technique
represent a non-trivial challenge. Unlike particle detectors
used in high-energy-physics experiments or flown on balloons
or satellites, Cherenkov telescopes cannot be calibrated in a
test beam. Their energy calibration and response function has to
be derived indirectly, usually relying heavily on Monte Carlo
simulations. In addition, conventional single IACTs do not allow
one to unambiguously reconstruct the full geometry of an air shower, i.e.,
its direction in space and its core location; this lack of
constraints makes consistency checks between data and simulation more
difficult.
The stereoscopic observation of air showers with multiple telescopes,
as pioneered in the HEGRA system of Cherenkov telescopes
\cite{hegra_system}, solves
the latter problem. With two telescopes, the shower geometry is fully
determined. With three or more telescopes, the geometry is overdetermined
and one can measure resolution functions etc. \cite{wh_kruger}.
Angular resolution and energy resolution are improved compared to a
single telescope. The stereoscopic reconstruction of air showers
also allows a more detailed study of shower properties.
The analysis presented in the following concentrates on one feature
of $\gamma$-ray induced air showers which is central to the reconstruction
of shower energies, namely the distribution of photon intensity in the
Cherenkov light pool, as a function of the distance to the shower core.
In principle, the distribution of Cherenkov light can be calculated
from first principles, starting from the shower evolution governed by
quantum electrodynamics (QED),
followed by the well-understood emission of Cherenkov light,
and its propagation through the atmosphere. The relevant atmospheric
parameters are quite well known and parameterized
(see, e.g., \cite{standard_atmo,modtran}). Nevertheless, early simulations showed
significant differences between simulation codes \cite{early_sim}.
These discrepancies can be traced to differences in the assumptions and in the
simplifications which are unavoidable to limit the processor time
required to generate a representative sample of air showers. More
recently, simulation codes seem to have converged
(see, e.g., \cite{recent_sim}), and
agree reasonably well among each other. Nevertheless, the experimental
verification of this key input to the interpretation of IACT data seems
desirable. In the past, experimental results concerning the distribution
of Cherenkov light in air showers were mainly limited to hadron-induced showers
of much higher energies.
The study of the distribution of Cherenkov light in TeV $\gamma$-ray
showers was carried out using the HEGRA system of IACTs,
based on the extensive sample of $\gamma$-rays detected from the
AGN Mrk 501 \cite{501_paper}. The Mrk 501 $\gamma$-ray sample
combines high statistics with a very favorable ratio of signal to
cosmic-ray background.
The basic idea is quite simple: the shower direction and core location
are reconstructed based on the different views of the shower. One then selects
showers of a given energy and plots the light yield observed in the
telescopes as a function of the distance to the shower core.
For this event selection, one should not
use the standard procedures for energy reconstruction
\cite{501_paper,wh_kruger}, since these procedures already assume a
certain distribution of the light yield. Instead, a much simpler -- and
bias-free -- method
is used to select events of a given energy: one uses a sample of
events which have
their core at a fixed distance $d_i$ (typically around 100~m)
from a given telescope $i$,
and which generate
a fixed amount of light $a_i$ in this telescope. Located on a circle
around telescope $i$, these showers cover a wide range in core distance
$r_j$ relative to some second telescope $j$, which in case of the
HEGRA array is located between about 70~m and 140~m from telescope $i$.
The measurement of the light yield $a_j$ in this second telescope
then provides, via $a_j(r_j)$, the shape of the Cherenkov
light pool. Lacking an absolute energy scale, this method does
provide the radial dependence, but not the absolute normalization of
the light yield. To determine the distribution of light for pure
$\gamma$-rays, the cosmic-ray background under the Mrk 501 signal
is subtracted on a statistical basis.
The following sections briefly describe the HEGRA IACT system, give
more detail on the Mrk 501 data set and the
analysis technique, and present and summarize the
results.
\section{The HEGRA IACT system}
The HEGRA IACT system is located on the Canary Island of La Palma,
at the Observatorio del Roque de los Muchachos
of the Instituto Astrofisico de Canarias,
at a height of about 2200~m asl.
The system will ultimately comprise five identical telescopes,
four of which are arranged in the corners of a square with roughly
100~m side length; the fifth telescope is located
in the center of the square. Currently, four of the telescopes
are operational in their final form. The remaining telescope (one
of the corner telescopes) is equipped with an older camera and
will be upgraded in the near future; it is not included in the
CT system trigger, and is not used in this analysis.
The system telescopes have
8.5~m$^2$ mirror area,
5~m focal length, and 271-pixel cameras with a pixel
size of $0.25^\circ$ and a field of view of $4.3^\circ$.
The cameras are read out by 8-bit Flash-ADCs, which sample the
pixel signals at a frequency of 120 MHz. More information on
the HEGRA cameras is given in \cite{hermann_padua}. The two-level
trigger requires a coincidence of two neighboring pixels
to trigger a telescope, and a coincidence of at least two
telescope triggers to initiate the readout. The pixel trigger
thresholds were initially set to 10~mV, corresponding to
about 8 photoelectrons, and were later in the 1997 run reduced to
8~mV, resulting in a typical trigger rate of 15~Hz,
and an energy threshold of the system of about
500~GeV. An in-depth discussion of the trigger system can be
found in \cite{trigger_paper}.
During data taking, a light pulser is used to regularly monitor
the gain and timing of the PMTs. FADC pedestals and offsets of
the trigger discriminators are followed continuously. Deviations
in telescope pointing are measured and corrected using bright
stars, resulting in a pointing accuracy
of better than $0.01^\circ$ \cite{pointing_paper}.
In the data analysis, a deconvolution procedure is applied to
the FADC data to generate minimum-length signals, and a signal
amplitude and timing are derived for each pixel
\cite{hess_phd}. With the gain
set to about 1 FADC count per photoelectron, the system provides
a direct linear range of about 200 photoelectrons. For larger signals,
the pulse length as measured by the FADC can be used to recover the
amplitude information, extending the dynamic range to well beyond
500 photoelectrons per pixel. Image pixels are then selected as
those pixels having a signal above a high cut of 6 photoelectrons,
or above a lower cut of 3 photoelectrons if adjacent
to a high pixel. By diagonalizing the `tensor of inertia' of each image, the major
and minor axes of the image are determined, together with the usual {\em width}
and {\em length} parameters \cite{hillas}. Both the image of the source of
a $\gamma$-ray and the point where the shower axis intersects the
telescope plane fall onto the major axes of the images. From the
multiple views of an air shower provided by the different telescopes,
the shower direction is hence determined by superimposing the images
and intersecting their major axes (see \cite{hegra_system,kohnle_paper}
for details); the typical angular resolution is $0.1^\circ$.
Similarly, the core location is derived. The $\gamma$-ray sample is enhanced
by cuts on the {\em mean scaled width} which is calculated by
scaling the measured {\em widths} of all images to the {\em width} expected
for $\gamma$-ray images of a given image {\em size} and distance
to the shower core \cite{501_paper}.
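As an illustration of the second-moment (`tensor of inertia') image parameterization used above, the {\em width} and {\em length} of an image can be computed as in the following sketch (Python); the pixel coordinates and amplitudes are made-up values, and the sketch ignores all other details of the actual analysis chain.
\begin{verbatim}
import numpy as np

# Hillas width/length from the second moments ("tensor of inertia").
x = np.array([0.10, 0.20, 0.30, 0.40, 0.25, 0.35])  # pixel x [deg] (made up)
y = np.array([0.05, 0.10, 0.12, 0.20, 0.02, 0.18])  # pixel y [deg] (made up)
a = np.array([12.0, 30.0, 45.0, 20.0, 8.0, 15.0])   # amplitudes [pe] (made up)

size = a.sum()                                   # total image amplitude
mx, my = np.average(x, weights=a), np.average(y, weights=a)

sxx = np.average((x - mx) ** 2, weights=a)       # second central moments
syy = np.average((y - my) ** 2, weights=a)
sxy = np.average((x - mx) * (y - my), weights=a)

# Eigenvalues of the moment matrix give length (major) and width (minor).
d = np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)
length = np.sqrt((sxx + syy + d) / 2)
width  = np.sqrt((sxx + syy - d) / 2)

print(f"size   = {size:.0f} pe")
print(f"length = {length:.3f} deg, width = {width:.3f} deg")
\end{verbatim}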
To simulate the properties and detection characteristics of the
HEGRA IACT system, detailed Monte-Carlo simulations are available,
using either the ALTAI \cite{altai} or the CORSIKA \cite{corsika}
air shower generator, followed by
a detailed simulation of the Cherenkov emission and propagation and
of the detector. These simulations include details
such as the pulse shapes of the input signals to the pixel trigger
discriminators, or the signal recording using the Flash-ADC system
\cite{telsimu1,telsimu2}. In the following, primarily the ALTAI
generator and the detector simulation of \cite{telsimu1} were used.
Samples of simulated showers were available for zenith angles of
$0^\circ$, $20^\circ$, $30^\circ$, and $45^\circ$. Distributions at
intermediate angles were obtained by suitably scaling and/or interpolating
the distributions.
\section{The Mrk 501 data set}
The extragalactic VHE $\gamma$-ray source Mrk 501
\cite{whipple_501_initial,hegra_501_initial} showed in 1997 significant
activity, with peak flux levels reaching up to 10 times the
flux of the Crab nebula (see \cite{501_rome} for a summary
of experimental results, first HEGRA results are given in \cite{501_paper}).
The telescopes of
the HEGRA IACT system were directed towards Mrk 501 for about
140~h, accumulating a total of about 30000 $\gamma$-ray events
at zenith angles between $10^\circ$ and $45^\circ$. Mrk 501
was typically positioned $0.5^\circ$ off the optical axis of the
telescope,
with the sign of the displacement varying every 20~min. In this
mode, cosmic-ray background can be determined by counting events
reconstructed in an equivalent region displaced from the
optical axis by the same amount, but opposite in direction to the
source region; dedicated off-source runs are no longer required,
effectively doubling the net on-source time.
Given the
angular resolution of about $0.1^\circ$, the separation by
$1^\circ$ of the on-source and off-source regions is fully
sufficient. The relatively large field of view of the cameras
ensures, on the other hand, that images are reliably reconstructed
even with a source displaced from the center of the camera.
The Mrk 501 $\gamma$-ray data \cite{501_paper}
have provided the basis for a number of
systematic studies of the properties of the HEGRA telescopes, and of the
characteristics of $\gamma$-ray induced air showers (see, e.g.,
\cite{wh_kruger}).
For the following analysis, the data set was cleaned by rejecting
runs with poor or questionable weather conditions, with hardware
problems, or with significant deviations of the trigger rates from
typical values. A subset of events was selected where
at least three telescopes had triggered, and had provided useful
images for the reconstruction of the shower geometry.
Fig.~\ref{fig_theta2} shows the distribution of reconstructed
shower axes in the angle $\theta$ relative to the direction towards
Mrk 501. A cut $\theta^2 < 0.05 (^\circ)^2$ was applied to enhance
the $\gamma$-ray content of the sample.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{dts.ps}}
\end{center}
\caption
{Distribution in the square of the angle $\theta$ between the
reconstructed shower axis and the direction
to the source, for events with at least three triggered telescopes.
No cuts on image shapes are applied. The dashed line shows the
distribution for the background region.}
\label{fig_theta2}
\end{figure}
To further reduce the cosmic-ray background, a loose cut on the
{\em mean scaled width} was used. The distributions in the
{\em mean scaled width} are shown in Fig.~\ref{fig_width}; events
were selected requiring a value below 1.25; this cut accepts
virtually all $\gamma$-rays.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{width.eps}}
\end{center}
\caption
{Distribution in the {\em mean scaled width} for $\gamma$-ray
showers (full line) after statistical subtraction of cosmic rays based on
the off-source region, and for cosmic rays (dashed).
The dashed line indicates the cut used to select $\gamma$-ray
candidates.}
\label{fig_width}
\end{figure}
To ensure that the core location of the events is well reconstructed,
the sample was further restricted to events with a core location
within 200~m from the center of the array (Fig.~\ref{fig_core});
in addition, events with $y_{core} > 100$~m were rejected,
corresponding to the area near the fifth telescope currently
not included in the system.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize8.0cm
\epsffile{coreloc.eps}}
\end{center}
\caption
{Distribution of the core locations of events, after the cuts to
enhance the fraction of $\gamma$-rays. Also indicated are the
selection region and the telescope locations.}
\label{fig_core}
\end{figure}
After these cuts, a sample of 11874 on-source events remained, including
a background of 1543 cosmic-ray events, as estimated using the equal-sized
off-source region.
For such a sample of events at TeV energies,
the core location is measured with a
precision of about 6~m to 7~m for events with cores within a
distance up to 100~m from the central telescope; for larger
distances, the resolution degrades gradually, due to
the smaller angles between the different views,
and the reduced image {\em size} (see Fig.~\ref{fig_coreres}).
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{res.ps}}
\end{center}
\caption
{Resolution in the core position as a function of the distance
between the shower core and the central telescope, as determined
from Monte Carlo simulations of $\gamma$-ray showers with
energies between 1 and 2 TeV. The resolution is defined by
fitting a Gaussian to the distribution of differences between the true and
reconstructed coordinates of the shower impact point, projected
onto the $x$ and $y$ axes of the coordinate system. Due to slight
non-Gaussian tails, the rms widths of the distributions are about
20\% larger.}
\label{fig_coreres}
\end{figure}
\section{The shape of the Cherenkov light pool for $\gamma$-ray
events}
Using the technique described in the introduction, the intensity
distribution in the Cherenkov light pool can now simply be traced
by selecting events with the shower core at given distance $r_i$ from
a `reference'
telescope $i$ and with a fixed image {\em size} $a_i$, and plotting the
mean amplitude $a_j$ of telescope $j$ as a function of $r_j$.
However, in this simplest form, the procedure is not very practical,
given the small sample of events remaining after such additional
cuts. To be able to use a larger sample of events, one has to
\begin{itemize}
\item select events with $a_i$ in a certain range, $a_{min} < a_i
< a_{max}$, and plot $a_j/a_i$ vs $r_j$, assuming that the shape of
the light pool does not change rapidly with energy, and that one
can average over a certain energy range
\item repeat the measurement of $a_j(r_j)/a_i$ for different (small) bins
in $r_i$, and combine these measurements after normalizing the distributions
at some fixed distance
\item combine the results obtained for different pairs of telescopes $i,j$.
\end{itemize}
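The selection and averaging steps above can be summarized in a schematic sketch (Python); the event arrays and the simple exponential light-pool model are placeholders standing in for the real, background-subtracted event list, and the binning and normalization choices are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n   = 5000
r_i = rng.uniform(0., 200., n)     # core distance to reference telescope [m]
r_j = rng.uniform(0., 200., n)     # core distance to second telescope [m]
a_i = rng.uniform(50., 1000., n)   # image size in reference telescope [pe]
a_j = a_i * np.exp(-(r_j - r_i) / 80.)  # toy light-pool model (illustration)

# 1) fix the energy via a size cut in the reference telescope and
#    restrict r_i to the plateau region
sel = (a_i > 100.) & (a_i < 200.) & (r_i > 50.) & (r_i < 120.)

# 2) profile a_j / a_i versus r_j
bins  = np.arange(0., 220., 20.)
ratio = a_j[sel] / a_i[sel]
idx   = np.digitize(r_j[sel], bins) - 1
profile = np.array([ratio[idx == k].mean() if np.any(idx == k) else np.nan
                    for k in range(len(bins) - 1)])

# 3) normalize the profile at r_j ~ 100 m
profile /= profile[5]              # bin [100, 120) m
centers = 0.5 * (bins[:-1] + bins[1:])
for c, p in zip(centers, profile):
    print(f"r_j = {c:5.0f} m   <a_j/a_i> (normalized) = {p:5.2f}")
\end{verbatim}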
Care has to be taken not to introduce a bias due to the trigger
condition. For example, one has to ensure that the selection
criterion of at least three triggered telescopes is fulfilled regardless
of whether telescope $j$ has triggered or not, otherwise the selection
might enforce a minimum image {\em size} in telescope $j$.
To avoid truncation of images by the border of the camera, only images
with a maximum distance of $1.5^\circ$ between the image centroid and
the camera center were included, leaving a $0.6^\circ$ margin to
the edge of the field of view. Since
the image of the source is offset by $0.5^\circ$ from the camera
center, a maximum distance of $2.0^\circ$ is possible between the source
image and the centroid of the shower image.
Even after these selections, the comparison between data and shower models
is not completely straightforward. One should not, e.g., simply compare
data to the predicted photon flux at ground level since
\begin{itemize}
\item as is well known, the radial dependence
of the density of Cherenkov light depends on the solid angle over which
the light is collected, i.e., on the field of view of the camera
\item the experimental resolution in the
reconstruction of the shower core position causes a
certain smearing, which is visible in particular near the break
in the light distribution
at the Cherenkov radius
\item the selection of image pixels using the tail cuts results in a
certain loss of photons; this loss is more significant the lower
the image intensity and the more diffuse the image.
\end{itemize}
While the distortion in the measured radial distribution of Cherenkov
light due to the latter two effects is relatively modest (see
Fig.~\ref{fig_pool}), a detailed
comparison with Monte Carlo should take these effects into account by
processing Monte-Carlo generated events using the same procedure as
real data, i.e., by plotting the distance to the reconstructed core
position rather than the true core position, and by applying the same
tail cuts etc.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize11.0cm
\epsffile{mc_final.eps}}
\end{center}
\caption
{Radial distribution of Cherenkov light for TeV $\gamma$-ray
showers, for unrestricted aperture of the photon detector (full line),
for a $2^\circ$ aperture (dashed), and
including the full camera simulation and image processing (shaded).
The curves are normalized at $r \approx $100~m.}
\label{fig_pool}
\end{figure}
For a first comparison between data and simulation,
showers near the zenith (zenith angles between
$10^\circ$ and $15^\circ$) were selected.
The range of distances $r_i$ from the shower core
to the reference telescope was restricted to the plateau region
between 50~m and 120~m. Smaller
distances were not used because of the large fluctuations of image
{\em size} close to the shower core, and larger distances were excluded
because of the relatively steep variation of light yield with
distance. The showers were further required to have an amplitude in the `reference'
telescope $i$ between 100 and 200 photoelectrons, corresponding to
a mean energy of about 1.3~TeV.
Contamination of the Mrk 501 on-source data sample by cosmic
rays was subtracted using an off-source region displaced from
the optical axis by the same amount as the source, but in
the opposite direction. The measured radial distribution
(Fig.~\ref{fig_dat2}(a))
shows the expected features: a relatively flat plateau out to distances
of 120~m, and a rapid decrease in light yield for larger distances.
The errors given in the Figure are purely statistical. To estimate the
influence of systematic errors, one can look at the consistency of
the data for different ranges in distance $r_i$ to the `reference'
telescope, one can compare results for different telescope combinations,
and one can study the dependence on the cuts applied. Usually,
the different data sets were consistent to better than $\pm 0.05$ units;
systematic effects certainly do not exceed a level of $\pm 0.1$ units.
Within these
errors, the measured distribution is reasonably well reproduced
by the Monte-Carlo
simulations.
\begin{figure}[p]
\begin{center}
\mbox{
\epsfysize18.0cm
\epsffile{reng1.eps}}
\end{center}
\caption
{Light yield as a function of core distance, for image {\em size} in
the reference telescope between 100 and 200 photoelectrons (a),
200 and 400 photoelectrons (b), and 400 to 800 photoelectrons (c).
Events were selected
with a distance range between 50~m and 120~m from the reference telescope,
for zenith angles between $10^\circ$ and $15^\circ$.
The shaded bands indicate the Monte-Carlo results.
The distributions are normalized at $r \approx 100$~m. Only
statistical errors are shown.}
\label{fig_dat2}
\end{figure}
\begin{figure}[p]
\begin{center}
\mbox{
\epsfysize20.0cm
\epsffile{rall1.eps}}
\end{center}
\caption
{Light yield as a function of core distance, for zenith angles between
$10^\circ$ and $15^\circ$ (a), $15^\circ$ and $25^\circ$ (b), $25^\circ$ and
$35^\circ$ (c), and $35^\circ$ and $45^\circ$ (d). Events were selected
with a distance range between 50~m and 120~m from the reference telescope,
and an image {\em size} between 100 and 200 photoelectrons in the reference
telescope.
The shaded bands indicate the Monte-Carlo results.
The distributions are normalized at $r \approx 100$~m.
Only statistical errors are shown.}
\label{fig_dat3}
\end{figure}
Shower models predict that the distribution
of light intensity varies (slowly) with the shower
energy and with the zenith angle. Fig.~\ref{fig_dat2} compares the
distributions obtained for different {\em size} ranges $a_i$ of
100 to 200, 200 to 400, and 400 to 800 photoelectrons at distances
between 50~m and 120~m, corresponding
to mean shower energies of about 1.3, 2.5, and 4.5 TeV, respectively.
We note that the intensity close to the shower core increases with
increasing energy. This component of the Cherenkov light is generated
by penetrating particles near the shower core. Their number grows
rapidly with increasing shower energy, and correspondingly decreasing
height of the shower maximum. The increase in the mean light intensity
at small distances from the shower core is primarily caused by
long tails of the distribution of image {\em sizes} towards large {\em size}; the
median {\em size} is more or less constant.
The observed trends are well reproduced by the
Monte-Carlo simulations.
The dependence on zenith angle is
illustrated in Fig.~\ref{fig_dat3}, where zenith angles between
$10^\circ$ and $15^\circ$, $15^\circ$ and $25^\circ$, $25^\circ$ and
$35^\circ$, and $35^\circ$ and $45^\circ$ are compared. Events were
again selected for an image {\em size} in the `reference' telescope
between 100 and 200 photoelectrons, in a distance range of 50~m to
120~m \footnote{Core
distance is always measured in the plane perpendicular to the shower
axis}. The corresponding
mean shower energies for the four ranges in zenith angle are about
1.3~TeV, 1.5~TeV, 2~TeV, and 3~TeV.
For increasing zenith angles, the distribution of Cherenkov light
flattens for small radii, and the diameter of the light pool
increases. Both effects are expected, since for larger zenith
angles the distance between the telescope and the shower maximum
grows, reducing the number of penetrating particles, and resulting
in a larger Cherenkov radius. The simulations properly account for
this behaviour.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{rms.eps}}
\end{center}
\caption
{Relative variation in the {\em size} ratio $a_j/a_i$ as a function
of $r_j$, for $r_i$ in the range 50~m to 120~m, and for image {\em size}
in the `reference' telescope between 100 and 200 photoelectrons.
Full circles refer to zenith angles between $10^\circ$ and $15^\circ$,
open circles to zenith angles between $25^\circ$ and $35^\circ$.}
\label{fig_rms}
\end{figure}
It is also of some interest to consider the fluctuations of
image {\em size}, $\Delta(a_j/a_i)$.
Fig.~\ref{fig_rms} shows the relative rms fluctuation in the
{\em size} ratio, as a function of $r_j$, for small ($10^\circ$ to
$15^\circ$) and for larger ($25^\circ$ to $35^\circ$) zenith
angles. The fluctuations are minimal near the Cherenkov radius;
they increase for larger distances, primarily due to the smaller
light yield and hence larger relative fluctuations in the number
of photoelectrons. In particular for the small zenith angles,
the fluctuations also increase for small radii, reflecting the
large fluctuations associated with the penetrating tail of the
air showers. For larger zenith angles, this effect is much reduced,
since now all shower particles are absorbed well above the telescopes;
more detailed studies show that already zenith angles of $20^\circ$
make a significant difference.
\section{Summary}
The stereoscopic observation of $\gamma$-ray induced air showers
with the HEGRA Cherenkov telescopes allowed for the first time
the measurement of the light distribution in the Cherenkov light
pool at TeV energies, providing a consistency check of one of the
key inputs for the calculation of shower energies based on the
intensity of the Cherenkov images. The light distribution shows a
characteristic variation with shower energy and with zenith angle.
Data are well reproduced by the Monte-Carlo
simulations.
\section*{Acknowledgements}
The support of the German Ministry for Research
and Technology BMBF and of the Spanish Research Council
CICYT is gratefully acknowledged. We thank the Instituto
de Astrofisica de Canarias for the use of the site and
for providing excellent working conditions. We gratefully
acknowledge the technical support staff of Heidelberg,
Kiel, Munich, and Yerevan.
\section{Introduction}
\label{sec:introduction}
A plethora of observations has confirmed the standard $\Lambda$CDM framework as the most economical and successful model describing our current universe.
This simple picture (pressureless dark matter, baryons and a cosmological constant representing the vacuum energy) has been shown to provide an excellent fit to cosmological data.
However, there are a number of inconsistencies that persist and, instead of diluting with improved precision measurements, gain significance~\cite{Freedman:2017yms,DiValentino:2020zio,DiValentino:2020vvd,DiValentino:2020srs,Freedman:2021ahq,DiValentino:2021izs,Schoneberg:2021qvd,Nunes:2021ipq,Perivolaropoulos:2021jda,Shah:2021onj}.
The most exciting (i.e.\ probably not due to systematics) and most statistically significant ($4$--$6\sigma$) tension in the literature is the so-called Hubble constant tension, which refers to the discrepancy between cosmological predictions and low-redshift estimates of $H_0$~\cite{Verde:2019ivm,Riess:2019qba,DiValentino:2020vnx}.
Within the $\Lambda$CDM scenario, Cosmic Microwave Background (CMB) measurements from the Planck satellite provide a value of $H_0=67.36\pm 0.54$~km s$^{-1}$ Mpc$^{-1}$ at 68\%~CL~\cite{Planck:2018vyg}.
Near universe, local measurements of $H_0$, using the cosmic distance ladder calibration of Type Ia Supernovae with Cepheids, as those carried out by the SH0ES team, provide a measurement of the Hubble constant $H_0=73.2\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ at 68$\%$~CL~\cite{Riess:2020fzl}.
This problematic $\sim 4\sigma$ discrepancy is aggravated when considering other late-time estimates of $H_0$.
For instance, measurements from the Megamaser Cosmology Project~\cite{Pesce:2020xfe}, or those exploiting Surface Brightness Fluctuations~\cite{Blakeslee:2021rqi} only exacerbate this tension~\footnote{%
Other estimates are unable to discriminate between the nearby-universe and CMB measurements. These include results from the Tip of the Red Giant Branch~\cite{Freedman:2021ahq},
from the astrophysical strong lensing observations~\cite{Birrer:2020tax}
or from gravitational wave events~\cite{Abbott:2017xzu}.}.
As previously mentioned, the SH0ES collaboration exploits the cosmic distance ladder calibration of Type Ia Supernovae, which means that these observations do not provide a direct extraction of the Hubble parameter.
More concretely, the SH0ES team measures the absolute peak magnitude $M_B$ of Type Ia Supernovae \emph{standard candles} and then translates these measurements into an estimate of $H_0$ by means of the magnitude-redshift relation of the Pantheon Type Ia Supernovae sample~\cite{Scolnic:2017caz}.
Therefore, strictly speaking, the SH0ES team does not directly extract the value of $H_0$, and there have been arguments in the literature aiming to translate the Hubble constant tension into a Type Ia Supernovae absolute magnitude tension $M_B$~\cite{Camarena:2019rmj,Efstathiou:2021ocp,Camarena:2021jlr}.
In this regard, late-time exotic cosmologies have been questioned as possible solutions to the Hubble constant tension~\cite{Efstathiou:2021ocp,Camarena:2021jlr}, since within these scenarios, it is possible that the supernova absolute magnitude $M_B$ used to derive the low redshift estimate of $H_0$ is no longer compatible with the $M_B$ needed to fit supernovae, BAO and CMB data.
A number of studies have advocated using a prior on the intrinsic magnitude rather than on the Hubble constant $H_0$ in the statistical analyses~\cite{Camarena:2021jlr,Schoneberg:2021qvd}.
Following the very same logic as these previous analyses, we reassess here the potential of interacting dark matter-dark energy cosmologies~\cite{Amendola:1999er}
in resolving the Hubble constant tension (see \cite{Kumar:2016zpg, Murgia:2016ccp, Kumar:2017dnp, DiValentino:2017iww, Yang:2018ubt, Yang:2018euj, Yang:2019uzo, Kumar:2019wfs, Pan:2019gop, Pan:2019jqh, DiValentino:2019ffd, DiValentino:2019jae, DiValentino:2020leo, DiValentino:2020kpf, Gomez-Valent:2020mqn, Yang:2019uog, Lucca:2020zjb, Martinelli:2019dau, Yang:2020uga, Yao:2020hkw, Pan:2020bur, DiValentino:2020vnx, Yao:2020pji, Amirhashchi:2020qep, Yang:2021hxg, Gao:2021xnk, Lucca:2021dxo, Kumar:2021eev,Yang:2021oxc,Lucca:2021eqy,Halder:2021jiv}
and references therein)
and/or the intrinsic magnitude $M_B$ tension, demonstrating explicitly from a full analysis that the results are completely independent of whether a prior on $M_B$ or on $H_0$ is assumed (see also the recent~\cite{Nunes:2021zzi}).
\section{Theoretical framework}
\label{sec:theory}
We adopt a flat cosmological model described by the Friedmann-Lema\^{i}tre-Robertson-Walker metric.
A possible parameterization of a dark matter-dark energy interaction is provided by the following expressions~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{eqnarray}
\label{eq:conservDM}
\nabla_\mu T^\mu_{(dm)\nu} &=& Q \,u_{\nu}^{(dm)}/a~, \\
\label{eq:conservDE}
\nabla_\mu T^\mu_{(de)\nu} &=&-Q \,u_{\nu}^{(dm)}/a~.
\end{eqnarray}
In the equations above, $T^\mu_{(dm)\nu}$ and $T^\mu_{(de)\nu}$ represent the energy-momentum tensors for the dark matter and dark energy components respectively, the function $Q$ is the interaction rate between the two dark components, and $u_{\nu}^{(dm)}$ represents the dark matter four-velocity.
In what follows we shall restrict ourselves to the case in which the
interaction rate is proportional to the dark energy density $\rho_{de}$~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{equation}
Q=\ensuremath{\delta{}_{DMDE}}\mathcal{H} \rho_{de}~,
\label{rate}
\end{equation}
where $\ensuremath{\delta{}_{DMDE}}$ is a dimensionless coupling parameter and
$\mathcal{H}=\dot{a}/a$ is the conformal Hubble rate~\footnote{The dot indicates a derivative with respect to conformal time, $d\tau=dt/a$.}.
The background evolution equations in the coupled model considered
here read~\cite{Gavela:2010tm}
\begin{eqnarray}
\label{eq:backDM}
\dot{{\rho}}_{dm}+3{\mathcal H}{\rho}_{dm}
&=&
\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~,
\\
\label{eq:backDE}
\dot{{\rho}}_{de}+3{\mathcal H}(1+\ensuremath{w_{\rm 0,fld}}){\rho}_{de}
&=&
-\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~.
\end{eqnarray}
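For illustration, the background system in Eqs.~(\ref{eq:backDM})--(\ref{eq:backDE}) can be integrated directly once rewritten in terms of $\ln a$ (dividing both equations by $\mathcal{H}$). The following minimal Python sketch does so for assumed, purely illustrative values of the density parameters and of the coupling; it is not the pipeline used in our analyses, which relies on \texttt{CLASS}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) values, in units of the critical density today;
# these are placeholders, not our best-fit parameters.
rho_dm0, rho_de0 = 0.26, 0.69   # coupled dark matter and dark energy today
w0    = -0.999                  # dark energy equation of state (Model A-like)
delta = -0.2                    # dimensionless DM-DE coupling (negative for A, B)

def drho_dlna(lna, rho):
    """Coupled background equations rewritten as d(rho)/d(ln a)."""
    rho_dm, rho_de = rho
    d_dm = -3.0 * rho_dm + delta * rho_de                # dark matter
    d_de = -3.0 * (1.0 + w0) * rho_de - delta * rho_de   # dark energy
    return [d_dm, d_de]

# Integrate backwards from today (ln a = 0) to z = 1000
sol = solve_ivp(drho_dlna, [0.0, np.log(1.0 / 1001.0)], [rho_dm0, rho_de0],
                dense_output=True, rtol=1e-8)

for lna in np.linspace(0.0, np.log(1.0 / 1001.0), 5):
    rho_dm, rho_de = sol.sol(lna)
    z = np.exp(-lna) - 1.0
    print(f"z = {z:8.1f}  rho_dm = {rho_dm:12.4e}  rho_de = {rho_de:12.4e}")
\end{verbatim}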
The evolution of the dark matter and dark energy density perturbations and of their velocity divergences is described in \cite{DiValentino:2019jae} and references therein.
It has been shown in the literature that this model is free of instabilities
if the sign of the coupling $\ensuremath{\delta{}_{DMDE}}$ and the sign of $(1+\ensuremath{w_{\rm 0,fld}})$ are opposite,
where $\ensuremath{w_{\rm 0,fld}}$ refers to the dark energy equation of state~\cite{He:2008si,Gavela:2009cy}.
In order to satisfy such stability conditions, we explore three possible scenarios, all of them with a redshift-independent equation of state.
In Model A, the equation of state $\ensuremath{w_{\rm 0,fld}}$ is fixed to $-0.999$.
Consequently, since $(1+\ensuremath{w_{\rm 0,fld}}) >0$, in order to ensure an instability-free perturbation evolution, the dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$ is allowed to vary in a negative range.
In Model B, $\ensuremath{w_{\rm 0,fld}}$ is allowed to vary but we ensure that the condition $(1+\ensuremath{w_{\rm 0,fld}})>0$ is always satisfied.
Therefore, the coupling parameter $\ensuremath{\delta{}_{DMDE}}$ is also negative.
In Model C, instead, the dark energy equation of state is phantom ($\ensuremath{w_{\rm 0,fld}}<-1$), therefore the dark matter-dark energy coupling is taken as positive to avoid early-time instabilities.
We shall present separately the cosmological constraints for these three models, together with those corresponding to the canonical $\Lambda$CDM.
\begin{table}[t]
\centering
\begin{tabular}{c|c|c}
Model & Prior $\ensuremath{w_{\rm 0,fld}}$ & Prior $\ensuremath{\delta{}_{DMDE}}$ \\
\hline
A & -0.999 & [-1.0, 0.0]\\
B & [-0.999, -0.333] & [-1.0, 0.0] \\
C & [-3, -1.001]& [0.0, 1.0] \\
\end{tabular}
\caption{Priors on $\ensuremath{w_{\rm 0,fld}}$ and $\ensuremath{\delta{}_{DMDE}}$ in models A, B, and C. In Model A the value $-0.999$ indicates that $\ensuremath{w_{\rm 0,fld}}$ is kept fixed.}
\label{tab:priors}
\end{table}
\section{Datasets and Methodology}
\label{sec:data}
In this Section, we present the data sets and methodology employed to obtain the observational constraints on the model parameters by performing Bayesian Monte Carlo Markov Chain (MCMC) analyses.
In order to constrain the parameters, we use the following data sets:
\begin{itemize}
\item The Cosmic Microwave Background (CMB) temperature and polarization power spectra from the final release of Planck 2018, in particular we adopt the plikTTTEEE+lowl+lowE likelihood \cite{Aghanim:2018eyx,Aghanim:2019ame}, plus the CMB lensing reconstruction from the four-point correlation function~\cite{Aghanim:2018oex}.
\item Type Ia Supernovae distance moduli measurements from the \textit{Pantheon} sample~\cite{Scolnic:2017caz}. These measurements constrain the uncalibrated luminosity distance $H_0d_L(z)$, or in other words the slope of the late-time expansion rate (which in turn constrains the current matter energy density, $\Omega_{\rm 0,m}$). We refer to this dataset as \textit{SN}.
\item Baryon Acoustic Oscillations (BAO) distance and expansion rate measurements from the 6dFGS~\cite{Beutler:2011hx}, SDSS-DR7 MGS~\cite{Ross:2014qpa}, BOSS DR12~\cite{Alam:2016hwk} galaxy surveys,
as well as from the eBOSS DR14 Lyman-$\alpha$ (Ly$\alpha$) absorption~\cite{Agathe:2019vsu} and Ly$\alpha$-quasars cross-correlation~\cite{Blomqvist:2019rah}.
These consist of isotropic BAO measurements of $D_V(z)/r_d$
(with $D_V(z)$ and $r_d$ the spherically averaged volume distance and sound horizon at baryon drag, respectively)
for 6dFGS and MGS, and anisotropic BAO measurements of $D_M(z)/r_d$ and $D_H(z)/r_d$
(with $D_M(z)$ the comoving angular diameter distance and $D_H(z)=c/H(z)$ the radial distance)
for BOSS DR12, eBOSS DR14 Ly$\alpha$, and eBOSS DR14 Ly$\alpha$-quasars cross-correlation.
\item A Gaussian prior on $M_B= -19.244 \pm 0.037$~mag~\cite{Camarena:2021jlr}, corresponding to the SN measurements from SH0ES.
\item A Gaussian prior on the Hubble constant $H_0=73.2\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ in
agreement with the measurement obtained by the
SH0ES collaboration in~\cite{Riess:2020fzl}.
\end{itemize}
For the sake of brevity, data combinations are indicated as CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH) and CMB+SN+BAO+$M_B$ (CSBM).
Cosmological observables are computed with \texttt{CLASS}~\cite{Blas:2011rf,Lesgourgues:2011re}.
In order to derive bounds on the proposed scenarios, we modify the efficient and well-known cosmological package \texttt{MontePython}~\cite{Brinckmann:2018cvx}, supporting the Planck 2018 likelihood~\cite{Planck:2019nip}.
We make use of CalPriorSNIa, a module for \texttt{MontePython}, publicly available at \url{https://github.com/valerio-marra/CalPriorSNIa}, that implements an effective calibration prior on the absolute magnitude of Type Ia Supernovae~\cite{Camarena:2019moy,Camarena:2021jlr}.
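To make explicit how the two priors enter the analyses, the following schematic Python sketch shows that a Gaussian prior on either $H_0$ or $M_B$ simply adds a quadratic term to the total $\chi^2$; the function names and the trial parameter values are hypothetical, and this is only an illustration of the mechanism, not the actual \texttt{MontePython} or CalPriorSNIa code.
\begin{verbatim}
def gaussian_prior_chi2(value, mean, sigma):
    """chi^2 contribution of a Gaussian prior on a single parameter."""
    return ((value - mean) / sigma) ** 2

# SH0ES-derived priors used in this work
H0_PRIOR = (73.2, 1.3)        # km/s/Mpc
MB_PRIOR = (-19.244, 0.037)   # mag

def total_chi2(chi2_csb, H0=None, M_B=None, use="H0"):
    """Schematic: add either the H0 prior (CSBH) or the M_B prior (CSBM)
    to the chi^2 of the CMB+SN+BAO (CSB) combination."""
    chi2 = chi2_csb
    if use == "H0":
        chi2 += gaussian_prior_chi2(H0, *H0_PRIOR)
    elif use == "MB":
        chi2 += gaussian_prior_chi2(M_B, *MB_PRIOR)
    return chi2

# Placeholder values for a trial point in parameter space
print(total_chi2(3819.5, H0=68.2, use="H0"))
print(total_chi2(3819.5, M_B=-19.40, use="MB"))
\end{verbatim}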
\section{Main results and discussion}
\label{sec:results}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{H0.pdf}
\caption{Posterior distribution of the Hubble parameter in the $\Lambda$CDM model (black) and in interacting cosmologies, with priors on the parameters as given in Tab.~\ref{tab:priors}.
We show the constraints obtained within model A (green), model B (red) and model C (blue)
for the CMB+SN+BAO data combination (solid lines),
CMB+SN+BAO+$H_0$ (dashed lines)
and CMB+SN+BAO+$M_B$ (dotted lines).}
\label{fig:h0}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{0_PlSB-vs-0_PlSBH-vs-0_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within the canonical $\Lambda$CDM picture, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_LCDM}
\end{center}
\end{figure*}
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.1193\pm0.0010$ & $0.1183\pm0.0009$ & $0.1183_{-0.0009}^{+0.0008}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.6889_{-0.0061}^{+0.0057}$ & $0.6958_{-0.0050}^{+0.0056}$ & $0.6956_{-0.0049}^{+0.0057}$ \\
$\Omega_{\rm 0,m}$ & $0.3111_{-0.0057}^{+0.0061}$ & $0.3042_{-0.0056}^{+0.0050}$ & $0.3044_{-0.0057}^{+0.0049}$ \\
$M_B$ & $-19.42\pm0.01$ & $-19.40\pm0.01$ & $-19.40\pm0.01$ \\
$H_0$ & $67.68_{-0.46}^{+0.41}$ & $68.21_{-0.41}^{+0.42}$ & $68.20_{-0.41}^{+0.41}$ \\
$\sigma_8$ & $0.8108_{-0.0058}^{+0.0061}$ & $0.8092_{-0.0065}^{+0.0060}$ & $0.8090_{-0.0059}^{+0.0064}$ \\
\hline
minimum $\chi^2$ & $3819.46$ & $3836.50$ & $3840.44$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the standard $\Lambda$CDM paradigm. We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_LCDM}
\end{table}
We start by discussing the results obtained within the canonical $\Lambda$CDM scenario. Table~\ref{tab:model_LCDM} presents the mean values and the $1\sigma$ errors on a number of different cosmological parameters.
Namely, we show the constraints on
$\omega_{cdm }\equiv\Omega_{0,cdm} h^2$,
the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$,
the current matter energy density $\Omega_{\rm 0,m}$,
the Supernovae Ia intrinsic magnitude $M_B$,
the Hubble constant $H_0$ and the clustering parameter $\sigma_8$
arising from the three data combinations considered here and described above:
CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH), CMB+SN+BAO+$M_B$ (CSBM).
Interestingly, \emph{all} the parameters experience the very same shift regardless of whether the prior is adopted on the Hubble constant or on the intrinsic Supernovae Ia magnitude $M_B$.
The mean value of $H_0$ coincides for both the CSBH and the CSBM data combinations, as one can clearly see from the dashed and dotted black lines in Fig.~\ref{fig:h0}.
Figure~\ref{fig:triangle_LCDM} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities on the parameters shown in Tab.~\ref{tab:model_LCDM}.
Notice that all the parameters are equally shifted when adding the prior on $H_0$ or on $M_B$, except for $\sigma_8$, which remains almost unchanged. Notice also that the value of the current matter density, $\Omega_{\rm 0,m}$, is smaller when a prior from SN measurements is considered:
due to the larger $H_0$ value that these measurements imply, and in order to keep the CMB peak structure unaltered, the value of $\Omega_{\rm 0,m}$ must be smaller so that the product $\Omega_{\rm 0,m} h^2$ is barely shifted.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.107_{-0.005}^{+0.011}$ & $0.09\pm0.01$ & $0.096_{-0.009}^{+0.011}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.723_{-0.028}^{+0.017}$ & $0.758_{-0.024}^{+0.026}$ & $0.754_{-0.028}^{+0.025}$ \\
$\Omega_{\rm 0,m}$ & $0.277_{-0.017}^{+0.028}$ & $0.242_{-0.026}^{+0.024}$ & $0.246_{-0.025}^{+0.028}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.116_{-0.044}^{+0.100}$ & $-0.219_{-0.086}^{+0.083}$ & $-0.203_{-0.087}^{+0.093}$ \\
$M_B$ & $-19.40\pm0.02$ & $-19.38_{-0.01}^{+0.02}$ & $-19.37\pm0.02$ \\
$H_0$ & $68.59_{-0.79}^{+0.65}$ & $69.73_{-0.72}^{+0.71}$ & $69.67_{-0.85}^{+0.75}$ \\
$\sigma_8$ & $0.90_{-0.08}^{+0.04}$ & $1.01_{-0.11}^{+0.08}$ & $1.00_{-0.12}^{+0.07}$ \\
\hline
minimum $\chi^2$ & $3819.86$ & $3831.90$ & $3835.86$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model A, see Tab.~\ref{tab:priors}. We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_A}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{A_PlSB-vs-A_PlSBH-vs-A_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model A, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_A}
\end{center}
\end{figure*}
We focus now on Model A, which refers to an interacting cosmology with $\ensuremath{w_{\rm 0,fld}}=-0.999$ and $\ensuremath{\delta{}_{DMDE}}<0$.
Table~\ref{tab:model_A} presents the mean values and the $1\sigma$ errors on the same cosmological parameters listed above, with the addition of the coupling parameter $\ensuremath{\delta{}_{DMDE}}$, for the same three data combinations already discussed.
Notice again that all the parameters are equally shifted to either smaller or larger values, regardless of whether the prior is adopted on $H_0$ or on $M_B$. In this case the shift in the Hubble parameter is larger than that observed within the $\Lambda$CDM model, as one can notice from the green curves depicted in
Fig.~\ref{fig:h0}.
Interestingly, we observe a $2\sigma$ indication in favor of a non-zero value of the coupling $\ensuremath{\delta{}_{DMDE}}$ when considering the CSBH and the CSBM data combinations.
Indeed, while the value of the minimum $\chi^2$ is almost equal to that obtained in the $\Lambda$CDM framework for the CSB data analyses, when adding either a prior on $H_0$ or on $M_B$,
the minimum $\chi^2$ value is \emph{smaller} than that obtained for the standard cosmological picture: therefore, the addition of a coupling \emph{improves} the overall fit.
Figure~\ref{fig:triangle_A} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model A.
It can be noticed that the prior on the Hubble constant and on the intrinsic magnitude lead to the very same shift, and the main conclusion is therefore prior-independent:
there is a $\sim 2\sigma$ indication for a non-zero dark matter-dark energy coupling when considering either $H_0$ or $M_B$ measurements,
\emph{and} the value of the Hubble constant is considerably larger, alleviating the $H_0$ tension.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.077_{-0.014}^{+0.036}$ & $0.061_{-0.019}^{+0.034}$ & $0.065_{-0.017}^{+0.036}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.785_{-0.081}^{+0.034}$ & $0.825_{-0.070}^{+0.045}$ & $0.818_{-0.075}^{+0.041}$ \\
$\Omega_{\rm 0,m}$ & $0.215_{-0.034}^{+0.081}$ & $0.174_{-0.044}^{+0.069}$ & $0.182_{-0.041}^{+0.075}$ \\
$\ensuremath{w_{\rm 0,fld}}$ & $-0.909_{-0.090}^{+0.026}$ & $-0.917_{-0.082}^{+0.026}$ & $-0.918_{-0.081}^{+0.026}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.35_{-0.14}^{+0.26}$ & $-0.45_{-0.16}^{+0.22}$ & $-0.43_{-0.15}^{+0.24}$ \\
$M_B$ & $-19.41\pm0.02$ & $-19.38\pm0.02$ & $-19.38\pm0.02$ \\
$H_0$ & $68.28_{-0.85}^{+0.79}$ & $69.68_{-0.75}^{+0.71}$ & $69.57_{-0.76}^{+0.75}$ \\
$\sigma_8$ & $1.30_{-0.51}^{+0.01}$ & $1.60_{-0.76}^{+0.06}$ & $1.53_{-0.71}^{+0.03}$ \\
\hline
minimum $\chi^2$ & $ 3819.96$ & $3832.28$ & $3836.24$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$,
the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model B, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_B}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{B_PlSB-vs-B_PlSBH-vs-B_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model B, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_B}
\end{center}
\end{figure*}
Focusing now on Model B, which assumes a negative coupling $\ensuremath{\delta{}_{DMDE}}$ and a constant, but freely varying, dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$ within the $\ensuremath{w_{\rm 0,fld}}>-1$ region,
we notice again the same shift in the cosmological parameters, regardless of whether the prior is introduced on the Hubble parameter ($H_0$) or on the Supernovae Ia intrinsic magnitude ($M_B$), as can be seen from Tab.~\ref{tab:model_B}.
As in Model A, the value of $H_0$ in this interacting cosmology is larger than within the $\Lambda$CDM framework (see the red curves in Fig.~\ref{fig:h0}),
albeit slightly smaller than in Model A, due to the strong anti-correlation between $\ensuremath{w_{\rm 0,fld}}$ and $H_0$~\cite{DiValentino:2016hlg,DiValentino:2019jae}.
Consequently, a larger value of $\ensuremath{w_{\rm 0,fld}}>-1$ implies a lower value of $H_0$.
Nevertheless, a $2\sigma$ preference for a non-zero value of the dark matter-dark energy coupling is present also in this case, and also when the CSB dataset is considered:
for the three data combinations presented here, there is always a preference for a non-zero dark matter-dark energy coupling.
Notice that the minimum $\chi^2$ in Model B is smaller than that corresponding to the minimal $\Lambda$CDM framework, but slightly larger than that of Model A, which is nested in Model B. The differences between the minimum $\chi^2$ in Model A and Model B, however, are small
enough to be considered as numerical fluctuations. Since, as previously stated, $\ensuremath{w_{\rm 0,fld}}$ and $H_0$ are strongly anti-correlated, a more negative value of the dark energy equation of state (i.e.\ $\ensuremath{w_{\rm 0,fld}}=-0.999$ as in Model A, close to the prior limit) is preferred by both the CSBH and the CSBM data combinations.
In Fig.~\ref{fig:triangle_B} we depict the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained for Model B.
From a comparison with Fig.~\ref{fig:triangle_LCDM}, and also by confronting the mean values of Tab.~\ref{tab:model_B} with those shown in Tab.~\ref{tab:model_LCDM} (and, to a minor extent, with those in Tab.~\ref{tab:model_A}),
one can notice that the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much larger.
The reason for this is related to the lower value for the present matter energy density $\Omega_{\rm 0,m}$ (the values are also shown in the tables), which is required within the interacting cosmologies when the dark matter-dark energy coupling is negative.
In the context of a universe with a negative dark coupling, indeed, there is an energy flow from dark matter to dark energy.
Consequently, the (dark) matter content in the past is higher than in the standard $\Lambda$CDM scenario and the amount of intrinsic (dark) matter needed today is lower, because of the extra contribution from the dark energy sector.
In a flat universe, this translates into a much higher value of $\ensuremath{\Omega_{\rm 0,fld}}$.
On the other hand, a lower value of $\Omega_{\rm 0,m}$ requires a larger value of the clustering parameter $\sigma_8$ to be able to satisfy the overall normalization of the matter power spectrum. In any case, we find again that the addition of a prior on either $H_0$ or $M_B$ leads to exactly the very same shift for all the cosmological parameters.
Therefore, Model B also provides an excellent solution to the Hubble constant tension,
although at the expense of a very large $\sigma_8$.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.138_{-0.015}^{+0.008}$ & $0.137_{-0.016}^{+0.007}$ & $0.135_{-0.013}^{+0.008}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.655_{-0.021}^{+0.032}$ & $0.671_{-0.018}^{+0.031}$ & $0.675_{-0.018}^{+0.027}$ \\
$\Omega_{\rm 0,m}$ & $0.345_{-0.032}^{+0.021}$ & $0.329_{-0.031}^{+0.018}$ & $0.325_{-0.027}^{+0.018}$ \\
$\ensuremath{w_{\rm 0,fld}}$ & $-1.087_{-0.042}^{+0.051}$ & $-1.131_{-0.044}^{+0.053}$ & $-1.117_{-0.044}^{+0.048}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $0.183_{-0.180}^{+0.061}$ & $0.173_{-0.170}^{+0.051}$ & $0.150_{-0.150}^{+0.051}$ \\
$M_B$ & $-19.41\pm0.02$ & $-19.38\pm0.02$ & $-19.37\pm0.02$ \\
$H_0$ & $68.29_{-0.91}^{+0.66}$ & $69.74_{-0.73}^{+0.75}$ & $69.67_{-0.77}^{+0.78}$ \\
$\sigma_8$ & $0.735_{-0.057}^{+0.045}$ & $0.748_{-0.041}^{+0.068}$ & $0.755_{-0.047}^{+0.051}$ \\
\hline
minimum $\chi^2$ & $3818.24$ & $3830.56$ & $3835.10$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$,
the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model C, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_C}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{C_PlSB-vs-C_PlSBH-vs-C_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model C, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_C}
\end{center}
\end{figure*}
Finally, Tab.~\ref{tab:model_C} shows the mean values and the $1\sigma$ errors on the usual cosmological parameters explored along this study, for Model C.
Notice that this model benefits from both its interacting nature and from the fact that $\ensuremath{w_{\rm 0,fld}}<-1$ and $\ensuremath{\delta{}_{DMDE}}>0$.
Both features of the dark energy sector have been shown to be excellent solutions to the Hubble constant problem.
As in the previous cases, the shift in the cosmological parameters induced by the addition of a prior is independent of its nature, i.e.\ it is independent of whether a prior on $H_0$ or on $M_B$ is adopted.
Within this model, the value of the Hubble constant is naturally larger than within the $\Lambda$CDM model (see the blue lines in Fig.~\ref{fig:h0}),
regardless of the data sets assumed in the analyses.
Despite its phantom nature (in this particular case $\ensuremath{w_{\rm 0,fld}}<-1$ to ensure an instability-free evolution of perturbations), Model C provides the \emph{best fit to all of the data combinations explored here, performing even better than} the minimal $\Lambda$CDM picture,
as one can clearly notice from the last row of Tab.~\ref{tab:model_C}.
This fact makes Model C a very attractive cosmological scenario which can provide a solution to the long-standing $H_0$ tension. We must remember, however, that Model C has two more degrees of freedom than the standard $\Lambda$CDM paradigm.
Figure~\ref{fig:triangle_C} illustrates the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model C.
Notice that here the situation is just the opposite of that found in Model B: the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much smaller than in standard scenarios,
due to the larger value required for the present matter energy density $\Omega_{\rm 0,m}$ when the dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}>0$ and $\ensuremath{w_{\rm 0,fld}}<-1$.
This larger value of the present matter energy density also implies a lower value for the clustering parameter $\sigma_8$, in contrast to what was required within Model B.
\section{Final Remarks}
\label{sec:conclusions}
In this study we have reassessed the ability of interacting dark matter-dark energy cosmologies to alleviate the long-standing and highly significant Hubble constant tension.
Although in the past these models have been shown to provide an excellent solution to the discrepancy between local measurements and high-redshift, Cosmic Microwave Background estimates of $H_0$, recent works in the literature have questioned
their effectiveness, on the grounds that the SH0ES data do not directly provide a measurement of $H_0$ and may therefore have been misinterpreted.
We have therefore quantified the ability of interacting cosmologies to reduce the Hubble tension by means of two different priors in the cosmological analyses:
a prior on the Hubble constant and, separately, a prior on Type Ia Supernova absolute magnitude.
We combine these priors with Cosmic Microwave Background (CMB), Type Ia Supernovae (SN) and Baryon Acoustic Oscillation (BAO) measurements,
showing that the constraints on the cosmological parameters are independent of the choice of prior, and that the Hubble constant tension is alleviated regardless of which prior is adopted.
Furthermore, one of the possible interacting cosmologies considered here,
with a phantom nature, provides a better fit than the canonical $\Lambda$CDM framework for all the considered data combinations, but with two extra degrees of freedom.
We therefore conclude that interacting dark matter-dark energy cosmologies still provide a very attractive and viable theoretical and phenomenological scenario
in which the Hubble constant tension can be robustly relieved,
regardless of the method one adopts to process the SH0ES data.
\begin{acknowledgments}
\noindent
SG acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754496 (project FELLINI).
EDV is supported by a Royal Society Dorothy Hodgkin Research Fellowship.
OM is supported by the Spanish grants PID2020-113644GB-I00, PROMETEO/2019/083 and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019//860881-HIDDeN).
RCN acknowledges financial support from the Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP, S\~{a}o Paulo Research Foundation) under the project No. 2018/18036-5.
\end{acknowledgments}
\section{Introduction} \label{sec:introduction} \input{introduction}
\section{Related Work} \label{sec:related_work} \input{relatedWork}
\section{Model Description} \label{sec:model} \input{modelDescription}
\section{Experiments} \label{sec:experiments} \input{experiments}
\section{Conclusions and Future Work} \label{sec:conclusions} \input{conclusion}
{\small
\textbf{Acknowledgements}
\input{acknowledgements}
}
{\small
\bibliographystyle{ieee}
\subsection{Composable Activities Dataset} \label{subsec:composableActivities}
\subsection{Inference of per-frame annotations.}
\label{subsec:action_annotation}
The hierarchical structure and compositional
properties of our model enable it to output a predicted global activity,
as well as per-frame annotations of predicted atomic actions and poses for each body
region.
It is important to highlight that the generation of the per-frame annotations requires no prior temporal
segmentation of atomic actions, and no post-processing of the output is performed. The
ability of our model to produce
per-frame annotated data, enabling action detection both temporally and
spatially, makes our model unique.
Figure \ref{fig:annotation} illustrates
the capability of our model to provide per-frame annotation of the atomic
actions that compose each activity. The accuracy of
the mid-level action prediction can be evaluated as in \cite{Wei2013}.
Specifically, we first obtain segments of the same predicted action in each
sequence, and then compare these segments with ground truth action labels. The
estimated label of the segment is assumed correct if the detected segment is
completely contained in a ground truth segment with the same label, or if the
Jaccard Index considering the segment and the ground truth label is greater
than 0.6. Using these criteria, the accuracy of the mid-level actions is
79.4\%. In many cases, a wrong action prediction is highly local in time
or space, and the model is still able to correctly predict the activity label
of the sequence. Taking only the correctly predicted videos in terms of global
activity prediction, the accuracy of action labeling reaches 83.3\%. When considering this number, it
is important to note that not every ground truth action label is accurate: the
videos were hand-labeled by volunteers, so there is a chance of mistakes in
terms of the exact temporal boundaries of the action. In
this sense, in our experiments we observe cases where the predicted
labels showed more accurate temporal boundaries than the ground
truth.
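As a concrete illustration of this evaluation protocol, the following Python sketch (with hypothetical segment data) marks a predicted segment as correct if it is completely contained in a ground truth segment with the same label, or if their Jaccard Index exceeds 0.6.
\begin{verbatim}
def jaccard(seg_a, seg_b):
    """Temporal Jaccard index (intersection over union) of [start, end] segments."""
    inter = max(0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_is_correct(pred, gt_segments, threshold=0.6):
    """pred and gt_segments entries are (label, start_frame, end_frame)."""
    label, s, e = pred
    for gt_label, gs, ge in gt_segments:
        if gt_label != label:
            continue
        contained = (s >= gs) and (e <= ge)
        if contained or jaccard((s, e), (gs, ge)) > threshold:
            return True
    return False

# Hypothetical example: one predicted 'waving hand' segment vs. ground truth
gt = [("waving hand", 10, 60), ("drinking", 61, 120)]
print(segment_is_correct(("waving hand", 15, 55), gt))  # True (fully contained)
print(segment_is_correct(("waving hand", 5, 90), gt))   # False (Jaccard ~ 0.59)
\end{verbatim}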
\begin{figure*}[th]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_all_sequences_red.pdf}
\end{center}
\caption{Per-frame predictions of atomic actions for selected activities,
showing 20 frames of each video. Each frame is shown together with the predicted action
annotations of the left arm, right arm, left leg and right leg. Besides predicting the global
activity of the video, our algorithm is able to
correctly predict the atomic actions that compose each activity in each frame,
as well as the body regions that are active during the execution of the action.
Note that in the example video of the activity \emph{Walking while calling with
hands}, the \emph{calling with hands} action is correctly annotated even when
the subject changes the waving hand during the execution of the activity.}
\label{fig:annotation}
\end{figure*}
\subsection{Robustness to occlusion and noisy joints.}
Our method is also capable of inferring action and activity labels even if some
joints are not observed. This is a common situation in practice,
as body motions induce temporal self-occlusions of body regions.
Nevertheless, due to the joint estimation of poses, actions, and activities,
our model is able to reduce the effect of this problem. To illustrate this, we
simulate a totally occluded region by fixing its geometry to the position
observed in the first frame.
The region to be completely occluded in each sequence is selected uniformly at random.
In this scenario, the accuracy of our preliminary model in \cite{Lillo2014} drops
by 7.2\%. Using our new SR setup including NI handling, the accuracy only drops
by 4.3\%, showing that the detection of non-informative poses helps the model
to deal with occluded regions. In fact, as we show in Section
\ref{subsec:exp_non_info_handling}, many of the truly occluded regions in the
videos are identified using NI handling. In contrast, the drop in performance is
12.5\% for BoW and 10.3\% for HMM: simpler models are less capable of robustly dealing
with occluded regions, since their pose assignments rely only on the descriptor
itself, while in our model the assigned pose depends on the descriptor,
the sequences of poses and actions, and the activity evaluated, making inference
more robust. Fig. \ref{fig:occlusions} shows some qualitative results for
occluded regions.
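The occlusion simulation described above can be sketched as follows; the array layout and the grouping of joints into body regions are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

def occlude_random_region(joints_xyz, region_joint_ids, seed=0):
    """Simulate a fully occluded body region by freezing its joints to frame 0.
    joints_xyz: array of shape (T, n_joints, 3);
    region_joint_ids: list of joint-index lists, one per body region."""
    rng = np.random.default_rng(seed)
    region = region_joint_ids[rng.integers(len(region_joint_ids))]  # uniform choice
    occluded = joints_xyz.copy()
    occluded[:, region, :] = joints_xyz[0, region, :]  # hold first-frame geometry
    return occluded

# Hypothetical skeleton sequence (100 frames, 20 joints) and 4 body regions
joints = np.zeros((100, 20, 3))
regions = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
occluded = occlude_random_region(joints, regions)
\end{verbatim}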
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]
{./subject_1_6.pdf} \\
{\footnotesize Right arm occluded} \\
\includegraphics[width=0.999\linewidth]
{./subject_1_23.pdf}\\
{\footnotesize Left leg occluded} \\
\includegraphics[width=0.999\linewidth]
{./subject_1_8.pdf}\\
{\footnotesize Left arm occluded}\\
\end{center}
\caption{The occluded body regions are depicted in light blue. When an arm or
leg is occluded, our method still provides a good estimation of the underlying actions in each
frame.}
\label{fig:occlusions}
\end{figure}
In terms of noisy joints, we add random Gaussian noise to the
3D joint locations of the testing videos, using the SR setup and the GEO descriptor
so as to isolate the effect of the joints without mixing in the motion descriptor. Figure
\ref{fig:joint_noise} shows the accuracy on the testing videos as a function of the noise
dispersion $\sigma_{noise}$, measured in inches. For small noise levels, the model accuracy is
barely affected, as expected from the robustness of the
geometric descriptor. However, for more drastic noise added to every joint, the
accuracy drops dramatically. This behavior is expected, since for highly noisy
joints the model can no longer predict the sequence of actions and poses well.
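The noise-injection protocol can be sketched in a few lines, assuming that joint positions are stored as an array expressed in inches; $\sigma_{noise}$ is the dispersion swept in Fig.~\ref{fig:joint_noise}.
\begin{verbatim}
import numpy as np

def perturb_joints(joints_xyz, sigma_noise, seed=0):
    """Add isotropic Gaussian noise (in inches) to every 3D joint of every frame.
    joints_xyz: array of shape (T, n_joints, 3)."""
    rng = np.random.default_rng(seed)
    return joints_xyz + rng.normal(0.0, sigma_noise, size=joints_xyz.shape)

# Hypothetical skeleton sequence: 100 frames, 20 joints, sigma_noise = 3 inches
joints = np.zeros((100, 20, 3))
noisy = perturb_joints(joints, sigma_noise=3.0)
\end{verbatim}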
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_acc_vs_noise.pdf} \\
\end{center}
\caption{Performance of our model in the presence of simulated Gaussian noise in
every joint, as a function of $\sigma_{noise}$ measured in inches. When the
noise is less than 3 inches on average, the model performance is not very
affected, while for larger noise dispersions the model accuracy is drastically
affected. It is important to note that in our simulation every joint is
affected by noise, while in a real setup noisy joint estimates tend to occur
more rarely. } \label{fig:joint_noise}
\end{figure}
\subsection{Early activity prediction.}
Our model needs the complete video to make an accurate activity and action
prediction for a query video. In this section, we analyze the number of frames
(as a percentage of a complete activity sequence) needed
to make an accurate activity prediction. Figure \ref{fig:accuracy_reduced_frames}
shows the mean accuracy over the dataset (using leave-one-subject-out
cross-validation) as a function of the
percentage of frames used by the classifier to label each video. We note that
using 30\% of the frames, the classifier already makes reasonable predictions,
while 70\% of the frames are needed to closely match the
accuracy of using all frames.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_acc_vs_frame_reduction.pdf}
\end{center}
\caption{Accuracy of activity recognition versus percentage of frames used in
Composable Activities dataset. In general, 30\% of the frames are needed to
perform reasonable predictions, while 70\% of frames are needed to closely match the
accuracy of using all frames.}
\label{fig:accuracy_reduced_frames}
\end{figure}
\subsection{Failure cases.}
We also study some of the failure cases that we observe during the
experimentation with our model.
Figure \ref{fig:errors} shows some error cases. Interestingly,
the sequences are confusing even for humans when only the skeleton is available,
as in the figure. These errors will probably not be resolved by the model
itself, and will require other sources of information such as object
detectors, where a cup should be distinguished from a cellphone as in the
third row of Figure \ref{fig:errors}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]
{./sbj1_1.pdf} \\
{\footnotesize Ground truth: Walking while calling with hands\\
Prediction: Walking while waving hand} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_4.pdf}\\
{\footnotesize Ground truth: Composed activity 1\\
Prediction: Talking on cellphone and drinking} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_6.pdf}\\
{\footnotesize Ground truth: Waving hand and drinking\\
Prediction: Talking on cellphone and scratching head} \\
\end{center}
\caption{Failure cases. Our algorithm tends to confuse activities that share very similar
body postures.}
\label{fig:errors}
\end{figure}
\begin{comment}
\subsubsection{New activity characterization}
As we mention in previous section, our model using sparse regularization and
non-negative weights on activity ($\alpha$) classifiers and action ($\beta$)
classifiers do not \emph{punish} poses that have no influence in the
activities. For this reason, our model is able to model a new composed activity
just combining the coefficients of two known activities, leaving the rest of
the parameters of the model untouched. We use an heuristic approach to combine
two models: givint two classes $c_1$ and $c_2$, their coefficients for a region
$r$ and action $a$ are $ \alpha^r_{c_1,a}$ and $ \alpha^r_{c_2,a}$
respectively. For a new class $c_{new}$ composed of classes $c_1$ and $c_2$, we
use the mean value of the coefficients \begin{equation}
\alpha^r_{{c_{new},a}} = \frac{(\alpha^r_{c_1,a} + \alpha^r_{c_2,a})}{2}
\end{equation}
only when the corresponding coefficients for are positive; in other case, we
use the maximum value of the two coefficients. For all subjects of the dataset,
we create all the combinations od two activities, and tested the new model
using three composed videos per subject. The average accuracy of the activity
$16+1$ is 90.2\%, and in average the activities that compose the new activity
drops its accuracy in 12.3\%, showing that we effectively incorporate a new
composed activity to the model at a little cost of getting more confusion over
the original activities. Moreover, the accuracy of action labeling for the new
class is 74.2\%, similar to the accuracy of the action labeling of the
original model, so we can effectively transfer the learning of atomic action
classifiers to new compositions of activities.
\begin{table}
\begin{tabular}
\hline
Activity group & Accuracy of new class & \\
\hline
Simple & 92.
Complex & 87.2\% & \\
\hline
All & 90.2\% & \\
\end{tabular}
\caption{}
\label{tab:acc_new_class}
\end{table}
\end{comment}
\subsection{Classification of Simple and Isolated Actions}
As a first experiment,
we evaluate the performance of our model on the task of simple and
isolated human action recognition in the MSR-Action3D dataset
\cite{WanLi2010}.
Although our model is tailored to recognizing complex
actions, this experiment verifies its performance in the
simpler scenario of isolated atomic action classification.
The MSR-Action3D dataset provides pre-trimmed depth videos and estimated body poses
for isolated actors performing actions from 20
categories. We use 557 videos
in a similar setup to
\cite{Wang2012}, where videos from subjects 1, 3, 5, 7, 9 are used for
training and the rest for testing. Table \ref{tab:msr3d} shows that in this
dataset our model achieves classification accuracies comparable to
state-of-the-art methods.
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 93.0\% \\
\hline
L. Tao \etal \cite{Tao2015} & 93.6\% \\
C. Wang \etal \cite{Wang2013} & 90.2\% \\
Vemulapalli \etal \cite{Vemulapalli2014} & 89.5\% \\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the MSR-Action3D
dataset.}
\label{tab:msr3d}
\end{table}
\subsection{Detection of Concurrent Actions}
Our second experiment evaluates the performance of our model in a concurrent
action recognition setting. In this scenario, the goal is to predict
the temporal localization of actions that may occur concurrently in a long
video. We evaluate this task on the Concurrent Actions dataset \cite{Wei2013},
which
provides 61 RGBD videos and pose estimation data annotated with 12
action categories.
We use an evaluation setup similar to the one proposed by the authors.
We split the dataset into training and testing sets with a 50\%-50\% ratio.
We evaluate performance by measuring precision-recall: a detected action
is declared as a true positive if its temporal overlap with the ground
truth action interval is larger than 60\% of their union, or if
the detected interval is completely covered by the ground truth annotation.
Our model is tailored to recognizing complex actions that are composed
of atomic components. However, in this scenario, only atomic actions are
provided and no compositions are explicitly defined. Therefore, we apply
a simple preprocessing step: we cluster the training videos into groups
by comparing the occurrence of atomic actions within each video.
The resulting groups are used as complex action labels for the training
videos of this dataset.
At inference time, our model outputs a single labeling per video,
which corresponds to the atomic action labeling that maximizes the energy of
our model.
Since there are no thresholds to adjust, our model produces the single
precision-recall measurement reported in Table \ref{tab:concurrent}.
Our model outperforms the state-of-the-art method in this
dataset at that recall level.
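The preprocessing step described above can be sketched as follows: each training video is encoded as a binary occurrence vector over the atomic action categories and grouped with $k$-means, and the resulting cluster indices play the role of complex action labels. The number of clusters and the toy annotations below are assumptions for illustration, not the exact configuration used in our experiments.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

N_ACTIONS = 12  # atomic action categories in the Concurrent Actions dataset

def occurrence_vector(video_annotations, n_actions=N_ACTIONS):
    """Binary vector indicating which atomic actions occur in a video.
    video_annotations: list of (action_id, start_frame, end_frame)."""
    v = np.zeros(n_actions)
    for action_id, _, _ in video_annotations:
        v[action_id] = 1.0
    return v

def pseudo_complex_labels(training_annotations, n_groups=5, seed=0):
    """Cluster training videos by atomic-action occurrence; each cluster
    index is then used as the complex-action label of its videos."""
    X = np.stack([occurrence_vector(a) for a in training_annotations])
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(X)
    return km.labels_

# Hypothetical toy annotations for three training videos
videos = [[(0, 0, 40), (3, 10, 80)], [(0, 5, 50), (3, 20, 90)], [(7, 0, 60)]]
print(pseudo_complex_labels(videos, n_groups=2))
\end{verbatim}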
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|}
\hline
\textbf{Algorithm} & \textbf{Precision} & \textbf{Recall}\\
\hline
Our full model & 0.92 & 0.81 \\
\hline
Wei et al. \cite{Wei2013} & 0.85 & 0.81 \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Concurrent Actions dataset. }
\label{tab:concurrent}
\end{table}
\subsection{Recognition of Composable Activities}
In this experiment, we evaluate the performance of our model to recognize complex
and composable human actions. In the evaluation, we use the Composable
Activities dataset \cite{Lillo2014},
which provides 693 videos of 14 subjects performing 16 activities.
Each activity is a spatio-temporal composition of atomic actions.
The dataset provides a total of 26 atomic actions that are shared across
activities. We train our model using two levels of supervision:
i) spatial annotations that map body regions to the execution of each action are made available, and
ii) spatial supervision is not available, and therefore the labels $\vec{v}$ that assign spatial regions to actionlets
are treated as latent variables.
Table \ref{tab:composable} summarizes our results. We observe that under both
training conditions, our model achieves comparable performance. This indicates
that our weakly supervised model can recover some of the information
that is missing while performing well at the activity categorization task.
In spite of using less
supervision at training time, our method outperforms state-of-the-art
methodologies that are trained with full spatial supervision.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Base model + GC, GEO desc. only, spatial supervision & 88.5\%\\
Base model + GC, with spatial supervision & 91.8\% \\
Our full model, no spatial supervision (latent $\vec{v}$) & 91.1\%\\
\hline
Lillo \etal \cite{Lillo2014} (without GC) & 85.7\% \\
Cao et al. \cite{cao2015spatio} & 79.0\% \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Composable Activities
dataset.}
\label{tab:composable}
\end{table}
\subsection{Action Recognition in RGB Videos}
Our experiments so far have evaluated the performance of our model
in the task of human action recognition in RGBD videos.
In this experiment, we explore the use of our model in the problem of human
action recognition in RGB videos. For this purpose, we use the sub-JHMDB
dataset \cite{Jhuang2013}, which focuses on videos depicting 12 actions and
where most of the actor body is visible in the image frames.
In our validation, we use the 2D body pose configurations provided by the
authors and compare against previous methods that also use them. Given that
this dataset only includes 2D image coordinates for each body joint, we obtain
the geometric descriptor by adding a depth coordinate with a value $z = d$ for
joints corresponding to wrists and knees, $z = -d$ for elbows, and $z = 0$ for the remaining joints,
so that we can compute angles between segments, with $d = 30$ fixed by cross-validation. We summarize the results in Table
\ref{tab:subjhmdb},
which shows that our method outperforms alternative state-of-the-art techniques.
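The construction of the geometric descriptor from 2D poses can be sketched as follows: each joint receives a fixed pseudo-depth ($+d$ for wrists and knees, $-d$ for elbows, $0$ otherwise), after which angles between body segments are computed in 3D. The joint names and the example segment below are illustrative and do not correspond to the exact descriptor configuration.
\begin{verbatim}
import numpy as np

D = 30.0  # pseudo-depth offset, fixed by cross-validation in our experiments

PSEUDO_DEPTH = {"wrist_l": D, "wrist_r": D, "knee_l": D, "knee_r": D,
                "elbow_l": -D, "elbow_r": -D}  # every other joint gets z = 0

def lift_to_3d(joints_2d):
    """joints_2d: dict name -> (x, y). Returns dict name -> np.array([x, y, z])."""
    return {name: np.array([x, y, PSEUDO_DEPTH.get(name, 0.0)])
            for name, (x, y) in joints_2d.items()}

def segment_angle(p_a, p_b, p_c, joints_3d):
    """Angle (radians) at joint p_b between segments (p_b, p_a) and (p_b, p_c)."""
    u = joints_3d[p_a] - joints_3d[p_b]
    v = joints_3d[p_c] - joints_3d[p_b]
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Hypothetical 2D pose for one frame (pixel coordinates)
pose_2d = {"shoulder_l": (100, 80), "elbow_l": (120, 120), "wrist_l": (150, 140)}
pose_3d = lift_to_3d(pose_2d)
print(segment_angle("shoulder_l", "elbow_l", "wrist_l", pose_3d))  # elbow angle
\end{verbatim}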
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 77.5\% \\
\hline
Huang et al. \cite{Jhuang2013} & 75.6\% \\
Ch\'eron et al. \cite{Cheron2015} & 72.5\%\\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the sub-JHMDB dataset.}
\label{tab:subjhmdb}
\end{table}
\subsection{Spatio-temporal Annotation of Atomic Actions}
In this experiment, we study the ability of our model to provide spatial and
temporal annotations of relevant atomic actions. Table \ref{tab:annotation}
summarizes our results. We report precision-recall rates
for the spatio-temporal annotations predicted by our model in the
testing videos (first and second rows). Notice that this is a
very challenging task. The testing videos do not provide any labels, and
the model needs to predict both the temporal extent of each action and the
body regions associated with the execution of each action. Despite the
difficulty of the task, our model shows satisfactory results, being able to
infer suitable spatio-temporal annotations.
We also study the capability of the model to provide spatial and temporal
annotations during training. In our first experiment, each video
is provided
with the temporal extent of each action, so the model only needs to infer the
spatial annotations (third row in Table \ref{tab:annotation}). In a
second experiment, we do not provide any temporal or spatial annotation,
but only the global action label of each video (fourth row in Table
\ref{tab:annotation}). In both experiments, we observe that the model is
still able to infer suitable spatio-temporal annotations.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Videos} & \textbf{Annotation inferred} & \textbf{Precision} & \textbf{Recall}\\
\hline
Testing set & Spatio-temporal, no GC & 0.59 & 0.77 \\
Testing set & Spatio-temporal & 0.62 & 0.78 \\
\hline
Training set & Spatial only & 0.86 & 0.90\\
Training set & Spatio-temporal & 0.67 & 0.85 \\
\hline
\end{tabular}
\caption{
\footnotesize
Atomic action annotation performances in the Composable Activities
dataset. The results show that our model is able to recover spatio-temporal
annotations both at training and testing time.}
\label{tab:annotation}
\end{table}
\subsection{Effect of Model Components}
In this experiment,
we study the contribution of key components of the
proposed model. First, using the sub-JHMDB dataset,
we measure the impact of three components of our model: garbage collector for
motion poselets (GC), multimodal modeling of actionlets, and use of latent
variables to infer spatial annotation about body regions (latent $\vec{v}$). Table
\ref{tab:components} summarizes our experimental results,
showing that the full version
of our model achieves the best performance, with each of the components
mentioned above contributing to the overall success of the method.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Base model, GEO descriptor only & 66.9\%\\
Base Model & 70.6\%\\
Base Model + GC & 72.7\% \\
Base Model + Actionlets & 75.3\%\\
Our full model (Actionlets + GC + latent $\vec{v}$) & 77.5\% \\
\hline
\end{tabular}
\caption{
\footnotesize
Analysis of contribution to recognition performance from
each model component in the sub-JHMDB dataset.}
\label{tab:components}
\end{table}
Second, using the Composable Activities dataset, we also analyze the
contribution of the proposed self-paced learning scheme for initializing and
training our model. We summarize our results in
Table \ref{tab:initialization} by reporting action
recognition accuracy under different initialization schemes: i) Random: random
initialization of latent variables $\vec{v}$, ii) Clustering: initialize
$\vec{v}$ by first computing a BoW descriptor for the atomic action intervals
and then performing $k$-means clustering, assigning each action interval to the
closest cluster center, and iii) Ours: initialize $\vec{v}$ using the proposed
self-paced learning scheme. Our proposed initialization scheme helps the model to achieve its best
performance.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Initialization Algorithm} & \textbf{Accuracy}\\
\hline
Random & 46.3\% \\
Clustering & 54.8\% \\
Ours & 91.1\% \\
\hline
Ours, fully supervised & 91.8\%\\
\hline
\end{tabular}
\caption{
\footnotesize
Results in Composable Activities dataset, with latent $\vec{v}$ and different initializations. }
\label{tab:initialization}
\end{table}
\subsection{Qualitative Results}
Finally, we provide a qualitative analysis of
relevant properties of our model. Figure \ref{fig:poselets_img}
shows examples of moving poselets learned in the Composable
Activities dataset. We observe that each moving poselet captures
a salient body configuration that helps to discriminate among atomic
actions. To further illustrate this, Figure \ref{fig:poselets_img}
indicates the most likely underlying atomic action for each moving poselet.
Figure \ref{fig:poselets_skel} presents a similar analysis for moving
poselets learned in the MSR-Action3D dataset.
We also visualize the action annotations produced by our model.
Figure \ref{fig:actionlabels} (top) shows the action labels associated
with each body part in a video from the Composable Activities dataset.
Figure \ref{fig:actionlabels} (bottom) illustrates per-body part action
annotations for a video in the Concurrent Actions dataset. These
examples illustrate the capabilities of our model to correctly
annotate the body parts that are involved in the execution of each action,
in spite of not having that information during training.
\begin{figure}[tb]
\begin{center}
\scriptsize
Motion poselet \#4 - most likely action: talking on cellphone\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets1}
Motion poselet \#7 - most likely action: erasing on board\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets2}
Motion poselet \#19 - most likely action: waving hand\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets3}
\end{center}
\caption{
\footnotesize
Moving poselets learned from the Composable Activities
dataset.}
\label{fig:poselets_img}
\end{figure}
\begin{figure}[tb]
\begin{center}
\scriptsize
Motion poselet \#16 - most likely action: tennis swing\\
\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets4}
Motion poselet \#34 - most likely action: golf swing\\
\includegraphics[trim=0 0 0cm 0cm,clip, width=0.49\textwidth]{Fig/poselets5}
Motion poselet \#160 - most likely action: bend\\
\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets6}
\end{center}
\caption{
\footnotesize
Moving poselets learned from the MSR-Action3D
dataset.}
\label{fig:poselets_skel}
\end{figure}
\begin{figure}[tb]
\begin{center}
\scriptsize
\includegraphics[]{Fig/labels_acciones}
\end{center}
\caption{
\footnotesize
Automatic spatio-temporal annotation of atomic actions. Our method
detects the temporal span and spatial body regions that are involved in
the performance of atomic actions in videos.}
\label{fig:actionlabels}
\end{figure}
\begin{comment}
[GENERAL IDEA]
What we want to show:
\begin{itemize}
\item Show tables of results that can be useful to compare the model.
\item Show how the model is useful for videos of simple and composed actions, since now the level of annotations is similar.
\item Show how the inference produces annotated data (poses, actions, etc). In particular, show in Composable Activities and Concurrent actions how the action compositions are handled by the model without post-processing.
\item Show results in sub-JHMDB,showing how the model detects the action in the videos and also which part of the body performs the action (search for well-behaved videos). It could be interesting to show the annotated data over real RGB videos.
\item Show examples of poses (like poselets) and sequences of 3 or 5 poses for actions (Actionlets?)
\end{itemize}
\subsection{Figures}
The list of figures should include:
\begin{itemize}
\item A figure showing the recognition and mid-level labels of Composable Activities, using RGB videos
\item Comparison of action annotations, real v/s inferred in training set, showing we can recover (almost) the original annotations.
\item Show a figure similar to Concurrent Actions paper, with a timeline showing the actions in color. We can show that our inference is more stable than proposed in that paper, and it is visually more similar to the ground truth than the other methods.
\item Show a figure for sub-JHMDB dataset, where we can detect temporally and spatially the action without annotations in the training set.
\item Show Composable Activities and sub-JHMDB the most representative poses and actions.
\end{itemize}
\paragraph{Composable Activities Dataset}
In this dataset we show several results.
(1) Comparing TRAJ descriptor (HOF over trajectory);
(2) Compare the results using latent variables for action assignations to
regions, with different initializations;
(3) Show results of the annotations of the videos in inference.
We must include figures comparing the real annotations
and the inferred annotations for training data, to show we are able to get the
annotations only from data.
\subsection{Recognition of composable activities}
\label{subsec:experiments_summary}
\subsection{Impact of including motion features}
\label{subsec:exp_motionfeats}
\subsection{Impact of latent spatial assignment of actions}
\label{subsec:exp_vlatent}
\subsection{Impact of using multiple classifiers per semantic action}
\label{subsec:exp_multiple}
\subsection{Impact of handling non-informative poses}
\label{subsec:exp_non_info_handling}
\end{comment}
\begin{comment}
\subsection{CAD120 Dataset}
The CAD120 dataset is introduced in \cite{Koppula2012}. It is composed of 124
videos that contain activities in 10 clases performed by 4 actors. Activities
are related to daily living: \emph{making cereal}, \emph{stacking objects}, or
\emph{taking a meal}. Each activity is composed of simpler actions like
\emph{reaching}, \emph{moving}, or \emph{eating}. In this database, human-object
interactions are an important cue to identify the actions, so object
locations and object affordances are provided as annotations. Performance
evaluation is made through leave-one-subject-out cross-validation. Given
that our method does not consider objects, we use only
the data corresponding to 3D joints of the skeletons. As shown in Table
\ref{Table-CAD120},
our method outperforms the results reported in
\cite{Koppula2012} using the same experimental setup. It is clear that using
only 3D joints is not enough to characterize each action or activity in this
dataset. As part of our future work, we expect that adding information related
to objects will further improve accuracy.
\begin{table}
\centering
{\small
\begin{tabular}{|c|c|c|}
\hline
\textbf{Algorithm} & \textbf{Average precision} & \textbf{Average recall}\\
\hline
Our method & 32.6\% & 34.58\% \\
\hline
\cite{Koppula2012} & 27.4\% & 31.2\%\\
\cite{Sung2012} & 23.7\% & 23.7\% \\
\hline
\end{tabular}
}
\caption{Recognition performance (average precision and recall) of our method compared to state-of-the-art methods
on the CAD120 dataset.}
\label{Table-CAD120}
\end{table}
\end{comment}
\subsection{Latent spatial actions for hierarchical action detection}
\subsection{Hierarchical activity model}
Suppose we have a video $D$ with $T$ frames, each frame described by a feature vector $x_t$. Assume we have available $K$ classifiers $\{w_k\}_{k=1}^K$ over the frame descriptors, such that each frame descriptor can be associated with a single classifier. If we choose the maximum response for every frame, encoded as $z_t = \argmax_k\{w_k^\top x_t\}$, we can build a BoW representation to feed linear action classifiers $\beta$, computing the histogram $h(Z)$ of $Z = \{z_1,z_2,\dots,z_T\}$ and using this histogram as a feature vector for the complete video to recognize single actions. Imagine now that we would like to use the scores of the maximum responses, $w_{z_t}^\top x_t$, as a potential to help discriminate between videos that present reliable poses and videos that do not. We can build a joint energy function, combining the action classifier score and the aggregated frame classifier scores, as
\begin{equation}
\label{eq:2-levels}
\begin{split}
E(D) &= \beta_{a}^\top h(Z) + \sum_{t=1}^T w_{z_t}^\top x_t \\ & = \sum_{t=1}^T\sum_{k=1}^K\left(\beta_{a,k} + w_k^\top x_t \right)\delta(z_t=k)
\end{split}
\end{equation}
What is interesting about Eq. (\ref{eq:2-levels}) is that every term in the sum is tied to the value of $z_t$, creating a model in which all components depend on the labeling $Z$. We can expand the previous model to more levels using the same philosophy. In fact, for a new level, we could create a new indicator $v_t$ for every frame that selects which classifier $\beta$ will be used (just as $z_t$ indicates which classifier $w$ is used). If we call $w$ the \emph{pose classifiers} and $\beta$ the \emph{action classifiers}, we can create a hierarchical model where multiple poses and actions can be present in a single video. Supposing we have $A$ actions, the energy for a three-level hierarchy could be, for an \emph{activity} $l$,
\begin{equation}
E(D) =\alpha_l^\top h(V) + \sum_{a=1}^A \beta_{a}^\top h^a(Z,V) + \sum_{t=1}^T w_{z_t}^\top x_t
\end{equation}
where $h^a(Z,V)$ refers to the BoW representation of $Z$ for those frames labeled as action $v_t = a$.
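To make the layered energy concrete, the following minimal NumPy sketch (illustrative only; the names \texttt{X}, \texttt{W}, and \texttt{beta} are ours and not part of the model definition) evaluates the two-level energy of Eq. (\ref{eq:2-levels}) using the per-frame labeling $z_t=\argmax_k w_k^\top x_t$ described above.
\begin{verbatim}
import numpy as np

def two_level_energy(X, W, beta):
    """Two-level energy: action score on the pose histogram + summed pose scores.

    X    : (T, d) frame descriptors x_t
    W    : (K, d) pose classifiers w_k
    beta : (K,)   action classifier applied to the histogram h(Z)
    """
    scores = X @ W.T                                   # (T, K) responses w_k^T x_t
    z = scores.argmax(axis=1)                          # z_t = argmax_k w_k^T x_t
    h = np.bincount(z, minlength=W.shape[0]).astype(float)
    pose_term = scores[np.arange(len(z)), z].sum()     # sum_t w_{z_t}^T x_t
    return beta @ h + pose_term, z

# toy usage with random data
rng = np.random.default_rng(0)
X, W, beta = rng.normal(size=(50, 10)), rng.normal(size=(5, 10)), rng.normal(size=5)
E, z = two_level_energy(X, W, beta)
\end{verbatim}
Maximizing the joint energy over $Z$ would instead select $z_t=\argmax_k(\beta_{a,k}+w_k^\top x_t)$ for each frame; the sketch keeps the simpler per-frame argmax used in the text.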
[NEW MODEL]
Recent work in action recognition \cite{Cheron2015,Tao2015,Wang2011,Jhuang2013} shows a resurgence of describing human actions as a collection of dynamic spatial parts that resemble Poselets. In line with this research, we split the human body into $R$ semantic regions. As modeling actions using the whole body is hard, separating the body into groups of limbs helps in the recognition of actions, especially for complex datasets \cite{Tao2015}. Our view is that while poses are in general well defined in most research, little effort has been made to mine actions from videos, in terms of detecting their temporal span (action detection) and their localization. In addition to the fact that most action datasets contain only single actions, there is a lack of research on the general setup where several actions are combined in the same video. Nevertheless, a few works have noticed that humans usually perform complex actions in real life \cite{Wei2013, Lillo2014}, providing their own datasets based on RGB-D cameras. In our work, we aim to bring together the worlds of single and composed actions in a single hierarchical model with three semantic levels, using human body regions to improve representativeness.
During training, we assume that temporal annotations of actions are available. As we want our model to perform action localization, we model the action assignments $V_r$ in each region as latent variables during training, allowing the model to infer which human part executes the action without needing this kind of annotation in the training set, and including a model for the initialization of action labels. In this way, we advance from a simple detection problem to also inferring \emph{how} the subject executes the action, which is important in surveillance applications, health monitoring, and other domains. We also expand the modeling of recurrent patterns of poses to construct a general model for shared actions, aiming to handle multimodal information, which is produced by actions with the same label but different execution patterns, or by changes in the representation of actions such as a varying camera view. We handle this problem by augmenting the number of action classifiers, where each original action acts as a parent node of several non-overlapping child actions. Finally, as we are using local information for poses, some frames could be noisy or represent an uncommon pose that is not useful to build the pose models. We attack this issue by adding a garbage collector for poses, so that only the most informative poses are used by the pose classifiers during learning. We describe these contributions in the following paragraphs.
\paragraph{[EDIT] Latent assignments of actions to human regions}
Knowing the parts of the body involved in the actions is highly appealing. Suppose we have $M$ videos, each video annotated with $Q_m$ action intervals. Each action interval can be associated with any number of regions, from $1$ to all $R$ regions. For example, a \emph{waving hand} action could be associated only with \emph{right\_arm}, while the action \emph{jogging} could be associated with the whole body. We want to learn the associations between actions and human parts for the training videos, and we build these associations using latent variables. The main problem to solve is how to obtain a proper initialization for the actions, since there is a very high chance of getting stuck in a local minimum far from the optimum, producing bad results.
Our first contribution is a method to obtain a proper initialization of fine-grained spatial action labels, knowing only the time span of the actions. Using the known action intervals, we formulate the action-to-region assignment as an optimization problem, constrained using structural information: the action intervals must not overlap in the same region, and every action interval must be present in at least one region. We formulate this labeling problem as a binary Integer Linear Programming (ILP) problem. We define $v_{r,q}^m=1$ when the action interval $q \in \{1,\dots,Q_m\}$ appears in region $r$ of video $m$, and $v_{r,q}^m=0$ otherwise. We assume we have pose labels $z_{t,r}$ in each frame, independent for each region, learned by clustering the poses of all frames in all videos. For an action interval $q$, we use as descriptor the histogram of pose labels of each region in the action interval, defined for video $m$ as $h_{r,q}^m$. We can then find the correspondence between action intervals and regions with a formulation similar to $k$-means, using the structure of the problem as constraints on the labels and the $\chi^2$ distance between the action interval descriptors and the cluster centers:
\begin{equation}
\begin{split}
P1) \quad \min_{v,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} v_{r,q}^m d( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} v_{r,q}^m\\
\text{s. to}
\quad
& \sum_{r=1}^R v_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\
& v_{r,q_1}^m + v_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset \text{, }\forall r\text{, }\forall m\\
& v_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m
\end{split}
\end{equation}
with
\begin{equation}
d( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] - \mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).
\end{equation}
$\mu_{a_q}^r$ are computed as the mean of the descriptors with the same action label within the same region. We solve $P1$ iteratively, as in $k$-means: we find the cluster centers $\mu_{a}^r$ for each region $r$ using the labels $v_{r,q}^m$, and then find the best labeling given the cluster centers by solving an ILP problem. Note that the first term of the objective function is similar to a $k$-means model, while the second term resembles the objective function of \emph{self-paced} learning as in \cite{Kumar2010}, balancing between assigning a single region to every action and assigning all possible regions to the action intervals when possible.
[IL: INCLUDE FIGURE TO SHOW P1 GRAPHICALLY]
We describe the further changes in the hierarchical model of \cite{Lillo2014} in the learning and inference sections.
\paragraph{[EDIT] Representing semantic actions with multiple atomic sequences}
As the poses and atomic actions in the model of \cite{Lillo2014} are shared, a single classifier is generally not enough to model the multimodal representations that usually occur in complex videos. We modify the original hierarchical model of \cite{Lillo2014} to include multiple linear classifiers per action. We introduce two new concepts: \textbf{semantic actions}, which refer to the action \emph{names} that compose an activity; and \textbf{atomic sequences}, which refer to the sequences of poses that conform an action. Several atomic sequences can be associated with a single semantic action, creating disjoint sets of atomic sequences, each set associated with a single semantic action. The main idea is that the action annotations in the datasets are associated with semantic actions, whereas for each semantic action we learn several atomic sequence classifiers. With this formulation, we can handle the multimodal nature of semantic actions, covering changes in motion, poses, or even changes in the meaning of the action according to the context (e.g., the semantic action ``open'' can be associated with opening a can, opening a door, etc.).
Inspired by \cite{Raptis2012}, we first use \emph{Cattell's Scree test} to find a suitable number of atomic sequences for every semantic action. Using the semantic action labels, we compute a descriptor for every interval using normalized histograms of pose labels. Then, for a particular semantic action $u$, we compute the eigenvalues $\lambda_u$ of the affinity matrix of the semantic action descriptors, using the $\chi^2$ distance. For each semantic action $u \in \{1,\dots,U\}$ we find the number of atomic sequences $G_u$ as $G_u = \argmin_i \lambda_{i+1}^2 / (\sum_{j=1}^i \lambda_j) + c\cdot i$, with $c=2\cdot 10^{-3}$. Finally, we cluster the descriptors corresponding to each semantic action using $k$-means, with a different number of clusters for each semantic action $u$ according to $G_u$. This approach generates non-overlapping atomic sequences, each associated with a single semantic action.
To transfer the new labels to the model, we define $u(v)$ as the function that given the atomic sequence label $v$, returns the corresponding semantic action label $u$. The energy for the activity level is then
\begin{equation}
E_{\text{activity}} = \sum_{u=1}^U\sum_{t=1}^T \alpha_{y,u}\delta(u(v_t)=u)
\end{equation}
For the action and pose labels the model remains unchanged. Using the new atomic sequences allows a richer representation for actions, while at the activity level several atomic sequences will map to a single semantic action. This behavior resembles a max-pooling operation, where at inference we choose the atomic sequences that best describe the actions performed in the video, keeping the semantics of the original labels.
\paragraph{Towards a better representation of poses: adding a garbage collector}
The model in \cite{Lillo2014} uses all poses to feed the action classifiers. Our intuition is that only a subset of poses in each video is really discriminative or informative for the actions performed, while plenty of poses correspond to noisy or non-informative ones. [EXPAND] Low-scored frames in terms of poses (i.e., a low value of $w_{z_t}^\top x_t$ in Eq. (\ref{eq:energy2014})) make the same contribution as high-scored poses at higher levels of the model, while degrading the pose classifiers at the same time, since low-scored poses are likely to be related to non-informative frames. We propose to include a new pose label that explicitly handles those low-scored frames, keeping them apart from the pose classifiers $w$ but still adding a fixed score to the energy function, to avoid normalization issues and to help in the specialization of the pose classifiers. We call this change in the model a \emph{garbage collector}, since it handles all low-scored frames and groups them with a fixed energy score $\theta$. In practice, we use a special pose entry $K+1$ to identify the non-informative poses. The energy for the pose level becomes
\begin{equation} \label{Eq_poseEnergy}
E_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta
\delta(z_{t}=K+1)\right]
\end{equation}
where $\delta(\ell) = 1$ if $\ell$ is true and $\delta(\ell) = 0$ if
$\ell$ is false. The action level also changes its energy:
\begin{equation}
\begin{split}
\label{Eq_actionEnergy}
E_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).
\end{split}
\end{equation}
\begin{comment}
Integrating all contribution detailed in previous sections, the model is written as:
Energy function:
\begin{equation}
E = E_{\text{activity}} + E_{\text{action}} + E_{\text{pose}}
+ E_{\text{action transition}} + E_{\text{pose transition}}.
\end{equation}
\begin{equation}
E_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta
\delta(z_{t}=K+1)\right]
\end{equation}
\begin{equation}
E_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).
\end{equation}
\begin{equation}
h_g^{r}(U) = \sum_{t} \delta_{u_{t,r}}^g
\end{equation}
So the energy in the activity level is
\begin{equation}
E_{\text{activity}} = \sum_{r} {\alpha^r_{y}}^\top h^{r}(U) = \sum_{r,g,t} \alpha^r_{y,g} \delta_{u_{t,r}}^g
\end{equation}
\begin{equation}
E_{\text{action transition}} = \sum_{r,a,a'} \gamma^r_{a',a} \sum_{t} \delta_{v_{t-1,r}}^{a'}\delta_{v_{t,r}}^a
\end{equation}
\begin{equation}
E_{\text{pose transition}} =\sum_{r,k,k'} \eta^r_{k',k}\sum_{t}\delta_{z_{t-1,r}}^{k'}\delta_{z_{t,r}}^{k}
\end{equation}
\end{comment}
\subsection{Inference}
\label{subsec:inference}
The input to the inference algorithm is a new video sequence with features
$\vec{x}$. The task is to infer the best complex action label $\hat y$, and to
produce the best labeling of actionlets $\hat{\vec{v}}$ and motion poselets $\hat{\vec{z}}$.
{\small
\begin{equation}
\hat y, \hat{\vec{v}}, \hat{\vec{z}} = \argmax_{y, \vec{v},\vec{z}} E(\vec{x}, \vec{v}, \vec{z}, y)
\end{equation}}
We can solve this by exhaustively enumerating all values of complex actions $y$, and solving for $\hat{\vec{v}}$ and $\hat{\vec{z}}$ using:
\small
\begin{equation}
\begin{split}
\hat{\vec{v}}, \hat{\vec{z}} | y ~ =~ & \argmax_{\vec{v},\vec{z}} ~ \sum_{r=1}^R \sum_{t=1}^T \left( \alpha^r_{y,u(v{(t,r)})}
+ \beta^r_{v_{(t,r)},z_{(t,r)}}\right. \\
&\quad\quad \left.+ {w^r_{z_{(t,r)}}}^\top x_{t,r} \delta(z_{(t,r)} \le K) + \theta^r \delta_{z_{(t,r)}}^{K+1} \right. \\
& \quad\quad \left.+ \gamma^r_{v_{({t-1},r)},v_{(t,r)}} + \eta^r_{z_{({t-1},r)},z_{(t,r)}} \vphantom{{w^r_{z_{(t,r)}}}^\top x_{t,r}} \right). \\
\end{split}
\label{eq:classify_inference}
\end{equation}
\normalsize
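For a fixed complex action $y$ and a single region, the maximization in Eq. (\ref{eq:classify_inference}) couples consecutive frames only through the transition terms, so it can be solved exactly with a Viterbi-style dynamic program over the joint per-frame state $(v_t,z_t)$. The sketch below is a minimal illustration under our own naming conventions (dense parameter arrays, garbage-collector scores stored as the last poselet column); summing the returned score over regions for every candidate $y$ and keeping the maximum recovers $\hat y$.
\begin{verbatim}
import numpy as np

def infer_region(X, y, alpha, beta, W, theta, gamma, eta, u_of_v):
    """Viterbi over the joint (actionlet, motion-poselet) chain for one region.

    X      : (T, d) frame descriptors of this region
    alpha  : (Y, S) complex-action vs. atomic-action weights
    beta   : (A, K+1) actionlet vs. motion-poselet weights
    W      : (K, d) motion-poselet classifiers; theta: garbage-collector score
    gamma  : (A, A) actionlet transitions; eta: (K+1, K+1) poselet transitions
    u_of_v : (A,) integer map from actionlet index to atomic-action index
    """
    T = X.shape[0]
    A, K1 = beta.shape
    pose = np.concatenate([X @ W.T, np.full((T, 1), theta)], axis=1)     # (T, K+1)
    unary = (alpha[y, u_of_v][:, None] + beta)[None] + pose[:, None, :]  # (T, A, K+1)
    pair = gamma[:, None, :, None] + eta[None, :, None, :]               # (A,K+1,A,K+1)
    dp, back = unary[0], []
    for t in range(1, T):
        cand = (dp[:, :, None, None] + pair).reshape(A * K1, A, K1)
        back.append(cand.argmax(axis=0))
        dp = cand.max(axis=0) + unary[t]
    a, k = np.unravel_index(dp.argmax(), dp.shape)
    v, z = [a], [k]
    for bt in reversed(back):                     # backtrack the best labeling
        a, k = np.unravel_index(bt[a, k], (A, K1))
        v.append(a); z.append(k)
    return dp.max(), v[::-1], z[::-1]
\end{verbatim}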
\subsection{Learning} \label{subsec:learning}
\textbf{Initial actionlet labels.} An important step in the training process is
the initialization of latent variables. This is challenging due to the lack
of spatial supervision: at each time instance, the available atomic actions can be associated with
any of the $R$ body regions.
We adopt the machinery of
self-paced
learning \cite{Kumar:EtAl:2010} to provide a suitable solution and
formulate the association between actions and body regions as an
optimization problem. We constrain this optimization using two structural
restrictions:
i) atomic action intervals must not overlap in the same region, and
ii) a labeled atomic action must be present in at least one region. We
formulate the labeling
process as a binary Integer Linear Programming (ILP) problem, where we define
$b_{r,q}^m=1$ when action interval $q \in \{1,\dots,Q_m\}$ is active in region
$r$ of video $m$; and $b_{r,q}^m=0$ otherwise. Each action interval $q$ is
associated with a single atomic action. We assume that we have initial
motion poselet labels
$z_{t,r}$ in each frame and region.
We describe the action interval $q$ and region $r$ using
the histogram $h_{r,q}^m$ of motion poselet labels. We can find
the correspondence between action intervals and regions using a formulation
that resembles the operation of $k$-means, but using the
structure of the problem to constraint the labels:
\small
\begin{equation}
\begin{split}
\text{P1}) \quad \min_{b,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} b_{r,q}^m
d( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} b_{r,q}^m\\
\text{s.t.}
\quad
& \sum_{r=1}^R b_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\
& b_{r,q_1}^m + b_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset
\text{,
}\forall r\text{, }\forall m\\
& b_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m
\end{split}
\end{equation}
with
\begin{equation}
d( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] -
\mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).
\end{equation}
\normalsize
Here, $\mu_{a_q}^r$ are the means of the descriptors with action
label $a_q$ within region $r$. We solve $\text{P1}$ iteratively using a block coordinate
descent scheme, alternating between solving for $\mu_{a}^r$ with $b_{r,q}^m$
fixed, which has a trivial solution, and then fixing $\mu_{a}^r$ to solve for
$b_{r,q}^m$, relaxing $\text{P1}$ to a linear program. Note that the second term
of the objective function in $\text{P1}$ resembles the objective function of
\emph{self-paced} learning \cite{Kumar:EtAl:2010}, managing the balance between
assigning a single region to every action or assigning all possible regions to
the respective action interval.
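The alternation just described can be summarized by the following simplified sketch (hypothetical names; for brevity the no-overlap constraint of $\text{P1}$ is dropped and the assignment step is done greedily rather than through the LP relaxation, so it illustrates the block-coordinate structure rather than being a faithful solver).
\begin{verbatim}
import numpy as np

def chi2(h, mu, eps=1e-8):
    """Chi-square distance d(h - mu) used in P1."""
    return np.sum((h - mu) ** 2 / (h + mu + eps), axis=-1)

def init_region_labels(H, actions, lam=1.0, n_iter=10, seed=0):
    """Simplified block-coordinate sketch of P1 (no-overlap constraint omitted).

    H       : (Q, R, K) pose-label histograms of every action interval and region
    actions : (Q,) atomic-action label a_q of each interval
    Returns b : (Q, R) binary assignment of intervals to regions.
    """
    rng = np.random.default_rng(seed)
    Q, R, K = H.shape
    b = np.zeros((Q, R))
    b[np.arange(Q), rng.integers(0, R, Q)] = 1        # each interval gets one region
    for _ in range(n_iter):
        # (1) cluster centers: mean descriptor per (atomic action, region)
        mu = np.zeros((actions.max() + 1, R, K))
        for a in np.unique(actions):
            for r in range(R):
                w = b[actions == a, r]
                if w.sum() > 0:
                    mu[a, r] = (w[:, None] * H[actions == a, r]).sum(0) / w.sum()
        # (2) assignment: activate region r whenever d - 1/lam is negative,
        #     always keeping the cheapest region so every interval is covered
        cost = np.stack([chi2(H[:, r], mu[actions, r]) for r in range(R)], 1) - 1.0 / lam
        b = (cost < 0).astype(float)
        b[np.arange(Q), cost.argmin(1)] = 1
    return b
\end{verbatim}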
\textbf{Learning model parameters.}
We formulate learning the model parameters as a Latent Structural SVM
problem \cite{Yu:Joachims:2010}, with latent variables for motion
poselets $\vec{z}$ and actionlets $\vec{v}$. We find values for parameters in
equations
(\ref{eq:motionposelets}-\ref{eq:actionletstransition}),
slack variables $\xi_i$, motion poselet labels $\vec{z}_i$, and actionlet labels $\vec{v}_i$,
by solving:
{\small
\begin{equation}
\label{eq:big_problem}
\min_{W,\xi_i,~i=\{1,\dots,M\}} \frac{1}{2}||W||_2^2 + \frac{C}{M} \sum_{i=1}^M\xi_i ,
\end{equation}}
where
{\small \begin{equation}
W^\top=[\alpha^\top, \beta^\top, w^\top, \gamma^\top, \eta^\top, \theta^\top],
\end{equation}}
and
{\small
\begin{equation} \label{eq:slags}
\begin{split}
\xi_i = \max_{\vec{z},\vec{v},y} \{ & E(\vec{x}_i, \vec{z}, \vec{v}, y) + \Delta( (y_i,\vec{v}_i), (y, \vec{v})) \\
& - \max_{\vec{z}_i}{ E(\vec{x}_i, \vec{z}_i, \vec{v}_i, y_i)} \}, \; \;\; i\in[1,...M].
\end{split}
\end{equation}}
In Equation (\ref{eq:slags}), each slack variable
$\xi_i$ quantifies the error of the inferred labeling for
video $i$. We solve Equation (\ref{eq:big_problem}) iteratively using the CCCP
algorithm \cite{Yuille:Rangarajan:03}, by solving for
latent labels $\vec{z}_i$ and $\vec{v}_i$ given model parameters $W$,
temporal atomic action annotations (when available), and labels of complex actions occurring in
training videos (see Section \ref{subsec:inference}). Then, we solve for
$W$ via 1-slack formulation using Cutting Plane algorithm
\cite{Joachims2009}.
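The outer CCCP alternation can be sketched as follows; \texttt{infer\_latents} and \texttt{solve\_1slack} are placeholders for the latent-completion and 1-slack cutting-plane steps described above, not functions of any particular library.
\begin{verbatim}
import numpy as np

def cccp_train(videos, labels, infer_latents, solve_1slack, n_rounds=10):
    """Skeleton of the CCCP alternation used to learn the parameters W.

    infer_latents : callable (W, video, label) -> (z_i, v_i); must handle W=None
                    on the first round by falling back to the initial labels.
    solve_1slack  : callable (videos, labels, latents) -> W (structural-SVM solve).
    Both callables are placeholders for the steps described in the text.
    """
    W, latents = None, [None] * len(videos)
    for _ in range(n_rounds):
        # 1) complete the latent variables given the current parameters
        latents = [infer_latents(W, x, y) for x, y in zip(videos, labels)]
        # 2) re-estimate W with the latent variables fixed (1-slack cutting plane)
        W_new = solve_1slack(videos, labels, latents)
        if W is not None and np.allclose(W, W_new):
            break
        W = W_new
    return W
\end{verbatim}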
The role of the loss function $\Delta((y_i,\vec{v}_i),(y,\vec{v}))$ is to penalize inference errors during
training. If the true actionlet labels are known in advance, the loss function is the same as in \cite{Lillo2014} using the actionlets instead of atomic actions:
\small \begin{equation}
\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T
\delta({v_t}_{i} \neq v_t),
\end{equation}
\normalsize
\noindent where ${v_t}_{i}$ is the true actionlet label. If the spatial ordering of actionlets is unknown (hence the latent
actionlet formulation), but the temporal composition is known, we can compute a
list $A_t$ of possible actionlets for frame $t$, and include that information
in the loss function as
\small \begin{equation}
\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T
\delta(v_t \notin A_t)
\end{equation}
\normalsize
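Both variants of the loss are straightforward to evaluate; a minimal sketch with our own function name follows, where \texttt{A\_t}, when provided, is the per-frame list of admissible actionlets.
\begin{verbatim}
import numpy as np

def delta_loss(y_true, v_true, y_pred, v_pred, lam_y=1.0, lam_v=1.0, A_t=None):
    """Loss Delta((y_i, v_i), (y, v)) for training.

    If A_t (per-frame lists of admissible actionlets) is given, the
    latent-actionlet variant is used; otherwise the fully supervised
    Hamming-style form is used.
    """
    term_y = lam_y * float(y_true != y_pred)
    if A_t is None:
        term_v = lam_v * np.mean([vt != vp for vt, vp in zip(v_true, v_pred)])
    else:
        term_v = lam_v * np.mean([vp not in allowed
                                  for vp, allowed in zip(v_pred, A_t)])
    return term_y + term_v
\end{verbatim}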
\subsection{Body regions}
We divide the body pose into $R$ fixed spatial regions and independently compute
a pose feature vector for each region. Figure \ref{fig:skeleton_limbs_regions}
illustrates the case when $R = 4$ that we use in all our experiments. Our body
pose feature vector consists of the concatenation of two descriptors. At frame
$t$ and region $r$, a descriptor $x^{g}_{t,r}$ encodes geometric information
about the spatial configuration of body joints, and a descriptor $x^{m}_{t,r}$
encodes local motion information around each body joint position.
We use the geometric descriptor from \cite{Lillo2014}:
we construct six segments that connect pairs of joints at each
region\footnote{Arm segments: wrist-elbow, elbow-shoulder, shoulder-neck, wrist-shoulder, wrist-head, and neck-torso; Leg segments: ankle-knee, knee-hip, hip-hip center, ankle-hip, ankle-torso and hip center-torso}
and compute 15 angles between those segments.
Also, three angles are calculated between a plane formed by three
segments\footnote{Arm plane: shoulder-elbow-wrist; Leg plane: hip-knee-ankle} and
the remaining three non-coplanar segments, totaling an 18-D geometric descriptor (GEO) for every region.
Our motion descriptor is based on tracking motion trajectories of key points
\cite{WangCVPR2011}, which in our case coincide with body joint positions.
We extract a HOF descriptor
using $32\times32$ RGB patches centered at the joint location for a temporal window of 15
frames. At each joint location, this produces a 108-D descriptor,
which we concatenate across all joints in each region to obtain our motion descriptor. Finally,
we apply PCA to reduce the dimensionality of our concatenated motion descriptor
to 20. The final descriptor is the concatenation of the geometric and
motion descriptors, $x_{t,r} = [x_{t,r}^g ; x_{t,r}^m]$.
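The assembly of the per-region descriptor can be sketched as follows (schematic only: the joint indices and segment definitions must follow the footnote above, and the HOF-plus-PCA motion part is replaced by raw joint velocities for brevity).
\begin{verbatim}
import numpy as np
from itertools import combinations

def angle(u, v, eps=1e-8):
    """Angle between two 3-D segment (or normal) vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    return np.arccos(np.clip(c, -1.0, 1.0))

def region_descriptor(joints_t, joints_prev, segment_pairs, plane_triple):
    """Geometric angles + simple motion part for one region at one frame.

    joints_t, joints_prev : (J, 3) joint positions at frames t and t-1
    segment_pairs         : six (i, j) joint-index pairs defining the segments
    plane_triple          : three joint indices defining the reference plane
    """
    segs = [joints_t[j] - joints_t[i] for i, j in segment_pairs]
    geo = [angle(segs[a], segs[b])
           for a, b in combinations(range(len(segs)), 2)]      # 15 angles
    normal = np.cross(joints_t[plane_triple[1]] - joints_t[plane_triple[0]],
                      joints_t[plane_triple[2]] - joints_t[plane_triple[0]])
    geo += [angle(normal, s) for s in segs[-3:]]  # last three taken as non-coplanar here
    motion = (joints_t - joints_prev).ravel()     # velocity stand-in for the HOF part
    return np.concatenate([np.asarray(geo), motion])
\end{verbatim}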
\subsection{Hierarchical compositional model}
We propose a hierarchical compositional model that spans three semantic
levels. Figure \ref{fig:overview} shows a schematic of our model. At the
top level, our model assumes that each input video has a single complex action
label $y$. Each complex action is composed of a
temporal and spatial arrangement of atomic actions with labels $\vec{u}=[u_1,\dots,u_T]$, $u_i \in \{1,\dots,S\}$.
In turn, each atomic action consists of several non-shared \emph{actionlets}, which correspond to representative sets of pose configurations for action identification, modeling the multimodality of each atomic action.
We capture actionlet assignments in $\vec{v}=[v_1,\dots,v_T]$, $v_i \in \{1,\dots,A\}$.
Each actionlet index $v_i$ corresponds to a unique and known atomic action label $u_i$, so they are related by a mapping $\vec{u} = \vec{u}(\vec{v})$. At the
intermediate level, our model assumes that each actionlet is composed of a
temporal arrangement of a subset from $K$ body poses, encoded in $\vec{z} = [z_1,\dots,z_T]$, $z_i \in \{1,\dots,K\}$,
where $K$ is a hyperparameter of the model.
These subsets capture pose geometry and local motion, so we call them \emph{motion poselets}.
Finally, at the bottom level, our model identifies motion poselets
using a bank of linear classifiers that are applied to the incoming frame
descriptors.
We build each layer of our hierarchical model on top of BoW
representations of labels. To this end, at the bottom level of our hierarchy, and for
each body region, we learn a dictionary of motion poselets. Similarly, at the mid-level of our hierarchy, we learn a dictionary of actionlets, using the BoW representation of motion poselets as inputs. At each of these levels,
spatio-temporal activations of the respective dictionary words are used
to obtain the corresponding histogram encoding the BoW representation.
The next two sections provide
details on the process to represent and learn the dictionaries of motion
poselets and actionlets. Here we discuss our
integrated hierarchical model.
We formulate our hierarchical model using an energy function.
Given a video of $T$ frames corresponding to complex action $y$ encoded by descriptors $\vec{x}$, with the label vectors $\vec{z}$ for motion poselets,
$\vec{v}$ for actionlets and $\vec{u}$ for atomic actions, we
define an energy function for a video as:
\small
\begin{align}\label{Eq_energy}
E(\vec{x},&\vec{v},\vec{z},y) = E_{\text{motion poselets}}(\vec{z},\vec{x}) \nonumber \\&+ E_{\text{motion poselets BoW}}(\vec{v},\vec{z}) +
E_{\text{atomic actions BoW}}(\vec{u}(\vec{v}),y) \nonumber \\
& + E_{\text{motion poselets transition}}(\vec{z}) + E_{\text{actionlets
transition}}(\vec{v}).
\end{align}
\normalsize
Besides the BoW representations and motion poselet classifiers
described above, Equation (\ref{Eq_energy}) includes
two energy potentials that encode information related to
temporal
transitions between pairs of motion poselets ($E_{\text{motion poselets
transition}}$) and
actionlets ($E_{\text{actionlets transition}}$).
The energy potentials are given by:
{\small
\begin{align}
\label{eq:motionposelets}
&E_{\text{mot. poselet}}(\vec{z},\vec{x}) = \sum_{r,t} \left[ \sum_{k} {w^r_k}^\top
x_{t,r}\delta_{z_{(t,r)}}^{k} + \theta^r \delta_{z_{(t,r)}}^{K+1}\right] \\
&E_{\text{mot. poselet BoW}}(\vec{v},\vec{z}) = \sum_{r,a,k} {\beta^r_{a,k}}\sum_{t}\delta_{v_{(t,r)}}^{a}\delta_{z_{(t,r)}}^{k}\\
\label{eq:actionlets_BoW}
&E_{\text{atomic act. BoW}}(\vec{u}(\vec{v}),y) =\sum_{r,s} {\alpha^r_{y,s}}\sum_{t}\delta_{u(v_{(t,r)})}^{s} \\
&E_{\text{mot. pos. trans.}}(\vec{z}) =
\sum_{r,k_{+1},k'_{+1}} \eta^r_{k,k'}
\sum_{t} \delta_{z_{(t-1,r)}}^{k}\delta_{z_{(t,r)}}^{k'} \\
\label{eq:actionletstransition}
&E_{\text{actionlet trans.}}(\vec{v}) =\sum_{r,a,a'} \gamma^r_{a,a'}
\sum_{t}
\delta_{v_{(t-1,r)}}^{a}\delta_{v_{(t,r)}}^{a'}
\end{align}
}
Our goal is to
maximize $E(\vec{x},\vec{v},\vec{z},y)$, and obtain the
spatial and temporal arrangement
of motion poselets $\vec{z}$ and actionlets $\vec{v}$, as well as, the underlying
complex action $y$.
In the previous equations, we use $\delta_a^b$ to indicate the Kronecker delta function $\delta(a = b)$, and use indexes $k \in \{1,\dots,K\}$ for motion poselets, $a \in \{1,\dots,A\}$ for actionlets, and $s \in \{1,\dots,S\}$ for atomic actions.
In the energy term for motion poselets,
$w^r_k$ are a set of $K$ linear pose classifiers applied to frame
descriptors $x_{t,r}$, according to the label of the latent variable $z_{t,r}$.
Note that there is a special label $K+1$; the role of this label will be
explained in Section \ref{subsec:garbage_collector}.
In the energy potential associated to
the BoW representation for motion poselets, $\vec{\beta}^r$ denotes a set of $A$
mid-level classifiers, whose inputs are histograms of motion
poselet labels at those frames annotated with actionlet $a$. At the highest level,
$\alpha^r_{y}$ is a linear classifier associated with complex action $y$, whose
input is the histogram of atomic action labels,
which are related to actionlet assignments by the mapping function $\vec{u}(\vec{v})$. Note that all classifiers
and labels here correspond to a single region $r$. We add the contributions of all
regions to compute the global energy of the video. The transition terms act as
linear classifiers $\eta^r$ and $\gamma^r$ over histograms of temporal transitions of motion poselets
and temporal transitions of actionlets respectively. As we have a special label $K+1$ for motion poselets, the summation index
$k_{+1}$ indicates the interval $\lbrack 1,\dots,K+1 \rbrack$.
\subsection{Learning motion poselets}
In our model, motion poselets are learned by treating them as latent variables
during training. Before training, we fix the number of motion poselets per region to $K$.
In every region $r$, we learn an independent
set of pose classifiers $\{w^r_k\}_{k=1}^K$, initializing the motion poselet
labels using the $k$-means algorithm. We learn pose classifiers,
actionlet and complex action classifiers jointly, allowing the model to discover
discriminative motion poselets useful to detect and recognize complex actions.
As shown in previous work, jointly learning linear
classifiers to identify body parts and atomic actions improves recognition
rates \cite{Lillo2014,Wang2008}, so here we follow a similar hierarchical
approach, and integrate learning
of motion poselets with the learning of actionlets.
\subsection{Learning actionlets}
\label{sec:learningactionlets}
A single linear classifier does not offer enough flexibility to identify atomic
actions that exhibit high visual variability. As an example, the atomic action
``open'' can be associated with ``opening a can'' or ``opening a
book'', displaying high variability in action execution. Consequently, we
augment our hierarchical model including multiple classifiers to
identify different modes of action execution.
Inspired by \cite{Raptis2012}, we use the \emph{Cattell's Scree test} to
find a suitable number of actionlets to model each atomic
action. Specifically, using the atomic action labels, we compute a descriptor
for every video interval using
normalized histograms of initial pose labels obtained with $k$-means. Then, for a particular atomic action
$s$, we compute the eigenvalues $\lambda(s)$ of the affinity matrix of the
atomic action descriptors, which is built using $\chi^2$ distance. For each
atomic action
$s \in \{1,\dots,S\}$, we find the number of actionlets $G_s$ as $G_s =
\argmin_i {\lambda(s)}_{i+1}^2 / (\sum_{j=1}^i {\lambda(s)}_j) + c\cdot i$, with $c=2\cdot
10^{-3}$. Finally, we cluster the descriptors from each atomic
action $s$ running $k$-means with $k = G_s$. This scheme generates
a set of non-overlapping actionlets to model each single atomic
action. In our experiments, we notice that the number of actionlets used to
model each atomic action varies typically from 1 to 8.
To transfer the new labels to the model, we define $u(v)$ as a function that
maps from actionlet label $v$ to the corresponding atomic action label
$u$. A dictionary of actionlets provides a richer representation for actions,
where several actionlets will map to a single atomic action. This behavior
resembles a max-pooling operation, where at inference time we will choose the
set of actionlets that best describe the performed actions in the video, keeping
the semantics of the original atomic action labels.
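The scree-test selection of $G_s$ can be sketched as follows; the Gaussian affinity $e^{-d}$ is our own assumption, since the text only specifies that the affinity matrix is built from the $\chi^2$ distance.
\begin{verbatim}
import numpy as np

def num_actionlets(descriptors, c=2e-3, eps=1e-8):
    """Scree-test choice of the number of actionlets G_s for one atomic action.

    descriptors : (N, K) normalized pose-label histograms of the action's intervals.
    """
    if len(descriptors) < 2:
        return 1
    d = np.sum((descriptors[:, None] - descriptors[None]) ** 2
               / (descriptors[:, None] + descriptors[None] + eps), axis=-1)
    A = np.exp(-d)                                       # assumed affinity kernel
    lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    costs = [lam[i] ** 2 / (lam[:i].sum() + eps) + c * i
             for i in range(1, len(lam))]
    return int(np.argmin(costs)) + 1                     # argmin over i = 1, 2, ...
\end{verbatim}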
\subsection{A garbage collector for motion poselets}
\label{subsec:garbage_collector}
While poses are highly informative for action recognition, an input video
might contain irrelevant or idle zones, where the underlying poses are noisy
or non-discriminative to identify the actions being performed in the video. As
a result, low-scoring motion poselets could degrade the pose classifiers during
training, decreasing their performance. To deal with this problem, we include in
our model a \emph{garbage collector} mechanism for motion poselets. This
mechanism operates by assigning all low-scoring motion poselets to
the $(K+1)$-th pose dictionary entry. These collected poses are
associated with a learned score lower than $\theta^r$, as in Equation
(\ref{eq:motionposelets}). Our experiments show that this mechanism leads
to learning more discriminative motion poselet classifiers.
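Considered in isolation at the pose level (i.e., ignoring the higher-level terms that also influence the labels at inference), the garbage collector reduces to the following rule; the sketch uses our own variable names.
\begin{verbatim}
import numpy as np

def pose_energy_with_gc(X, W, theta):
    """Pose-level energy with the garbage collector, ignoring higher-level terms.

    Frames whose best motion-poselet score falls below theta take the extra
    label K+1 and contribute the fixed score theta instead.
    X : (T, d) frame descriptors; W : (K, d) motion-poselet classifiers.
    """
    scores = X @ W.T                               # (T, K)
    best = scores.max(axis=1)
    z = np.where(best >= theta, scores.argmax(axis=1), W.shape[0])  # K+1 -> index K
    energy = np.where(z < W.shape[0], best, theta).sum()
    return energy, z
\end{verbatim}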
\input{learning}
\input{inference}
\subsection{Video Representation} \label{subsec:videorepresentation}
[EXPLAIN BETTER, ADD FIGURE]
Our model is based on skeleton information encoded in joint annotations. We use the same geometric descriptor as in \cite{Lillo2014}, using angles between segments connecting two joints, and angles between these segments and a plane formed by three joints. In addition to geometry, other authors \cite{Zanfir2013,Tao2015,Wang2014} have noticed that including local motion information is beneficial for the categorization of videos. Moreover, in \cite{zhu2013fusing} the authors create a fused descriptor using spatio-temporal descriptors and joint descriptors, showing that the combination performs better than either one alone. With this in mind, we augment the original geometric descriptor with motion information: when only skeleton joint data are available, we use the joint displacements (velocities) as a motion descriptor. If RGB video is available, we use the HOF descriptor extracted from the trajectory of each joint over a small temporal window.
For the geometric descriptor, we use 6 segments per region (see Fig. XXXX). The descriptor is composed of the angles between the segments (15 angles) and the angles between a plane formed by three segments and the non-coplanar segments (3 angles). For the motion descriptor, we use either the 3D velocity of every joint in each region as a concatenated vector (18 dimensions), or the concatenated HOF descriptor of the joint trajectories, transformed to a low-dimensional space using PCA (20 dimensions).
\section{Introduction}
Recent discovery of Weyl semimetals (WSMs)~\cite{Lv2015TaAs,Xu2015TaAs,Yang2015TaAs} in realistic materials has stimulated tremendous research interest in topological semimetals, such as WSMs, Dirac semimetals, and nodal line semimetals~\cite{volovik2003universe,Wan2011,Balents2011,Burkov2011,Hosur2013,Vafek2014}, as a new frontier of condensed matter physics after the discovery of topological insulators~\cite{qi2011RMP, Hasan2010}.
The WSMs are of particular interest not only because of their exotic Fermi-arc-type surface states but also because of their appealing bulk chiral magneto-transport properties, such as the chiral anomaly effect~\cite{Xiong2015,Huang2015anomaly,Arnold2015}, nonlocal transport~\cite{Parameswaran2014,Baum2015}, large magnetoresistance, and high mobility~\cite{Shekhar2015}.
Currently discovered WSM materials can be classified into two groups. One group breaks crystal inversion symmetry but preserves time-reversal symmetry (e.g., TaAs-family transition-metal pnictides~\cite{Weng2015,Huang2015} and WTe$_2$- and MoTe$_2$-family transition-metal dichalcogenides~\cite{Soluyanov2015WTe2,Sun2015MoTe2,Wang2016MoTe2,Koepernik2016,Deng2016,Jiang2016}). The other group breaks time-reversal symmetry in ferromagnets with possible tilted moments (e.g., magnetic Heusler GdPtBi~\cite{Hirschberger2016,Shekhar2016} and YbMnBi$_2$~\cite{Borisenko2015}). An antiferromagnetic (AFM) WSM compound has yet to be found, although Y$_2$Ir$_2$O$_7$ with a noncoplanar AFM structure was theoretically predicted to be a WSM candidate~\cite{Wan2011}.
In a WSM, the conduction and valence bands cross each other linearly through nodes called Weyl points. Between a pair of Weyl points with opposite chiralities (sink or source of the Berry curvature)~\cite{volovik2003universe}, the emerging Berry flux can lead to the anomalous Hall effect (AHE)~\cite{Burkov2014}, as observed in GdPtBi~\cite{Hirschberger2016,Shekhar2016}, and an intrinsic spin Hall effect (SHE), as predicted in TaAs-type materials~\cite{Sun2016}, for systems without and with time-reversal symmetry, respectively. Herein, we propose a simple recipe to search for WSM candidates among materials that host a strong AHE or SHE.
Recently, Mn$_3$X (where $\rm X=Sn$, Ge, and Ir), which exhibit noncollinear AFM phases at room temperature, have been found to show a large AHE~\cite{Kubler2014,Chen2014,Nakatsuji2015,Nayak2016} and SHE~\cite{Zhang2016}, provoking our interest to investigate their band structures. In this work, we report the existence of Weyl fermions in the Mn$_3$Ge and Mn$_3$Sn compounds and the resultant Fermi arcs on the surface by \textit{ab initio} calculations, awaiting experimental verification. Dozens of Weyl points exist near the Fermi energy in their band structures, and these can be well understood with the assistance of the lattice symmetry.
\section{Methods}
The electronic ground states of Mn$_3$Ge and Mn$_3$Sn were calculated by using density-functional theory (DFT) within the Perdew-Burke-Ernzerhof-type generalized-gradient approximation (GGA)~\cite{Perdew1996} using the Vienna {\it ab initio} Simulation Package (\textsc{vasp})~\cite{Kresse1996}. The $3d^6 4s^1$, $4s^24p^2$, and $5s^2 5p^2$ electrons were considered as valence electrons for Mn, Ge, and Sn atoms, respectively. The primitive cells with experimental crystal parameters $a=b=5.352$ and $c=4.312$~\AA~for Mn$_3$Ge
and $a=b=5.67$ and $c=4.53$~\AA~for Mn$_3$Sn
were adopted. Spin-orbit coupling (SOC) was included in all calculations.
To identify the Weyl points with the monopole feature, we calculated the Berry curvature distribution in momentum space.
The Berry curvature was calculated from a tight-binding Hamiltonian based on localized Wannier functions~\cite{Mostofi2008} projected from the DFT Bloch wave functions. We chose atomic-orbital-like Wannier functions, including Mn-$spd$ and Ge-$sp$/Sn-$p$ orbitals, so that the tight-binding Hamiltonian is consistent with the symmetry of the \textit{ab initio} calculations.
From such a Hamiltonian, the Berry curvature can be calculated using the Kubo-formula approach\cite{Xiao2010},
\begin{equation}
\label{equation1}
\Omega^{\gamma}_n(\vec{k})= 2i\hbar^2 \sum_{m \ne n} \dfrac{\langle u_{n}(\vec{k})|\hat{v}_{\alpha}|u_{m}(\vec{k})\rangle \langle u_{m}(\vec{k})|\hat{v}_{\beta}|u_{n}(\vec{k})\rangle}{(E_{n}(\vec{k})-E_{m}(\vec{k}))^2},
\end{equation}
where $\Omega^{\gamma}_n(\vec{k})$ is the Berry curvature in momentum space for a given band $n$,
$\hat{v}_{\alpha (\beta, \gamma)}=\frac{1}{\hbar}\frac{\partial\hat{H}}{\partial k_{\alpha (\beta, \gamma)}}$ is the velocity operator with $\alpha,\beta,\gamma=x,y,z$, and $|u_{n}(\vec{k})\rangle$ and $E_{n}(\vec{k})$ are the eigenvector and eigenvalue of the Hamiltonian $\hat{H}(\vec{k})$, respectively. The summation of $\Omega^{\gamma}_n(\vec{k})$ over all valence bands gives the Berry curvature vector $\mathbf{\Omega} ~(\Omega^x,\Omega^y,\Omega^z)$.
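As an illustration of how Eq. (\ref{equation1}) can be evaluated from a Wannier tight-binding model, the following sketch (ours, not the actual post-processing code used here) computes the band-resolved $\Omega^z_n(\vec{k})$ with finite-difference velocity operators; taking the imaginary part implements the antisymmetrized form of the formula.
\begin{verbatim}
import numpy as np

def berry_curvature_z(Hk, k, delta=1e-4, hbar=1.0):
    """Band-resolved Berry curvature Omega^z_n(k) from a tight-binding model.

    Hk : callable mapping a 3-vector k to an (N, N) Hermitian Bloch Hamiltonian.
    Velocity operators v_x, v_y are built by finite differences of H(k).
    """
    E, U = np.linalg.eigh(Hk(k))
    def v(axis):
        dk = np.zeros(3); dk[axis] = delta
        return U.conj().T @ ((Hk(k + dk) - Hk(k - dk)) / (2 * delta * hbar)) @ U
    vx, vy = v(0), v(1)
    dE2 = (E[:, None] - E[None, :]) ** 2
    np.fill_diagonal(dE2, np.inf)                     # drop the m == n terms
    omega_nm = -2.0 * hbar**2 * np.imag(vx * vy.T) / dE2
    return omega_nm.sum(axis=1)                       # Omega^z_n(k) for every band n

# toy two-band Weyl Hamiltonian as a usage example:
# Hk = lambda k: np.array([[k[2], k[0] - 1j*k[1]], [k[0] + 1j*k[1], -k[2]]])
# omega = berry_curvature_z(Hk, np.array([0.1, 0.2, 0.3]))
\end{verbatim}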
In addition, the surface states that demonstrate the Fermi arcs were calculated on a semi-infinite surface, where the momentum-resolved local density of states (LDOS) on the surface layer was evaluated based on the Green's function method. We note that the current surface band structure corresponds to the bottom surface of a half-infinite system.
\section{Results and Discussion}
\subsection{Symmetry analysis of the antiferromagnetic structure}
Mn$_3$Ge and Mn$_3$Sn share the same layered hexagonal lattice (space group $P6_3/mmc$, No. 194).
Inside a layer, Mn atoms form a Kagome-type lattice with mixed triangles and hexagons and Ge/Sn atoms are located at the centers of these hexagons.
Each Mn atom carries a magnetic moment of 3.2\,$\mu_B$ in Mn$_3$Sn and 2.7\,$\mu_B$ in Mn$_3$Ge.
As revealed in a previous study~\cite{Zhang2013}, the ground magnetic state is a
noncollinear AFM state, where Mn moments align inside the $ab$ plane and form 120-degree angles with neighboring moment vectors, as shown in Fig.~\ref{stru}b. Along the $c$ axis, stacking two layers leads to the primitive unit cell.
Given the magnetic lattice, these two layers can be transformed into each other by inversion symmetry or with a mirror reflection ($M_y$) adding a half-lattice ($c/2$) translation, i.e., a nonsymmorphic symmetry $\{M_y|\tau = c/2\}$. In addition, two other mirror reflections ($M_x$ and $M_z$) adding time reversal (T), $M_x T$ and $M_z T$, exist.
In momentum space, we can utilize three important symmetries, $M_x T$, $M_z T$, and $M_y$, to understand the electronic structure and locate the Weyl points. Suppose a Weyl point with chirality $\chi$ (+ or $-$) exists at a generic position $\mathbf{k}~(k_x,k_y,k_z)$.
Mirror reflection reverses $\chi$ while time reversal does not, and both of them act on $\mathbf{k}$. The transformations are as follows:
\begin{equation}
\begin{aligned}
M_x T : & ~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, -k_z); &~\chi &\rightarrow -\chi \\
M_z T : &~ (k_x,k_y,k_z) \rightarrow (-k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\
M_y : &~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\
\end{aligned}
\label{symmetry}
\end{equation}
Each of the above three operations doubles the number of Weyl points. Thus, eight nonequivalent Weyl points can be generated at $(\pm k_x,+k_y,\pm k_z)$ with chirality $\chi$ and
$(\pm k_x,-k_y,\pm k_z)$ with chirality $-\chi$ (see Fig. 1d). We note that the $k_x=0/\pi$ and $k_z=0/\pi$ planes can host Weyl points. However, the $k_y=0/\pi$ plane cannot host Weyl points, because $M_y$ simply reverses the chirality and would annihilate a Weyl point with its mirror image if one existed there. Similarly, the $M_y$ mirror reflection requires that a nonzero anomalous Hall conductivity can only exist in the $xz$ plane (i.e., $\sigma_{xz}$), as already shown in Ref.~\onlinecite{Nayak2016}.
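The bookkeeping implied by Eq. (\ref{symmetry}) is easy to automate; the short sketch below (an illustrative helper of our own) closes a seed Weyl point under the three operations and returns the eight partners together with their chiralities.
\begin{verbatim}
import numpy as np

def weyl_partners(k, chi):
    """Orbit of one Weyl point (k, chirality) under the M_xT, M_zT and M_y
    operations given in the text."""
    ops = [(np.diag([ 1, -1, -1]), -1),   # M_x T : (kx, -ky, -kz), flips chirality
           (np.diag([-1, -1,  1]), -1),   # M_z T : (-kx, -ky, kz), flips chirality
           (np.diag([ 1, -1,  1]), -1)]   # M_y   : (kx, -ky, kz),  flips chirality
    points, changed = {(tuple(np.round(k, 6)), chi)}, True
    while changed:
        changed = False
        for kk, c in list(points):
            for M, s in ops:
                new = (tuple(np.round(M @ np.array(kk), 6)), c * s)
                if new not in points:
                    points.add(new); changed = True
    return sorted(points)

# e.g. weyl_partners(np.array([-0.325, 0.405, 0.10]), -1) returns 8 points.
\end{verbatim}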
In addition, the symmetry of the 120-degree AFM state is slightly broken in these materials, owing to the existence of a tiny net moment ($\sim$0.003~$\mu_B$ per unit cell)~\cite{Nakatsuji2015,Nayak2016,Zhang2013}. Such weak symmetry breaking seems to induce negligible effects in transport measurements. However, it gives rise to a perturbation of the band structure, for example, slightly shifting the mirror image of a Weyl point from its expected position, as we will see in the surface states of Mn$_3$Ge.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figure1.png}
\end{center}
\caption{ Crystal and magnetic structures of Mn$_3X$ (where $\rm X = Sn$ or Ge) and related symmetry.
(a) Crystal structure of Mn$_3$X. Three mirror planes are shown in purple, corresponding to
\{$M_y|\tau=c/2$\}, $M_xT$, and $M_zT$ symmetries.
(b) Top view along the $c$ axis of the Mn sublattice. Chiral AFM with an angle of 120 degrees between neighboring magnetic moments is formed in each Mn layer.
The mirror planes that correspond to $M_xT$ and \{$M_y|\tau=c/2$\} are marked by dashed lines.
(c) Symmetry in momentum space, $M_y$, $M_xT$, and $M_zT$.
If a Weyl point appears at $(k_x,k_y,k_z)$, eight Weyl points in total can be generated at $(\pm k_x,\pm k_y,\pm k_z)$ by the above three symmetry operations. For convenience, we choose the $k_y=\pi$ plane for $M_y$ here.
}
\label{stru}
\end{figure}
\begin{table}
\caption{
Positions and energies of Weyl points in first Brillouin zone for Mn$_3$Sn.
The positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.
Energies are relative to the Fermi energy $E_F$.
Each type of Weyl point has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.
}
\label{table:Mn3Sn}
\centering
\begin{tabular}{cccccc}
\toprule
\hline
Weyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\
\hline
W$_1$ & $-0.325$ & 0.405 & 0.000 & $-$ & 86 \\
W$_2$ & $-0.230$ & 0.356 & 0.003 & + & 158 \\
W$_3$ & $-0.107$ & 0.133 & 0.000 & $-$ & 493 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{
Positions and energies of Weyl points in the first Brillouin zone for Mn$_3$Ge.
The positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.
Energies are relative to the Fermi energy $E_F$.
Each of W$_{1,2,7}$ has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.
W$_4$ has four copies at $(k_x \approx 0, \pm k_y, \pm k_z)$ and
W$_9$ has two copies at $(k_x \approx 0, \pm k_y, k_z =0)$.
Each of the other Weyl points has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, \pm k_z)$.
} \label{table:Mn3Ge}
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
\hline
Weyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\
\hline
W$_1$ & $-0.333$ & 0.388 & $-0.000$ & $-$ & 57 \\
W$_2$ & 0.255 & 0.378 & $-0.000$ & + & 111 \\
W$_3$ & $-0.101$ & 0.405 & 0.097 & $-$ & 48 \\
W$_4$ & $-0.004$ & 0.419 & 0.131 & + & 8 \\
W$_5$ & $-0.048$ & 0.306 & 0.164 & + & 77 \\
W$_6$ & 0.002 & 0.314 & 0.171 & $-$ & 59 \\
W$_7$ & $-0.081$ & 0.109 & 0.000 & + & 479 \\
W$_8$ & 0.069 & $-0.128$ & 0.117 & + & 330 \\
W$_9$ & 0.004 & $-0.149$ & $-0.000$ & + & 470 \\
\hline
\end{tabular}
\end{table}
\subsection{Weyl points in the bulk band structure}
The bulk band structures are shown along high-symmetry lines in Fig.~\ref{bandstrucure} for Mn$_3$Ge and Mn$_3$Sn. It is not surprising that the two materials exhibit similar band dispersions.
At first glance, one can find two seemingly degenerate band-crossing points at the $Z$ and $K$ points, which lie below the Fermi energy. Because of $M_z T$ and the nonsymmorphic symmetry \{$M_y|\tau=c/2$\}, the bands are supposed to be quadruply degenerate at the Brillouin zone boundary $Z$, forming a Dirac point protected by the nonsymmorphic space group~\cite{Young2012,Schoop2015,Tang2016}. Given the slight mirror symmetry breaking by the residual net magnetic moment, this Dirac point is gapped at $Z$ (as shown in the enlarged panel) and splits into four Weyl points, which are very close to each other in $k$ space. A tiny gap also appears at the $K$ point, near which two additional Weyl points appear. Since the separations between these Weyl points are very small near both the $Z$ and $K$ points, they may have little observable consequence in experiments such as those studying Fermi arcs. Therefore, we will not focus on them in the following investigation.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figure2.png}
\end{center}
\caption{
Bulk band structures for (a) Mn$_3$Sn and (b) Mn$_3$Ge along high-symmetry lines with SOC.
The bands near the $Z$ and $K$ (indicated by red circles) are expanded to show details in (a).
The Fermi energy is set to zero.}
\label{bandstrucure}
\end{figure}
Mn$_3$Sn and Mn$_3$Ge are actually metallic, as seen from the band structures. However, we retain the terminology of Weyl semimetal for simplicity and consistency. The valence and conduction bands cross each other many times near the Fermi energy, generating multiple pairs of Weyl points. We first investigate the Sn compound. Supposing that the total valence electron number is $N_v$, we search for the crossing points between the $N_v ^{\rm th}$ and $(N_v +1) ^{\rm th}$ bands.
As shown in Fig.~\ref{bc_Mn3Sn}a, there are six pairs of Weyl points in the first Brillouin zone; these can be classified into three groups according to their positions, noted as W$_1$, W$_2$, and W$_3$. These Weyl points lie in the $M_z$ plane (with W$_2$ points being only slightly off this plane owing to the residual-moment-induced symmetry breaking) and slightly above the Fermi energy. Therefore, there are four copies for each of them according to the symmetry analysis in Eq.~\ref{symmetry}.
Their representative coordinates and energies are listed in Table~\ref{table:Mn3Sn} and also indicated in Fig.~\ref{bc_Mn3Sn}a. A Weyl point (e.g., W$_1$ in Figs.~\ref{bc_Mn3Sn}b and ~\ref{bc_Mn3Sn}c) acts as a source or sink of the Berry curvature $\mathbf{\Omega}$, clearly showing the monopole feature with a definite chirality.
In contrast to Mn$_3$Sn, Mn$_3$Ge displays many more Weyl points. As shown in Fig.~\ref{bc_Mn3Ge}a and listed in Table~\ref{table:Mn3Ge}, there are nine groups of Weyl points. Here W$_{1,2,7,9}$ lie in the $M_z$ plane with W$_9$ on the $k_y$ axis, W$_4$ appears in the $M_x$ plane, and the others are in generic positions. Therefore, there are four copies of W$_{1,2,7,4}$, two copies of W$_9$, and eight copies of other Weyl points.
Although there are many other Weyl points in higher energies owing to different band crossings, we mainly focus on the current Weyl points that are close to the Fermi energy. The monopole-like distribution of the Berry curvature near these Weyl points is verified; see W$_1$ in Fig.~\ref{bc_Mn3Ge} as an example.
Without including SOC, we observed a nodal-ring-like band crossing in the band structures of both Mn$_3$Sn and Mn$_3$Ge. SOC gaps the nodal rings but leaves isolated band-touching points, i.e., Weyl points. Since Mn$_3$Sn exhibits stronger SOC than Mn$_3$Ge, many Weyl points with opposite chirality may annihilate each other as they are pushed together by the stronger SOC in Mn$_3$Sn. This might be why Mn$_3$Sn exhibits fewer Weyl points than Mn$_3$Ge.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure3.png}
\end{center}
\caption{Surface states of Mn$_3$Sn.
(a) Distribution of Weyl points in momentum space.
Black and white points represent Weyl points with $-$ and + chirality, respectively.
(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.
(d) Fermi surface at $E_F= 86$ meV crossing the W$_1$ Weyl points.
The color represents the surface LDOS.
Two pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.
(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.
(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.
}
\label{bc_Mn3Sn}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure4.png}
\end{center}
\caption{ Surface states of Mn$_3$Ge.
(a) Distribution of Weyl points in momentum space.
Black and white points represent Weyl points with $-$ and $+$ chirality, respectively. Larger points indicate two Weyl points ($\pm k_z$) projected onto this plane.
(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.
(d) Fermi surface at $E_F= 55$ meV crossing the W$_1$ Weyl points.
The color represents the surface LDOS.
Two pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.
(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.
(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.
}
\label{bc_Mn3Ge}
\end{figure}
\subsection{Fermi arcs on the surface}
The existence of Fermi arcs on the surface is one of the most significant consequences of Weyl points inside the three-dimensional (3D) bulk. We first investigate the surface states of Mn$_3$Sn, which has a simpler bulk band structure with fewer Weyl points. When the W$_{2,3}$ Weyl points are projected onto the (001) surface, they overlap with other bulk bands that overwhelm the surface states. Fortunately, the W$_1$ Weyl points are visible on the Fermi surface. When the Fermi energy crosses them, the W$_1$ Weyl points appear as the touching points of neighboring hole and electron pockets; therefore, they are typical type-II Weyl points~\cite{Soluyanov2015WTe2}. Indeed, their energy dispersions demonstrate strongly tilted Weyl cones.
The Fermi surface of the surface band structure is shown in Fig.~\ref{bc_Mn3Sn}d for the Sn compound. In each corner of the surface Brillouin zone, a pair of W$_1$ Weyl points with opposite chirality exists. Connecting such a pair of Weyl points, a long Fermi arc appears in both the Fermi surface (Fig.~\ref{bc_Mn3Sn}d) and the band structure (Fig.~\ref{bc_Mn3Sn}e). Although the projection of the bulk bands exhibits the pseudo-symmetry of a hexagonal lattice, the surface Fermi arcs do not. It is clear that the Fermi arcs originating from two neighboring Weyl pairs (see Fig.~\ref{bc_Mn3Sn}d) do not exhibit $M_x$ reflection symmetry, because the chirality of the Weyl points apparently violates $M_x$ symmetry. For a generic $k_x$--$k_z$ plane between each pair of W$_1$ Weyl points, the net Berry flux points in the $-k_y$ direction. As a consequence, the Fermi velocities of both Fermi arcs point in the $+k_x$ direction on the bottom surface (see Fig.~\ref{bc_Mn3Sn}f). These two right movers are consistent with the nonzero net Berry flux, i.e., a Chern number of 2.
For Mn$_3$Ge, we also focus on the W$_1$-type Weyl points at the corners of the hexagonal Brillouin zone. In contrast to Mn$_3$Sn, Mn$_3$Ge exhibits a more complicated Fermi surface. Fermi arcs exist to connect a pair of W$_1$-type Weyl points with opposite chirality, but they are divided into three pieces as shown in Fig.~\ref{bc_Mn3Ge}d. In the band structures (see Figs.~\ref{bc_Mn3Ge}e and f), these three pieces are indeed connected together as a single surface state. Crossing a line between two pairs of W$_1$ points, one can find two right movers in the band structure, which are indicated as p1 and p2 in Fig.~\ref{bc_Mn3Ge}f. The existence of two chiral surface bands is consistent with a nontrivial Chern number between these two pairs of Weyl points.
\section{Summary}
In summary, we have discovered the Weyl semimetal state in the chiral AFM compounds Mn$_3$Sn and Mn$_3$Ge by {\it ab~initio} band structure calculations.
Multiple Weyl points were observed in the bulk band structures, most of which are type II.
The positions and chirality of Weyl points are in accordance with the symmetry of the magnetic lattice.
For both compounds, Fermi arcs were found on the surface, each of which connects a pair of Weyl points with opposite chirality, calling for further experimental investigations such as angle-resolved photoemission spectroscopy.
The discovery of Weyl points verifies the large anomalous Hall conductivity observed recently in the title compounds.
Our work further reveals a guiding principle to search for Weyl semimetals among materials
that exhibit a strong anomalous Hall effect.
\begin{acknowledgments}
We thank Claudia Felser, J{\"u}rgen K{\"u}bler and Ajaya K. Nayak for helpful discussions.
We acknowledge the Max Planck Computing and Data Facility (MPCDF) and Shanghai Supercomputer Center for computational resources and the German Research Foundation (DFG) SFB-1143 for financial support.
\end{acknowledgments}
\section{Introduction}
Conformal invariance was first recognised to be of physical interest when it was realized that the Maxwell equations are covariant under the $15$-dimensional conformal group \cite{Cu,Bat}, a fact that motivated a more detailed analysis of conformal invariance in other physical contexts such as General Relativity, Quantum Mechanics or high energy physics \cite{Ful}. These applications further suggested to study conformal invariance in connection with the physically-relevant groups, among which the Poincar\'e and Galilei groups were the first to be considered. In this context, conformal extensions of the Galilei group have been considered in Galilei-invariant field theories, in the study of possible dynamics of interacting particles as well as in the nonrelativistic AdS/CFT correspondence
\cite{Bar54,Hag,Hav,Zak,Fig}. Special cases such as the (centrally extended) Schr\"odinger algebra $\widehat{\mathcal{S}}(n)$, corresponding to the maximal invariance group of the
free Schr\"odinger equation, have been studied in detail by various authors, motivated by different applications such as the kinematical invariance of hierarchies of partial differential equations, Appell systems, quantum groups or representation theory \cite{Ni72,Ni73,Do97,Fra}. The class of Schr\"odinger algebras can be generalized in a natural manner to the so-called conformal Galilei algebras $\mathfrak{g}_{\ell}(d)$ for (half-integer) values $\ell\geq \frac{1}{2}$,
also corresponding to semidirect products of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ with a Heisenberg algebra but with a higher-dimensional characteristic representation.\footnote{By characteristic representation we mean the representation of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ that describes the action on the Heisenberg algebra.} Such algebras, which can be interpreted as a nonrelativistic analogue of the conformal algebra, have been used in a variety of contexts, ranging from classical (nonrelativistic) mechanics, electrodynamics and fluid dynamics to higher-order Lagrangian mechanics \cite{Ai12,Tac,Du11,St13}.
The algebraic structure of the conformal Galilei algebra $\mathfrak{g}_{\ell}(d)$ for values of $\ell\geq \frac{3}{2}$ and its representations have been analyzed in some detail, and algorithmic procedures to compute their Casimir operators have been proposed (see e.g. \cite{Als17,Als19} and references therein). In the recent note \cite{raub}, a synthetic formula for the Casimir operators of the $\mathfrak{g}_{\ell}(d)$ algebra has been given. Although not cited explicitly, the
procedure used there corresponds to the so-called ``virtual-copy'' method, a technique well known for some years that makes it possible to compute the Casimir operators of a Lie algebra using those of its maximal semisimple subalgebra (\cite{Que,C23,C45,SL3} and references therein).
\medskip
\noindent
In this work, we first propose a further generalization of the conformal Galilei algebras $\mathfrak{g}_{\ell}(d)$, replacing the $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ subalgebra of the latter by the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$. As the defining representation $\rho_d$ of $\mathfrak{so}(p,q)$ is real for all values $p+q=d$ \cite{Tits}, the structure of a semidirect product with a Heisenberg Lie algebra remains unaltered. The Lie algebras $\mathfrak{Gal}_{\ell}(p,q)$ describe a class of semidirect products of semisimple and Heisenberg Lie algebras among which $\mathfrak{g}_{\ell}(d)$ corresponds to the case with a largest maximal compact subalgebra.
Using the method developed in \cite{C45}, we construct a virtual copy of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ in the enveloping algebra of $\mathfrak{Gal}_{\ell}(p,q)$ for all half-integer values of $\ell$ and any $d=p+q\geq 3$. The Casimir operators of these Lie algebras are determined combining the analytical and the matrix trace methods, showing how to compute them explicitly in terms of the determinant of a polynomial matrix.
\medskip
\noindent We further determine the exact number of Casimir operators for the unextended Lie algebras $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ obtained by factorizing
$\mathfrak{Gal}_{\ell}(p,q)$ by its centre. Using the reformulation of the Beltrametti-Blasi formula in terms of the Maurer-Cartan equations, we show that although the number $\mathcal{N}$ of invariants increases considerably for fixed $\ell$ and varying $d$, a generic polynomial formula at most quadratic in $\ell$ and $d$ that gives the exact value of $\mathcal{N}$ can be established. Depending on whether or not the relation $d\leq 2\ell+2$ is satisfied, it is shown that $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ admits a complete set of invariants formed by operators that do not depend on the generators of the Levi subalgebra. An algorithmic procedure to compute these invariants by means of a reduction to a linear system is proposed.
\section{Maurer-Cartan equations of Lie algebras and Casimir operators }
Given a Lie algebra $ \frak{g}=\left\{X_{1},..,X_{n}\; |\;
\left[X_{i},X_{j}\right]=C_{ij}^{k}X_{k}\right\}$ in terms of
generators and commutation relations, we are principally interested
in (polynomial) operators
$C_{p}=\alpha^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$ in the
generators of $\frak{g}$ such that the constraint $
\left[X_{i},C_{p}\right]=0$,\; ($i=1,..,n$) is satisfied. Such an
operator can be shown to lie in the centre of the enveloping
algebra of $\frak{g}$ and is called a (generalized) Casimir
operator. For semisimple Lie algebras, the determination of
Casimir operators can be done using structural properties
\cite{Ra,Gel}. However, for non-semisimple Lie algebras the relevant
invariant functions are often rational or even transcendental
functions \cite{Bo1,Bo2}. This suggests developing a method that
covers arbitrary Lie algebras. One convenient approach is the
analytical realization: the generators of the Lie algebra
$\frak{g}$ are realized in the space $C^{\infty }\left(
\frak{g}^{\ast }\right) $ by means of the differential operators:
\begin{equation}
\widehat{X}_{i}=C_{ij}^{k}x_{k}\frac{\partial }{\partial x_{j}},
\label{Rep1}
\end{equation}
where $\left\{ x_{1},..,x_{n}\right\}$ are the coordinates in a dual basis of
$\left\{X_{1},..,X_{n}\right\} $. The invariants of $\frak{g}$ hence correspond to solutions of the following
system of partial differential equations:
\begin{equation}
\widehat{X}_{i}F=0,\quad 1\leq i\leq n. \label{sys}
\end{equation}
Whenever we have a polynomial solution of (\ref{sys}), the
symmetrization map defined by
\begin{equation}
{\rm Sym(}x_{i_{1}}^{a_{1}}..x_{i_{p}}^{a_{p}})=\frac{1}{p!}\sum_{\sigma\in
S_{p}}x_{\sigma(i_{1})}^{a_{1}}..x_{\sigma(i_{p})}^{a_{p}}\label{syma}
\end{equation}
allows us to rewrite the Casimir operators in their usual form
as central elements in the enveloping algebra of $\frak{g}$,
after replacing the variables $x_{i}$ by the corresponding
generator $X_{i}$. A maximal set of functionally
independent invariants is usually called a fundamental basis. The
number $\mathcal{N}(\frak{g})$ of functionally independent
solutions of (\ref{sys}) is obtained from the classical criteria
for differential equations, and is given by the formula
\begin{equation}
\mathcal{N}(\frak{g}):=\dim \,\frak{g}- {\rm
sup}_{x_{1},..,x_{n}}{\rm rank}\left( C_{ij}^{k}x_{k}\right),
\label{BB}
\end{equation}
where $A(\frak{g}):=\left(C_{ij}^{k}x_{k}\right)$ is the matrix
associated to the commutator table of $\frak{g}$ over the given
basis \cite{Be}.\newline
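\medskip
\noindent As an elementary illustration of formula (\ref{BB}) (added here for convenience; the choice of algebra and basis is ours), the following sketch in Python (sympy) computes $\mathcal{N}(\frak{g})$ for $\frak{g}=\frak{sl}(2,\mathbb{R})$ with basis $\left\{D,H,C\right\}$ and brackets $[D,H]=2H$, $[D,C]=-2C$, $[C,H]=D$, and checks that the function $F=d^2-4ch$ solves the system (\ref{sys}):
\begin{verbatim}
import sympy as sp

dd, h, c = sp.symbols('d h c')      # coordinates on the dual space of sl(2,R)
# Matrix A(g)_ij = C_ij^k x_k for the ordered basis (D, H, C) with
# [D,H] = 2H, [D,C] = -2C, [C,H] = D.
A = sp.Matrix([[0,    2*h, -2*c],
               [-2*h, 0,   -dd ],
               [2*c,  dd,   0  ]])
print(A.shape[0] - A.rank())        # N(g) = 3 - 2 = 1 independent invariant
F = dd**2 - 4*c*h                   # candidate invariant
grad = sp.Matrix([sp.diff(F, v) for v in (dd, h, c)])
print((A*grad).expand())            # zero vector: F solves the system (sys)
\end{verbatim}
\medskip
\noindent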
The reformulation of condition (\ref{BB}) in terms of differential forms (see e.g. \cite{C43})
allows one to compute $\mathcal{N}(\frak{g})$ quite efficiently and even to
obtain the Casimir
operators under special circumstances \cite{Peci,C72}. In terms of the
Maurer-Cartan equations, the Lie algebra $\frak{g}$
is described as follows: If $\left\{ C_{ij}
^{k}\right\} $ denotes the structure tensor over the basis $\left\{ X_{1},..,X_{n}\right\} $,
the identification of the dual space $\frak{g}^{\ast}$ with the
left-invariant 1-forms on the simply connected Lie group whose Lie algebra is isomorphic to $\frak{g}$ allows one to define an exterior
differential $d$ on $\frak{g}^{\ast}$ by
\begin{equation}
d\omega\left( X_{i},X_{j}\right) =-C_{ij}^{k}\omega\left(
X_{k}\right) ,\;\omega\in\frak{g}^{\ast}.\label{MCG}
\end{equation}
Using the coboundary operator $d$, we rewrite $\frak{g}$ as a
closed system of $2$-forms%
\begin{equation}
d\omega_{k}=-C_{ij}^{k}\omega_{i}\wedge\omega_{j},\;1\leq
i<j\leq\dim\left( \frak{g}\right) ,\label{MC2}
\end{equation}
called the Maurer-Cartan equations of $\frak{g}$.
In order to reformulate equation (\ref{BB}) in this context, we consider the linear subspace
$\mathcal{L}(\frak{g})=\mathbb{R}\left\{ d\omega_{i}\right\}
_{1\leq i\leq \dim\frak{g}}$ of $\bigwedge^{2}\frak{g}^{\ast}$
generated by the $2$-forms $d\omega_{i}$. Now, for
a generic element $\omega=a^{i}d\omega_{i}\,\;\left(
a^{i}\in\mathbb{R}\right) $ of $\mathcal{L}(\frak{g})$ there
exists a positive integer $j_{0}\left( \omega\right)
\in\mathbb{N}$ such that $\bigwedge^{j_{0}\left( \omega\right)
}\omega\neq0$ and $\bigwedge ^{j_{0}\left( \omega\right)
+1}\omega\equiv0$. We define the scalar $j_{0}\left(
\frak{g}\right) $ as the maximal rank of generic elements,
\begin{equation}
j_{0}\left( \frak{g}\right) =\max\left\{ j_{0}\left(
\omega\right) \;|\;\omega\in\mathcal{L}(\frak{g})\right\}.
\label{MCa1}
\end{equation}
As shown in \cite{C43}, this is a scalar invariant of the Lie algebra $\frak{g}$ that
satisfies the relation
\begin{equation}
\mathcal{N}\left( \frak{g}\right) =\dim\frak{g}-2j_{0}\left( \frak{g}%
\right). \label{BB1}
\end{equation}
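\medskip
\noindent As an elementary illustration (added for convenience, and not needed in the sequel), consider the three-dimensional Heisenberg algebra $\frak{h}_{1}$ with the single nontrivial bracket $\left[X_{1},X_{2}\right]=X_{3}$. The Maurer-Cartan equations (\ref{MC2}) read $d\omega_{1}=d\omega_{2}=0$, $d\omega_{3}=-\omega_{1}\wedge\omega_{2}$, so that a generic element of $\mathcal{L}(\frak{h}_{1})$ is $\omega=a^{3}d\omega_{3}$ with $\omega\wedge\omega=0$. Hence $j_{0}\left(\frak{h}_{1}\right)=1$, and formula (\ref{BB1}) gives $\mathcal{N}\left(\frak{h}_{1}\right)=3-2=1$, the only invariant being the central generator $X_{3}$.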
\medskip
\section{Virtual copies of semisimple Lie algebras}
\noindent The method of virtual copies, essentially developed in \cite{SL3}, constitutes a natural generalization
of a method due to Ch. Quesne (see \cite{Que}) that combines the boson formalism and enveloping algebras of Lie algebras in
order to compute Casimir operators of semidirect products
$\frak{s}\overrightarrow {\frak{\oplus}}_{R}\frak{r}$ of simple Lie algebras $\frak{s}$ and solvable algebras $\mathfrak{r}$.
\medskip
\noindent We briefly recall the procedure, the details of which can be found in \cite{SL3}: Let $\frak{g}$ be
a non-semisimple Lie algebra admitting the Levi decomposition
$\frak{g}=\frak{s}\overrightarrow{\frak{\oplus}}_{\Gamma}\frak{r}$,
where $\frak{s}$ denotes the Levi subalgebra, $\Gamma$ the characteristic representation and $\frak{r}$ the
radical, i.e., the maximal solvable
ideal of $\frak{g}$. Let $\left\{
X_{1},..,X_{n},Y_{1},..,Y_{m}\right\} $ be a basis such that
$\left\{ X_{1},..,X_{n}\right\} $ spans $\frak{s}$ and $\left\{
Y_{1},..,Y_{m}\right\} $ spans $\frak{r}$. We further suppose
that the structure tensor in $\frak{s}$ is given by
\begin{equation}
\left[ X_{i},X_{j}\right] =C_{ij}^{k}X_{k}.\label{ST}
\end{equation}
We now define operators $X_{i}^{\prime}$ in the enveloping algebra
of $\frak{g}$ by means of
\begin{equation}
X_{i}^{\prime}=X_{i}\,f\left( Y_{1},..,Y_{m}\right) +P_{i}\left(
Y_{1},..,Y_{m}\right) ,\label{OP1}%
\end{equation}
where $P_{i}$ is a homogeneous polynomial of degree $k$ and $f$ is
homogeneous of degree $k-1$. We require
the constraints
\begin{eqnarray}
\left[ X_{i}^{\prime},Y_{k}\right] & =0,\label{Bed1}\\
\left[ X_{i}^{\prime},X_{j}\right] & =\left[
X_{i},X_{j}\right] ^{\prime}:=C_{ij}^{k}\left(
X_{k}f+P_{k}\right).\label{Bed2}
\end{eqnarray}
to be satisfied for all generators. This leads to
conditions on $f$ and $P_{i}$. It can be shown that condition (\ref{Bed1}) leads to
\begin{equation}
\left[ X_{i}^{\prime},Y_{j}\right] =\left[ X_{i}f,Y_{j}\right] +\left[ P_{i}%
,Y_{j}\right] =X_{i}\left[ f,Y_{j}\right] +\left[
X_{i},Y_{j}\right]
\,f+\left[ P_{i},Y_{j}\right] .\label{Eq1}%
\end{equation}
By homogeneity, we can reorder the terms according to
their degree, so that $X_{i}\left[ f,Y_{j}\right] $ is
homogeneous of degree $k-1$ in the variables $\left\{
Y_{1},..,Y_{m}\right\} $ and $\left[ X_{i},Y_{j}\right]
\,f+\left[ P_{i},Y_{j}\right] $ of degree $k$. Hence the conditions
\begin{eqnarray}
\left[ f,Y_{j}\right] =0,\;
\left[ X_{i},Y_{j}\right] \,f+\left[ P_{i},Y_{j}\right]
=0\label{Eq1A}
\end{eqnarray}
are satisfied, showing that $f$ is a Casimir operator
of the radical $\frak{r}$. Expanding the condition (\ref{Bed2}) and taking into
account the homogeneity degrees, after a routine computation we find that the system
\begin{eqnarray}
\left[ X_{i},X_{j}\right] \,f-X_{i}\left[ X_{j},f\right] =C_{ij}
^{k}X_{k}f,\quad
\left[ P_{i},X_{j}\right] =C_{ij}^{k}P_{k}\label{Eq3}
\end{eqnarray}
is satisfied for any indices $i,j$. Using now
(\ref{ST}), the first identity reduces to
\begin{equation}
X_{i}\left[ X_{j},f\right] =0.
\end{equation}
From this we conclude that the function $f$ is a Casimir operator of $\frak{g}$ that depends
only on the variables of the radical $\frak{r}$. The second
identity in (\ref{Eq3}) implies that $P_{i}$ transforms under the
generators $X_{j}$ like a generator of the semisimple part
$\frak{s}$. Taken together, it follows that the operators
$X_{i}^{\prime}$ fulfill the condition
\begin{eqnarray}
\left[ X_{i}^{\prime},X_{j}^{\prime}\right] & =f\left[ X_{i},X_{j}\right] ^{\prime}.
\end{eqnarray}
We shall say that the operators $X_{i}^{\prime}$
generate a virtual copy of $\frak{s}$ in the enveloping algebra of
$\frak{g}$. If $f$ can be
identified with a central element of $\mathfrak{g}$, as happens for a radical isomorphic to a Heisenberg
algebra, the virtual copy actually generates a copy in
$\mathcal{U}\left( \frak{g}\right) $ \cite{Que,C23}. The computation of the invariants of $\mathfrak{g}$ reduces to application of the following result proved in \cite{SL3}:
\begin{theorem}
Let $\frak{s}$ be the Levi subalgebra of $\frak{g}$ and
let $X_{i}^{\prime}=X_{i}\,f\left( \mathbf{Y}\right)
+P_{i}\left(\mathbf{Y}\right) $ be homogeneous polynomials in the
generators of $\frak{g}$ satisfying equations (\ref{Eq1A}) and
(\ref{Eq3}). If $C=\sum\alpha ^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$
is a Casimir operator of $\frak{s}$ having degree $p$, then
$C^{\prime}=\sum\alpha^{i_{1}..i_{p}}X_{i_{1}}^{\prime
}..X_{i_{p}}^{\prime}$ is a Casimir operator of $\frak{g}$ of
degree $(\deg f+1)p$. In particular, $\mathcal{N}\left( \frak{g}\right)
\geq\mathcal{N}\left( \frak{s}\right) +1.$
\end{theorem}
\medskip
\noindent
The independence of the invariants obtained in this manner follows at once from the
conditions (\ref{Bed1}) and (\ref{Bed2}). For the particular case of a radical isomorphic to a
Heisenberg Lie algebra, it follows that the number of non-central invariants is given by the rank
of the semisimple part, i.e., $\mathcal{N}\left( \frak{g}\right)
=\mathcal{N}\left( \frak{s}\right) +1$ (see \cite{C45} for a proof).
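\medskip
\noindent As an elementary illustration of the procedure (added here for convenience; the normalization of the brackets below is a choice made for this illustration), consider the centrally extended Schr\"odinger algebra $\widehat{\mathcal{S}}(1)$ with basis $\left\{D,H,C,P_{0},P_{1},M\right\}$, Levi subalgebra $\frak{s}=\frak{sl}(2,\mathbb{R})=\left\{D,H,C\right\}$ and nonvanishing brackets $[D,H]=2H$, $[D,C]=-2C$, $[C,H]=D$, $[H,P_{1}]=-P_{0}$, $[D,P_{0}]=P_{0}$, $[D,P_{1}]=-P_{1}$, $[C,P_{0}]=P_{1}$, $[P_{0},P_{1}]=-M$. Taking $f=M$, a routine computation shows that the operators
\begin{equation*}
D^{\prime}=D\,M-\frac{1}{2}\left(P_{0}P_{1}+P_{1}P_{0}\right),\quad H^{\prime}=H\,M-\frac{1}{2}P_{0}^{2},\quad C^{\prime}=C\,M-\frac{1}{2}P_{1}^{2}
\end{equation*}
satisfy the conditions (\ref{Bed1}) and (\ref{Bed2}), hence generate a copy of $\frak{sl}(2,\mathbb{R})$ in the enveloping algebra, and the theorem provides the fourth-order Casimir operator $D^{\prime 2}-2\left(C^{\prime}H^{\prime}+H^{\prime}C^{\prime}\right)$ of $\widehat{\mathcal{S}}(1)$, besides the central generator $M$.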
\section{The conformal generalized pseudo-Galilean Lie algebra $\mathfrak{Gal}_{\ell}(p,q)$ }
\noindent Structurally, the conformal Galilean algebra $\mathfrak{\widehat{g}}_\ell(d)$ is a semidirect product of the semisimple Lie algebra $\mathfrak{s}=\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(d)$ and a Heisenberg Lie algebra of dimension $N= d(2\ell+1)+1$. The action of $\mathfrak{s}$ over the radical is given by the characteristic representation
$\widehat{\Gamma}=\left(D_{\ell}\otimes \rho_d\right)\oplus \Gamma_0$, where $D_{\ell}$ denotes the irreducible representation of $\mathfrak{sl}(2,\mathbb{R})$ with highest weight $2\ell$ and dimension $2\ell+1$, $\rho_d$ is the defining $d$-dimensional representation of $\mathfrak{so}(d)$ and $\Gamma_0$ denotes the trivial representation.
\noindent
Considering the basis (see e.g \cite{Als17}) given by the generators $\left\{ H, D, C, E_{ij}=-E_{ji}, P_{n,i}\right\}$
with $n = 0,1, 2, \ldots, 2\ell; \, i,j =1, 2, \ldots, d$,
the commutators are
\begin{eqnarray}
\fl [D,H]= 2H,\quad [D,C]= -2C,\quad [C, H]=D, \nonumber\\
\fl[H,P_{n,i}]=-nP_ {n-1,i},\,\,[D,P_{n,i}]=2(\ell - n)P_ {n,i}, \,\,[C,P_{n,i}]=(2\ell - n)P_{n+1,i},\nonumber \\
\fl[E_{ij}, P_{n,k} ]=\delta_{ik}P_{n,j} - \delta_{jk}P_{n,i}, \,\,
[E_{ij}, E_{k\ell}] =\delta_{ik}E_{j\ell} + \delta_{j \ell}E_{ik} - \delta_{i \ell}E_{jk} - \delta_{jk}E_{i \ell} \nonumber\\
\fl \left[ P_{m,i},P_{n,j}\right]= \delta _{ij} \delta _{m+n,2\ell} I_{m} M ,\quad\qquad I_{m}= (-1)^{m+\ell+1/2} (2\ell -m)! \,\ m! \label{CG3}
\end{eqnarray}
\noindent The invariants can be deduced from the Casimir operators of the semisimple subalgebra of $\mathfrak{g}$ by replacing the generators by expressions of the type
(\ref{OP1}) that generate a virtual copy of $\mathfrak{s}$. For the case of the conformal generalized Galilean algebra $\widehat{\mathfrak{g}}_{\ell}(d)$, these invariants have recently been given implicitly in \cite{raub}, essentially by applying this method, although the procedure used there is referred to as a ``disentanglement'' of the generators.
\subsection{The conformal generalized pseudo-Galilean algebra}
\noindent Introducing a non-degenerate metric tensor of signature $(p,q)$, the structure of conformal Galilean algebras can be easily extended to the pseudo-orthogonal Lie algebras $\mathfrak{so}(p,q)$ ($p+q=d$) along the same lines.
The pseudo-orthogonal algebra $\frak{so}(p,q)$ with
$d=p+q$ is given by the $\frac{1}{2}d(d-1)$ operators
$E_{\mu\nu}=-E_{\nu\mu}$ satisfying:
\begin{eqnarray*}
\left[ E_{\mu \nu },E_{\lambda \sigma }\right] &=&g_{\mu \lambda
}E_{\nu \sigma }+g_{\mu \sigma }E_{\lambda \nu }-g_{\nu \lambda
}E_{\mu \sigma
}-g_{\nu \sigma }E_{\lambda \mu } \\
\left[ E_{\mu \nu },P_{\rho }\right] &=&g_{\mu \rho }P_{\nu
}-g_{\nu \rho }P_{\mu },
\end{eqnarray*}
where $g={\rm diag}\left( 1,..,1,-1,..,-1\right)$ is the matrix of the non-degenerate metric. Let $\rho_1$ be the $d$-dimensional defining representation of $\frak{so}(p,q)$ and define the tensor product $\Gamma=D_{\ell}\otimes \rho_1$ for $\ell\in\mathbb{Z}+\frac{1}{2}$. Then $\Gamma$ is an irreducible representation of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)$ that satisfies the condition $\Gamma_0\subset \Gamma\wedge \Gamma $, i.e., the wedge product of $\Gamma$ contains a copy of the trivial representation. Following the characterization given in \cite{C45}, this implies that the Lie algebra $\left(\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)\right)\overrightarrow{\oplus}_{\Gamma\oplus\Gamma_0}\mathfrak{h}_{N}$
with $N=d(2\ell+1)$ is well defined. Over the basis $\left\{ H, D, C, E_{ij}=-E_{ji}, P_{n,i},M\right\}$ with $0\leq n \leq 2\ell$ and $1\leq i<j\leq p+q$, the brackets are given by
\begin{eqnarray}
[D,H]= 2H,\quad [D,C]= -2C,\quad [C, H]=D, \nonumber\\
\,\,[H,P_{n,i}]=-nP_{n-1,i},\,\,[D,P_{n,i}]=2(\ell - n)P_{n,i}, \,\,[C,P_{n,i}]=(2\ell - n)P_{n+1,i},\label{CG4} \\
\,\,[E_{ij}, P_{n,k} ]=g_{ik}P_{n,j} - g_{jk}P_{n,i}, \,\,
[E_{ij}, E_{k\ell}] =g_{ik}E_{j\ell} + g_{j \ell}E_{ik} - g_{i \ell}E_{jk} - g_{jk}E_{i \ell}, \nonumber\\
\left[ P_{n,k},P_{m,l}\right]= g_{kl} \delta _{m+n,2\ell} I_{m} M ,\quad\qquad I_{m}= (-1)^{m+\ell+1/2} (2\ell -m)! \,\ m!.\nonumber
\end{eqnarray}
\noindent As commented above, the number of Casimir operators is given by $2+\left[\frac{d}{2}\right]$ and can be deduced in closed form by
means of the virtual copy method.
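\medskip
\noindent As a quick consistency check (added here for convenience), this count can be verified for the lowest-dimensional pseudo-orthogonal case directly from formula (\ref{BB}). The following sketch in Python (sympy) builds the commutator matrix of $\mathfrak{Gal}_{\frac{1}{2}}(1,1)$, i.e.\ $\ell=\frac{1}{2}$, $d=2$ with $g={\rm diag}(1,-1)$, using the normalization $I_{0}=-1$, $I_{1}=1$ of the brackets above:
\begin{verbatim}
import sympy as sp

# Dual coordinates for the ordered basis
# (H, D, C, E_12, P_{0,1}, P_{0,2}, P_{1,1}, P_{1,2}, M).
h, d_, c, e, p01, p02, p11, p12, m = sp.symbols('h d c e p01 p02 p11 p12 m')

# Commutator matrix A_ij = C_ij^k x_k obtained from the brackets.
A = sp.Matrix([
    [0,    -2*h, -d_,  0,    0,    0,   -p01, -p02, 0],   # H
    [2*h,   0,  -2*c,  0,    p01,  p02, -p11, -p12, 0],   # D
    [d_,    2*c,  0,   0,    p11,  p12,  0,    0,   0],   # C
    [0,     0,    0,   0,    p02,  p01,  p12,  p11, 0],   # E_12
    [0,    -p01, -p11, -p02, 0,    0,   -m,    0,   0],   # P_{0,1}
    [0,    -p02, -p12, -p01, 0,    0,    0,    m,   0],   # P_{0,2}
    [p01,   p11,  0,   -p12, m,    0,    0,    0,   0],   # P_{1,1}
    [p02,   p12,  0,   -p11, 0,   -m,    0,    0,   0],   # P_{1,2}
    [0,     0,    0,   0,    0,    0,    0,    0,   0],   # M (central)
])
print(A.shape[0] - A.rank())   # number of invariants; expected 3 = 2 + [d/2]
\end{verbatim}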
\begin{proposition}
For any $\ell\in\mathbb{Z}+\frac{1}{2}\geq \frac{1}{2}$, the operators%
\begin{eqnarray}
\fl \widetilde{D}=D\,M+\sum_{i=1}^{d}\sum_{s=0}^{q}\left( -1\right) ^{s+q-1}\frac{\mu
^{1}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2\ell-s,i},\nonumber \\
\fl \widetilde{H}=H\,M+\sum_{i=1}^{d}\sum_{s=0}^{q-1}\left( -1\right) ^{s+q-1}\frac{\mu
^{2}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2\ell-1-s,i}-\sum_{i=1}^{d}\frac{1}{2\Gamma(q+1)^2g_{ii}} P_{q,i}^2,\nonumber \\
\fl\widetilde{C}=C\,M+\sum_{i=1}^{d}\sum_{s=0}^{q}\left( -1\right) ^{s+q}\frac{\mu
^{3}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2\ell+1-s,i}-\sum_{i=1}^{d}\frac{1}{2\Gamma(q+1)^2g_{ii}} P_{q+1,i}^2,\nonumber \\
\fl \widetilde{E}_{i,j}=M E_{i,j}+ \sum_{s=0}^{\ell}\frac{(-1)^{\frac{2\ell-1}{2}+s}}{s!\; (2\ell-s)!}\left(P_{s,i}P_{2\ell-s,j}-P_{s,j}P_{2\ell-s,i}\right),\; 1\leq i<j\leq d,
\label{NE3}
\end{eqnarray}
with coefficients defined by
\begin{eqnarray}
\fl \mu ^{1}\left( s,q\right) =2^{\frac{s-2}{2}}\left( 1+\sqrt{2}+\left( -1\right)
^{s}\left( \sqrt{2}-1\right) \right) \prod_{a=0}^{\left[ \frac{s+1}{2}\right]
-1}\left( q-\left[ \frac{s}{2}\right] -a\right) \prod_{b=s+1-\left[ \frac{s}{%
2}\right] }^{s}\left( 2q+3-2b\right) ,\nonumber\\
\fl \mu ^{2}\left( s,q\right) =\frac{1}{s!\; \Gamma(2q+1-s)},\quad \mu ^{3}\left( s,q\right) =\frac{1}{(s-1)!\; \Gamma(2q+2-s)}
\end{eqnarray}
generate a (virtual) copy of $\mathfrak{sl}(2,\mathbb{R})\oplus\frak{so}\left(p,q\right) $ in the
enveloping algebra of $\mathfrak{Gal}_{\ell}(p,q)$.
\end{proposition}
\noindent The proof, albeit long and computationally cumbersome, is completely straightforward and reduces to a direct verification of the
conditions (\ref{Bed1}) and (\ref{Bed2}) with the choice $f=M$, taking into account the following relations between the generators and quadratic products:
\begin{eqnarray*}
\left[D,P_{n,i}P_{m,j}\right]=2\left(2\ell-m-n\right)\;P_{n,i}P_{m,j},\\
\left[H,P_{n,i}P_{m,j}\right]=-\left(n P_{n-1,i}P_{m,j}+m P_{n,i}P_{m-1,j}\right),\\
\left[C,P_{n,i}P_{m,j}\right]=(2\ell-n)P_{n+1,i}P_{m,j}+(2\ell-m)P_{n,i}P_{m+1,j},\\
\left[M E_{i,j},M E_{k,l}\right]=M^2\left(g_{i,k}E_{j,l}+g_{j,l}E_{i,k}-g_{i,l}E_{j,k}-g_{j,k}E_{i,l}\right),\\
\left[E_{i,j},P_{n,k}P_{m,l}\right]=-\left(g_{i,k}P_{n,j}P_{m,l}-g_{j,l}P_{n,k}P_{m,i}+g_{i,l}P_{n,k}P_{m,j}-g_{j,k}P_{n,i}P_{m,l}\right) ,\\
\left[P_{n,i}P_{m,j},P_{q,k}\right]=-I_{q}M\left(g_{i,k}\delta_{n+q}^{2\ell}P_{m,j}+g_{j,k}\delta_{m+q}^{2\ell}P_{n,i}\right).\\
\end{eqnarray*}
In particular, for the metric tensor $g_{ii}=1$ corresponding to the compact orthogonal algebra $\mathfrak{so}(d)$, we obtain a realization equivalent to
the ``disentanglement'' conditions given in \cite{raub}.
\subsection{Explicit formulae for the Casimir operators of $\mathfrak{Gal}_{\ell}(p,q)$}
\noindent Once the (virtual) copy of the Levi subalgebra of $\mathfrak{Gal}_{\ell}(p,q)$ is found, explicit expressions for the Casimir operators can be immediately deduced, in their
unsymmetrized analytic form, by means of the well-known trace methods (see e.g. \cite{Ra,Gel,Gr64,Per,Po66,Ok77,Mac}). To this end, let $\left\{d,h,c,e_{i,j},p_{n,k}\right\}$
be the coordinates in $\mathfrak{Gal}_{\ell}(p,q)^{\ast}$ and let $\left\{\widehat{d},\widehat{h},\widehat{c},\widehat{e}_{i,j},\widehat{p}_{n,k}\right\}$ denote the analytical counterparts of the operators in (\ref{NE3}). As the simple subalgebras $\mathfrak{sl}(2,\mathbb{R})$ and $\mathfrak{so}(p,q)$ commute, it follows at once that any invariant of $\mathfrak{Gal}_{\ell}(p,q)$ must also be an invariant of the subalgebra $\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N$. Semidirect products of $\mathfrak{sl}(2,\mathbb{R})$ and a Heisenberg
Lie algebra are well known to possess only one Casimir operator besides the central generator \cite{C45}, the analytic expression of which is given by
\begin{equation}
C^{\prime}_{4}= \widehat{d}^2-4\widehat{c}\widehat{h}\label{ins}
\end{equation}
This invariant can also be described as a determinant as follows (see e.g. \cite{C23}): Let $B=\left\{X_{1},\ldots ,X_{4+(2\ell+1)d}\right\}$ be a basis of
$\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$ such that $\left\{D,H,C\right\}=\left\{X_1,X_2,X_3\right\}$ and such that the element
$X_{2\ell(k-1)+k+2+s}$ corresponds to the generator $P_{s,k}$ for $1\leq k\leq d,\; 1\leq s\leq 2\ell+1$. The commutators of $\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$ are then described in a uniform manner by
\begin{equation}
\left[X_i,X_j\right]= C_{ij}^{k}X_{k},\; 1\leq i<j,k\leq (d+1)(2\ell+1).
\end{equation}
Let $B^{\ast}=\left\{x_{1},\ldots ,x_{4+(2\ell+1)d}\right\}$ be the dual basis of $B$ and define the polynomial matrix $A$ of order $4 + (2 \ell + 1) d$, the entries of which are given by
\begin{eqnarray}
A_{i,j}= C_{ij}^{k} x_{k},\quad 1\leq i,j\leq 3 + (2 \ell + 1) d,\nonumber\\
A_{i,4 + (2 \ell + 1) d}= -A_{4 + (2 \ell + 1) d,i}=x_{i},\quad 1\leq i\leq 3,\label{maxa}\\
A_{j,4 + (2 \ell + 1) d}=-A_{4 + (2 \ell + 1) d,j}=\frac{1}{2}x_j,\quad j\geq 4.\nonumber
\end{eqnarray}
It follows from the analysis in \cite{C23} that the determinant $\det{A}$ provides the non-central Casimir invariant of the Lie algebra $\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$ . Comparing the result with that deduced from (\ref{ins}) using the copy in the enveloping algebra, we have the relation
\begin{equation}
\det(A)=\prod_{s=1}^{d}\prod_{m=0}^{2\ell}\left(2\ell-m\right)!m!\;M^{2\ell d+d-4}\left(C_{4}^{\prime}\right)^2.\label{insa}
\end{equation}
\medskip
\noindent Similarly, we can consider the invariants of $\mathfrak{Gal}_{\ell}(p,q)$ that are simultaneously invariants of the subalgebra $\mathfrak{so}(p,q)\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N$ with $d=p+q$.
For the pseudo-orthogonal Lie algebra $\frak{so}(p,q)$, a maximal set of Casimir operators is well known to be given
by the coefficients $C_{k}$ of the characteristic polynomial $P(T)$ of the matrix
\begin{equation}
B_{p,q}:=\left(
\begin{array}{ccccc}
0 & .. & -g_{jj}e_{1j} & .. & -g_{dd}e_{1,d} \\
: & & : & & : \\
g_{11}e_{1j} & .. & 0 & .. & -g_{dd}e_{j,d} \\
: & & : & & : \\
g_{11}e_{1,d} & .. & g_{jj}e_{j,d} & .. & 0
\end{array}
\end{array}
\right)\label{MA2}
\end{equation}
\noindent The same formula, replacing the generators $e_{i,j}$ by those $\widetilde{e}_{i,j}$ of the virtual copy will provide us with the invariants of $\mathfrak{Gal}_{\ell}(p,q)$ that only depend on the generators of $\frak{so}(p,q)$ and the characteristic representation $\Gamma$.
\begin{proposition}
A maximal set of $\left[\frac{d}{2}\right]$ independent Casimir operators of $\mathfrak{Gal}_{\ell}(p,q)$ depending only on the generators of $\frak{so}(p,q)$ and the $\left\{P_{0,i},\cdots ,P_{2\ell,i}\right\}$ with $1\leq i\leq p+q=d$
is given by the coefficients $\widetilde{C}_{k}$ of the polynomial $P(T)$ defined by
\begin{equation}
P(T):=\det \left( B_{p,q}-T\;\mathrm{Id}_{d}\right) , \label{Pol1}
\end{equation}
where
\begin{equation}
B_{p,q}:=\left(
\begin{array}{ccccc}
0 & .. & -g_{jj}\widetilde{e}_{1j} & .. & -g_{dd}\widetilde{e}_{1,d} \\
: & & : & & : \\
g_{11}\widetilde{e}_{1j} & .. & 0 & .. & -g_{dd}\widetilde{e}_{j,d} \\
: & & : & & : \\
g_{11}\widetilde{e}_{1,d} & .. & g_{jj}\widetilde{e}_{j,d} & .. & 0
\end{array}
\end{array}
\right)
\end{equation}
\end{proposition}
The actual symmetric representatives ${\rm Sym}(\widetilde{C}_k)$ of the invariants as elements in the enveloping algebra are obtained from the symmetrization map (\ref{syma}).
\medskip
\noindent It follows that the orders of the $1+\left[\frac{p+q}{2}\right]$ non-central invariants of $\mathfrak{Gal}_{\ell}(p,q)$ are
\begin{itemize}
\item $4,4,8,\cdots ,2(p+q-1)$ if $d=p+q$ is odd,
\item $4,4,8,\cdots ,2(p+q)-4,p+q$ if $d=p+q$ is even.
\end{itemize}
\section{The unextended case}
\noindent As the centre of the Lie algebra $\mathfrak{Gal}_{\ell}(p,q)$ is one-dimensional, the corresponding factor algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q)=\mathfrak{Gal}_{\ell}(p,q)/Z\left(\mathfrak{Gal}_{\ell}(p,q)\right)$ inherits the structure of a semidirect product of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)$ with the Abelian Lie algebra of dimension $d(2\ell+1)$, where the characteristic representation $\Gamma$ is given by $D_{\ell}\otimes \rho_1$. As this Lie algebra contains in particular the affine Lie algebra $\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{D_{\ell}^{d}} \mathbb{R}^{(2\ell d+d)}$ as well as the multiply-inhomogeneous algebra $\mathfrak{so}(p,q)\overrightarrow{\oplus}_{\rho^{2\ell+1}} \mathbb{R}^{(2\ell d+d)}$, it is expected that the number of Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ will be much higher than that of $\mathfrak{Gal}_{\ell}(p,q)$. An exception is given by the special case $\overline{\mathfrak{Gal}}_{\frac{1}{2}}(p,q)$, isomorphic to the unextended Schr\"odinger algebra $\widehat{\mathcal{S}}(p+q)$, for which the number of invariants is given by $\mathcal{N}(\widehat{\mathcal{S}}(p+q))=1+\left[\frac{p+q}{2}\right]$, constituting the only case where the number of (non-central) Casimir operators of the extension is preserved when passing to the factor Lie algebra.
\begin{proposition}\label{pro4}
For any $\ell\in \mathbb{Z}+\frac{1}{2}\geq \frac{1}{2}$ and $p+q=d\geq 3$ the number $\mathcal{N}(\mathfrak{g})$ of Casimir operators of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ is given by
\begin{eqnarray}
\mathcal{N}(\mathfrak{g})=\left\{
\begin{array}[c]{rc}
1+\left[\frac{d}{2}\right], & \ell=\frac{1}{2},\; d\geq 3\\[0.1cm]
\frac{1}{2}\left(4\ell d+3d-d^2-6\right), & \ell\geq \frac{3}{2},\; d\leq 2\ell+2\\[0.1cm]
2\ell^2+2\ell-\frac{5}{2}+\left[\frac{d}{2}\right], & \ell\geq \frac{3}{2},\; d\geq 2\ell+3 \\
\end{array}
\right.
\end{eqnarray}
\end{proposition}
\noindent To prove the assertion, the best strategy is to use the reformulation of the formula (\ref{BB}) in terms of differential forms \cite{C43}.
Let $\left\{\theta_1,\theta_2,\theta_3,\omega_{i,j},\sigma_{n,j}\right\}$ with $1\leq i,j\leq d$, $0\leq n\leq 2\ell$ be a basis of 1-forms dual to the basis $\left\{H,D,C,E_{i,j},P_{n,j}\right\}$ of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. Then the Maurer-Cartan equations are given by
\begin{eqnarray}
d\theta_1=-\theta_2\wedge\theta_3,\quad d\theta_2=2\theta_1\wedge\theta_2,\quad d\theta_3=-2\theta_1\wedge\theta_3,\nonumber\\
d\omega_{i,j}=\sum_{s=1}^{d} g_{ss} \omega_{i,s}\wedge\omega_{j,s},\quad 1\leq i<j\leq d,\nonumber\\
d\sigma_{0,j}=2\ell \theta_1\wedge\sigma_{0,j}-\theta_2\wedge\sigma_{1,j}+\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{0,s},\quad 1\leq j\leq d,\label{MCA}\\
d\sigma_{n,j}=2(\ell-n) \theta_1\wedge\sigma_{n,j}-(n+1)\theta_2\wedge\sigma_{n+1,j}+(2\ell+1-n)\theta_3\wedge\sigma_{n-1,j}\nonumber\\
\quad +\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{n,s},\quad 1\leq n\leq 2\ell-1,\; 1\leq j\leq d,\nonumber\\
d\sigma_{2\ell,j}=-2\ell \theta_1\wedge\sigma_{2\ell,j}+\theta_3\wedge\sigma_{2\ell-1,j}+\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{2\ell,s},\quad 1\leq j\leq d.\nonumber
\end{eqnarray}
We first consider the case $\ell=\frac{1}{2}$ corresponding to the unextended Schr\"{o}dinger algebra. For $%
d\leq 4$ the assertion follows at once considering the 2-form
\begin{equation*}
\Xi _{1}=d\sigma _{0,1}+d\sigma _{1,d},
\end{equation*}
that has rank $5$ for $d=3$ and rank $7$ for $d=4$ respectively. For values $%
d\geq 5$ we define the forms
\begin{equation*}
\Xi _{1}=d\sigma _{0,1}+d\sigma _{1,d},\;\Xi _{2}=\sum_{s=0}^{\alpha
}d\omega _{2+2s,3+2s},\;\alpha =\frac{2d-11-\left( -1\right) ^{d}}{4}.
\end{equation*}
Proceeding by induction, it can be easily shown that the product
\begin{equation*}
\bigwedge^{d+1}d\sigma _{0,1}\bigwedge^{d-2}d\sigma
_{1,d}\bigwedge^{d-4}d\omega _{2,3}\cdots \bigwedge^{d-4-2\alpha }d\omega
_{2+2\alpha ,3+2\alpha }
\end{equation*}
contains all of the 1-forms associated to generators of the Lie algebra $
\widehat{\mathcal{S}}\left( d\right) $ with the following exceptions
\begin{equation}
\theta _{3},\omega _{2,3},\omega _{4,5},\cdots ,\omega _{d-2,d-1},\sigma
_{1,d}. \label{exe}
\end{equation}
Counting the factors in the preceding product, we conclude that
\begin{equation}
2d-1+\sum_{s=0}^{\alpha }\left( d-4-2s\right) =\mu =\frac{1}{4}\left(
d^{2}+3d+4-2\left[ \frac{d}{2}\right] \right) . \label{exec}
\end{equation}
Therefore, taking the 2-form $\Xi =\Xi _{1}+\Xi _{2}$, it is straightforward
to verify that it satisfies
\begin{equation*}
\bigwedge^{\mu }\Xi =\bigwedge^{d+1}d\sigma _{0,1}\bigwedge^{d-2}d\sigma
_{1,d}\bigwedge^{d-4}d\omega _{2,3}\cdots \bigwedge^{d-4-2\alpha }d\omega
_{2+2\alpha ,3+2\alpha }+\cdots \neq 0,
\end{equation*}
showing that
\begin{equation*}
\mathcal{N}\left( \widehat{\mathcal{S}}\left( d\right) \right) =1+\left[ \frac{d}{2}\right] .
\end{equation*}
\noindent This argumentation, with slight modifications, generalizes naturally for any value $\ell\geq \frac{3}{2}$, where it is also necessary to distinguish two cases, depending on whether $d=p+q\leq 2\ell +2$ or $d>2\ell+2$.
\begin{enumerate}
\item Let $d=p+q\leq 2\ell +2$. In this case the dimension of the characteristic representation $\Gamma$ is clearly larger than that of the Levi subalgebra, so that a 2-form of maximal rank can be constructed using only the differential forms associated to the generators $P_{n,k}$. Consider the 2-form in (\ref{MCA}) given by $\Theta=\Theta_1+\Theta_2$, where
\begin{eqnarray}
\Theta_1=d\sigma_{0,1}+d\sigma_{2\ell,d}+d\sigma_{2\ell-1,d-1},\;
\Theta_2=\sum_{s=1}^{d-4} d\sigma_{s,s+1}.\label{difo1}
\end{eqnarray}
Using the decomposition formula $\bigwedge^{a}\Theta=\sum_{r=0}^{a} {a\choose r}\left(\bigwedge^{r}\Theta_1\right) \wedge \left(\bigwedge^{a-r}\Theta_2\right)$ we obtain that
\begin{eqnarray}
\fl \bigwedge^{\frac{1}{2}\left(6-d+d^2\right)}\Theta= &\bigwedge^{d+1}d\sigma_{0,1}\wedge\bigwedge^{d-1}d\sigma_{2\ell,d}\wedge\bigwedge^{d-3}d\sigma_{2\ell-1,d-1}\wedge
\bigwedge^{d-4}d\sigma_{1,2}\wedge\nonumber\\
& \wedge\bigwedge^{d-5}d\sigma_{2,3}\wedge\bigwedge^{d-6}d\sigma_{3,4}\wedge\cdots \bigwedge^{2}d\sigma_{d-5,d-4}\wedge d\sigma_{d-4,d-3}+\cdots \neq 0.\label{pro2}
\end{eqnarray}
As $\frac{1}{2}\left(6-d+d^2\right)=\dim\left(\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)\right)$, the 2-form $\Theta$ is necessarily of maximal rank, as all the generators of the Levi subalgebra appear in some term of the product (\ref{pro2}) and no products of higher rank are possible due to the Abelian nilradical. We therefore conclude that $j_0(\mathfrak{g})=\frac{1}{2}\left(6-d+d^2\right)$ and by formula (\ref{BB1}) we have
\begin{equation}
\mathcal{N}(\mathfrak{g})= \frac{1}{2}\left(4\ell d+3d-d^2-6\right).\label{inva1}
\end{equation}
\item Now let $d \geq 2\ell +3$. The main difference with respect to the previous case is that a generic form $\omega\in\mathcal{L}(\mathfrak{g})$ of maximal rank must necessarily contain linear combinations of the 2-forms $d\omega_{i,j}$ corresponding to the semisimple part of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. Let us consider first the 2-form
\begin{equation}
\Xi_1= \Theta_1+\Theta_2,
\end{equation}
where $\Theta_1$ is the same as in (\ref{difo1}) and $\Theta_2$ is defined as
\begin{equation}
\Theta_2=\sum_{s=0}^{2\ell-3} d\sigma_{1+s,2+s}.
\end{equation}
In analogy with the previous case, for the index $\mu_1=(2\ell+1)d+(\ell+2)(1-2\ell)$ the first term of the following product does not vanish:
\begin{equation}
\fl \bigwedge^{\mu_1}\Xi_1=\bigwedge^{d+1}d\sigma_{0,1}\bigwedge^{d-1}d\sigma_{2\ell,d}\bigwedge^{d-3}d\sigma_{2\ell-1,d-1}
\bigwedge^{d-4}d\sigma_{1,2}\cdots \bigwedge^{d-1-2\ell}d\sigma_{2\ell-2,2\ell-1}+\cdots \neq 0.\label{Pot1}
\end{equation}
This form, although not maximal in $\mathcal{L}(\mathfrak{g})$, is indeed of maximal rank when restricted to the subspace $\mathcal{L}(\mathfrak{r})$ generated by the 2-forms $d\sigma_{n,k}$ with $0\leq n\leq 2\ell$, $1\leq k\leq d$.
This means that the wedge product of $\bigwedge^{\mu_1}\Xi_1$ with any other $d\sigma_{n,k}$ is identically zero. Hence, in order to construct a 2-form of maximal rank in $\mathcal{L}(\mathfrak{g})$, we have to consider a 2-form $\Xi_2$ that is a linear combination of the differential forms associated to the generators of the Levi subalgebra of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. As follows at once from (\ref{Pot1}), the forms $\theta_1,\theta_2,\theta_3$ associated to $\mathfrak{sl}(2,\mathbb{R})$-generators have already appeared, thus it suffices to restrict our analysis to linear combinations of the forms $d\omega_{i,j}$ corresponding to the pseudo-orthogonal Lie algebra $\mathfrak{so}(p,q)$. Specifically, we make the choice
\begin{equation}
\Xi_2= \sum_{s=0}^{\nu}d\omega_{3+2s,4+2s},\quad \nu=\frac{1}{4}\left(2d-4\ell-9+(-1)^{1+d}\right).
\end{equation}
Consider the integer $\mu_2=\frac{1}{4}\left(11+(d-4\ell)(1+d)-4\ell^2-2\left[\frac{d}{2}\right]\right)$ and take the 2-form $\Xi=\Xi_1+\Xi_2$. A long but routine computation shows that the following identity is satisfied:
\begin{eqnarray}
\fl \bigwedge^{\mu_1+\mu_2}\Xi =& \left(\bigwedge^{\mu_1}\Xi_1\right)\wedge \left(\bigwedge^{\mu_2}\Xi_2\right) \nonumber\\
& = \left(\bigwedge^{\mu_1}\Xi_1\right)\wedge\bigwedge^{d-6}d\omega_{3,4}\bigwedge^{d-8}d\omega_{5,6}\cdots \bigwedge^{d-6-2\nu}d\omega_{3+2\nu,4+2\nu}+\cdots \neq 0.\label{pro1}
\end{eqnarray}
We observe that this form involves $\mu_1+2\mu_2$ forms $\omega_{i,j}$ from $\mathfrak{so}(p,q)$, hence there remain $\frac{d(d-1)}{2}-\mu_1-2\mu_2$ elements of the pseudo-orthogonal algebra that do not appear in the first term in (\ref{pro1}). From this product and (\ref{MCA}) it can be seen that these uncovered elements are of the type $\left\{\omega_{i_1,i_1+1},\omega_{i_2,i_2+1},\cdots ,\omega_{i_r,i_r+1}\right\}$ with the subindices satisfying $i_{\alpha+1}-i_{\alpha}\geq 2$ for $1\leq \alpha\leq r$, from which we deduce that no other 2-form $d\omega_{i_\alpha,i_\alpha+1}$, when multiplied with $\bigwedge^{\mu_1+\mu_2}\Xi $, gives a nonzero result.
We conclude that $\Xi$ has maximal rank equal to $j_0(\mathfrak{g})=\mu_1+\mu_2$, thus applying (\ref{BB1}) we find that
\begin{equation}
\fl \mathcal{N}(\mathfrak{g})= 3 + \frac{d(d-1)}{2}+ (2 \ell + 1) d-2(\mu_1+\mu_2)= 2\ell^2+2\ell-\frac{5}{2}+\left[\frac{d}{2}\right],
\end{equation}
as asserted.
\end{enumerate}
\medskip
\noindent In Table \ref{Tabelle1} we give the numerical values for the number of Casimir operators of the Lie algebras $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ with $d=p+q\leq 12$, where the linear growth with respect to $\ell$ for fixed $d$ (in the regime $d\leq 2\ell+2$) can be easily recognized.
\smallskip
\begin{table}[h!]
\caption{\label{Tabelle1} Number of Casimir operators for $\overline{\mathfrak{Gal}}_{\ell}(p,q)$.}
\begin{indented}\item[]
\begin{tabular}{c||cccccccccc}
$\;d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\\hline
{$\ell=\frac{1}{2}$} & $2$ & $3$ & $3$ & $4$ & $4$ & $5$ & $5$
& $6$ & $6$ & $7$ \\
{$\ell=\frac{3}{2}$} & $6$ & $7$ & $7$ & $8$ & $8$ & $9$ & $9$
& $10$ & $10$ & $11$ \\
{$\ell=\frac{5}{2}$} & $12$ & $15$ & $17$ & $18$ & $18$ & $19$
& $19$ & $20$ & $20$ & $21$ \\
{$\ell=\frac{7}{2}$} & $18$ & $23$ & $27$ & $30$ & $32$ & $33$
& $33$ & $34$ & $34$ & $35$ \\
{$\ell=\frac{9}{2}$} & $24$ & $31$ & $37$ & $42$ & $46$ & $49$
& $51$ & $52$ & $52$ & $53$ \\
{$\ell=\frac{11}{2}$} & $30$ & $39$ & $47$ & $54$ & $60$ & $65
$ & $69$ & $72$ & $74$ & $75$%
\end{tabular}
\end{indented}
\end{table}
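\medskip
\noindent The values in Table \ref{Tabelle1} can be reproduced directly by evaluating the formula of Proposition \ref{pro4}; a minimal sketch in Python, added here only for convenience, is the following:
\begin{verbatim}
from fractions import Fraction

def N_invariants(two_ell, d):
    # Number of Casimir operators of the unextended algebra (formula of
    # the proposition above); ell is passed as two_ell = 2*ell, d = p + q.
    ell = Fraction(two_ell, 2)
    if two_ell == 1:                                   # ell = 1/2
        return 1 + d // 2
    if d <= 2*ell + 2:
        return Fraction(4*ell*d + 3*d - d*d - 6, 2)
    return 2*ell**2 + 2*ell - Fraction(5, 2) + d // 2

# e.g. N_invariants(5, 5) == 17 and N_invariants(7, 10) == 34, as in Table 1
for two_ell in (1, 3, 5, 7, 9, 11):
    print([int(N_invariants(two_ell, d)) for d in range(3, 13)])
\end{verbatim}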
\medskip
\noindent As follows from a general property concerning virtual copies \cite{C45}, Lie algebras of the type $\mathfrak{g}=\mathfrak{s}\overrightarrow{\oplus} \mathfrak{r}$ with an Abelian radical $\mathfrak{r}$ do not admit virtual copies of $\mathfrak{s}$ in $\mathcal{U}\left(\mathfrak{g}\right)$. Thus for Lie algebras of this type the Casimir invariants must be computed either directly from system (\ref{sys}) or by some other procedure. Among the class $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, an exception is given by the unextended (pseudo-)Schr\"odinger algebra $\overline{\mathfrak{Gal}}_{\frac{1}{2}}(p,q)\simeq \widehat{\mathcal{S}}(p,q)$, where the invariants can be deduced from those of the central extension $\mathfrak{Gal}_{\frac{1}{2}}(p,q)$ by the widely used method of contractions (see e.g. \cite{IW,We}). For the remaining values $\ell\geq \frac{3}{2}$ the contraction procedure is useless in practice, given the high number of invariants. However, an interesting property concerning the invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ emerges when we try to find the Casimir operators $F$ that only depend on variables $p_{n,k}$ associated to generators $P_{n,k}$ of the radical, i.e., such that the condition
\begin{equation}
\quad \frac{\partial F}{\partial x}=0,\quad \forall x\in\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q).\label{kond}
\end{equation}
is satisfied. As will be shown next, the number of such solutions tends to stabilize for high values of $d=p+q$, showing that almost any invariant will depend on all of the variables in $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, implying that finding a complete set of invariants is a computationally formidable task, as there is currently no general method to derive these invariants in closed form.
\begin{proposition}
Let $\ell\geq \frac{3}{2}$. For sufficiently large $d$, the number of Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ depending only on the variables $p_{n,k}$ of the Abelian radical is constant and given by
\begin{equation}
\mathcal{N}_1(S)=2\ell^2+3\ell-2.\label{sr2}
\end{equation}
\end{proposition}
\noindent The proof follows analyzing the rank of the subsystem of (\ref{sys}) corresponding to the differential operators $\widehat{X}$ associated to the generators of the Levi subalgebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ and such that condition (\ref{kond}) is fulfilled. Specifically, this leads to the system $S$ of PDEs
\begin{eqnarray}
\widehat{D}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} 2(\ell-n)p_{n,i}\frac{\partial F}{\partial p_{n,i}}=0,\;
\widehat{H}^{\prime}(F):=-\sum_{n=0}^{2\ell}\sum_{i=1}^{d} n p_{n-1,i}\frac{\partial F}{\partial p_{n,i}}=0,\nonumber\\
\widehat{C}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} (2\ell-n)p_{n+1,i}\frac{\partial F}{\partial p_{n,i}}=0,\label{kond2}\\
\widehat{E}_{j,k}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} \left( g_{ij} p_{n,k} -g_{ik} p_{n,j}\right) \frac{\partial F}{\partial p_{n,i}}=0,\quad 1\leq j<k\leq d.\nonumber
\end{eqnarray}
This system consists of $\frac{1}{2}\left(6-d+d^2\right)$ equations in $(2\ell+1)d$ variables and becomes overdetermined for increasing values of $d$ (and fixed $\ell$). In Table \ref{Tabelle2} the rank of such systems is given for values $d\leq 15$, showing that for fixed $\ell$, from $d\geq 2\ell+1$ onwards, the rank of the system always increases by the same constant amount, namely $2\ell+1$.
\begin{table}[h!]
\caption{\label{Tabelle2} Rank of system (\ref{kond2}).}
\begin{indented}\item[]
\begin{tabular}{c||ccccccccccccc}
$d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$& $14$& $15$ \\ \hline
$\ell =\frac{3}{2}$ & 6 & 9 & 13 & 17 & 21 & 25 & 29 & 33 & 37 & 41 & 45 & 49 & 53\\
$\ell =\frac{5}{2}$ & 6 & 9 & 13 & 18 & 24 & 30 & 36 & 42 & 48 & 54 & 60 & 66 & 72\\
$\ell =\frac{7}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 47 & 55 & 63 & 71 & 79
& 87 \\
$\ell =\frac{9}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 68 & 78 & 88
& 98 \\
$\ell =\frac{11}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 69 & 81 &
93 & 105 \\
$\ell =\frac{13}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 69 & 81 & 94 & 108
\end{tabular}
\end{indented}
\end{table}
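\medskip
\noindent The ranks collected in Table \ref{Tabelle2} can be checked computationally; the following sketch in Python (sympy), added here for convenience, evaluates the coefficient matrix of the vector fields in (\ref{kond2}) at a fixed rational point (which generically yields the maximal rank):
\begin{verbatim}
import sympy as sp
from itertools import combinations

def rank_kond2(two_ell, d, g=None):
    # Generic rank of system (kond2): vector fields of sl(2,R)+so(p,q)
    # acting on the radical coordinates p_{n,i}; two_ell = 2*ell.
    if g is None:
        g = [1]*d                               # metric diag(g_11,...,g_dd)
    ell = sp.Rational(two_ell, 2)
    pts = {(n, i): sp.Rational(sp.prime(3 + n*d + i), 7)
           for n in range(two_ell + 1) for i in range(d)}
    order = list(pts)                           # fixed ordering of the p_{n,i}

    def vec(coeff):                             # coefficient vector of a field
        return [coeff.get(v, 0) for v in order]

    rows = [vec({(n, i): 2*(ell - n)*pts[n, i] for n, i in order}),        # D'
            vec({(n, i): -n*pts[n - 1, i] for n, i in order if n > 0}),    # H'
            vec({(n, i): (2*ell - n)*pts[n + 1, i]
                 for n, i in order if n < two_ell})]                       # C'
    for j, k in combinations(range(d), 2):                                 # E'_{jk}
        rows.append(vec({(n, i): g[i]*((i == j)*pts[n, k] - (i == k)*pts[n, j])
                         for n, i in order}))
    return sp.Matrix(rows).rank()

# e.g. rank_kond2(3, 6) is expected to give 17 (ell = 3/2, d = 6 in Table 2)
\end{verbatim}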
\noindent With these observations, it is not difficult to establish that for any $\ell\geq \frac{3}{2}$ and $d\geq 2\ell+1$ the rank of the system (\ref{kond2}) is given by
\begin{equation}
{\rm rank}\; S =\left(2+d\right)+\ell\left(2d-3\right)-2\ell^2.\label{kond3}
\end{equation}
As the number of variables is $(2\ell+1)d$, we conclude that the system admits exactly
\begin{equation}
\mathcal{N}_1(S)= (2\ell+1)d- {\rm rank}\; S = 2\ell^2+3\ell-2
\end{equation}
solutions satisfying the constraint (\ref{kond}). Further, comparison with Proposition \ref{pro4} allows us to establish that for any fixed $\ell$ and $d\leq 2\ell+2$, the following identity holds:
\begin{equation}
\mathcal{N}\left(\overline{\mathfrak{Gal}}_{\ell}(p,q)\right)=\mathcal{N}_1(S).\label{trox}
\end{equation}
For increasing values of $d$, there appear additional invariants that necessarily depend on variables associated to the generators of the Levi subalgebra of $\overline{\mathfrak{Gal}}_{\ell}(p,q) $.
\medskip
\noindent Although there is currently no algorithmic procedure to construct a complete set of invariants of these Lie algebras for arbitrary values $d>2\ell+2$, those invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ satisfying the condition (\ref{kond}) can be easily computed by means of a reduction argument that leads to a linear system. To this end, consider the last of the equations in (\ref{kond2}). As the generators of $\mathfrak{so}(p,q)$ act linearly on the generators of the Abelian radical, it is straightforward to verify that the quadratic polynomials
\begin{equation}
\Phi_{n,s}= \sum_{k=1}^{d} \frac{g_{11}}{g_{kk}}\;p_{n,k}p_{n+s,k},\; 0\leq n\leq 2\ell,\; 0\leq s\leq 2\ell-n\label{ELE}
\end{equation}
are actually solutions of these equations. Indeed, any solution of the type (\ref{kond}) is built up from these functions. Let $\mathcal{M}_d=\left\{\Phi_{n,s},\; 0\leq n\leq 2\ell,\; 0\leq s\leq 2\ell-n\right\}$. The cardinality of this set is $2\ell^2+3\ell+1$, and we observe that not all of the elements in $\mathcal{M}_d$ are functionally independent. It follows by a short computation that
\begin{equation}
\widehat{D}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\; \widehat{H}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\; \widehat{C}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\label{ELE2}
\end{equation}
showing that this set is invariant by the action of $\mathfrak{sl}(2,\mathbb{R})$. Therefore, we can construct the solutions of system (\ref{kond2}) recursively using polynomials in the new variables $\Phi_{n,s}$. Specifically, renumbering the elements in $\mathcal{M}_d$ as $\left\{u_{1},\cdots ,u_{2\ell^2+3\ell+1}\right\}$, for any $r\geq 2$ we define a polynomial of degree $2r$ as
\begin{equation}
\Psi_r= \sum_{1\leq i_1\leq \cdots \leq i_r\leq |\mathcal{M}_d|} \alpha^{i_1\cdots i_r} u_{i_1}u_{i_2}\cdots u_{i_r}.\label{poly}
\end{equation}
Now, imposing the constraints
\begin{equation}
\widehat{D}^{\prime}(\Psi_r)=0,\; \widehat{H}^{\prime}(\Psi_r)=0,\; \widehat{C}^{\prime}(\Psi_r)=0,\label{ELE3}
\end{equation}
leads to a linear system in the coefficients $\alpha^{i_1\cdots i_r}$, the solutions of which enable us to find the polynomials that satisfy system (\ref{kond2}). Alternatively, the functions
$\Phi_{n,s}$ can be used as new variables to reduce the equations in (\ref{ELE3}) to a simpler form, which may be computationally more effective, although the underlying argument is essentially the same \cite{Dick}. In the case where the identity (\ref{trox}) holds, this reduction procedure allows us to obtain a complete set of invariants for the Lie algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q) $.
\medskip
\noindent As an example to illustrate the reduction, consider the 18-dimensional Lie
algebra $\overline{\frak{Gal}}_{\frac{3}{2}}\left( 3\right) $. As $d<2\ell+2$,
formula (\ref{trox}) applies and the algebra has 6 Casimir operators. From these,
two of order four in the generators can be derived from the central
extension $\frak{Gal}_{\frac{3}{2}}\left( 3\right) $ by contraction \cite
{We}. In this case, the set $\mathcal{M}_{3}$ has ten elements that we
enumerate as follows:
\begin{equation*}
\left\{ \Phi _{00},\Phi _{01},\Phi _{02},\Phi _{03},\Phi _{10},\Phi
_{11},\Phi _{12},\Phi _{20},\Phi _{21},\Phi _{30}\right\} =\left\{
u_{1},\cdots ,u_{10}\right\} .
\end{equation*}
The action of the differential operators associated to $\frak{sl}\left( 2,%
\mathbb{R}\right) $ on $\mathcal{M}_{3}$ is explicitly given in Table \ref{Tabelle3}.
\begin{table}[h!]
\caption{\label{Tabelle3} Transformation rules of variables $u_i$ under the $\mathfrak{sl}(2,\mathbb{R})$-action (\ref{kond2}).}
\footnotesize\rm
\begin{tabular}{@{}*{1}{c|cccccccccc}}
& $u_{1}$ & $u_{2}$ & $u_{3}$ & $u_{4}$ & $u_{5}$ & $u_{6}$ & $u_{7}$ & $%
u_{8}$ & $u_{9}$ & $u_{10}$ \\[0.1cm] \hline
$\widehat{D}^{\prime }$ & $6u_{1}$ & $4u_{2}$ & $2u_{3}$ & $0$ & $2u_{5}$ & $%
0$ & $-2u_{7}$ & $-2u_{8}$ & $-4u_{9}$ & $-6u_{10}$ \\
$\widehat{H}^{\prime }$ & $0$ & $-u_{1}$ & $-2u_{2}$ & $-3u_{3}$ & $-2u_{2}$
& $-u_{3}-2u_{5}$ & $-u_{4}-3u_{6}$ & $-4u_{6}$ & $-2u_{7}-3u_{8}$ & $-6u_{9}
$ \\
$\widehat{C}^{\prime }$ & $6u_{2}$ & $2u_{3}+3u_{5}$ & $u_{4}+3u_{6}$ & $%
3u_{7}$ & $4u_{6}$ & $u_{7}+2u_{8}$ & $2u_{9}$ & $2u_{9}$ & $u_{10}$ & $0$%
\end{tabular}
\end{table}
It follows from this action that polynomials $\Psi _{r}$ in the $u_{i}$ that satisfy the system (\ref
{ELE3}) are the solutions of the following system of linear first-order
partial differential equations:
{\footnotesize
\begin{equation}
\fl
\begin{tabular}{rr}
$6u_{1}\frac{\partial F}{\partial u_{1}}+4u_{2}\frac{\partial F}{\partial
u_{2}}+2u_{3}\frac{\partial F}{\partial u_{3}}+2u_{5}\frac{\partial F}{%
\partial u_{5}}-2u_{7}\frac{\partial F}{\partial u_{7}}-2u_{8}\frac{\partial
F}{\partial u_{8}}-4u_{9}\frac{\partial F}{\partial u_{9}}-6u_{10}\frac{%
\partial F}{\partial u_{10}}$ & $=0,$ \\
$-u_{1}\frac{\partial F}{\partial u_{2}}-2u_{2}\frac{\partial F}{\partial
u_{3}}-3u_{3}\frac{\partial F}{\partial u_{4}}-2u_{2}\frac{\partial F}{%
\partial u_{5}}-\left( u_{3}+2u_{5}\right) \frac{\partial F}{\partial u_{6}}%
-\left( u_{4}+3u_{6}\right) \frac{\partial F}{\partial u_{7}}-4u_{6}\frac{%
\partial F}{\partial u_{8}}$ & \\
$-\left( 2u_{7}+3u_{8}\right) \frac{\partial F}{\partial u_{9}}-6u_{9}\frac{%
\partial F}{\partial u_{10}}$ & $=0,$ \\
$6u_{2}\frac{\partial F}{\partial u_{1}}+\left( 2u_{3}+3u_{5}\right) \frac{%
\partial F}{\partial u_{2}}+\left( u_{4}+3u_{6}\right) \frac{\partial F}{%
\partial u_{3}}+3u_{7}\frac{\partial F}{\partial u_{4}}+4u_{6}\frac{\partial
F}{\partial u_{5}}+\left( u_{7}+2u_{8}\right) \frac{\partial F}{\partial
u_{6}}+2u_{9}\frac{\partial F}{\partial u_{7}}$ & \\
$+2u_{9}\frac{\partial F}{\partial u_{8}}+u_{10}\frac{\partial F}{\partial
u_{9}}$ & $=0.$%
\end{tabular}\label{reda}
\end{equation}
}
\noindent This system admits two quadratic solutions given by
\begin{eqnarray*}
F_{1}
&=&3u_{4}^{2}+27u_{6}^{2}-18u_{3}u_{7}-27u_{5}u_{8}+18u_{2}u_{9}-3u_{1}u_{10},
\\
F_{2}
&=&27u_{6}^{2}-5u_{4}^{2}+18u_{4}u_{6}+12u_{3}u_{7}-4u_{1}u_{10}+24u_{2}u_{9}-36\left( u_{5}u_{7}+u_{3}u_{8}\right) .
\end{eqnarray*}
Incidentally, these are the invariants that are obtained by contraction from those of the centrally-extended algebra $\mathfrak{Gal}_{
\frac{3}{2}}\left( 3\right) $.
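\medskip
\noindent These two solutions are easily checked by direct computation; a short verification in Python (sympy), added here for convenience and based on the action displayed in Table \ref{Tabelle3}, is the following:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u1:11')       # u[0],...,u[9] stand for u_1,...,u_10

# Action of D', H', C' on the variables u_i, taken from Table 3
D_act = [6*u[0], 4*u[1], 2*u[2], 0, 2*u[4], 0, -2*u[6], -2*u[7], -4*u[8], -6*u[9]]
H_act = [0, -u[0], -2*u[1], -3*u[2], -2*u[1], -u[2]-2*u[4], -u[3]-3*u[5],
         -4*u[5], -2*u[6]-3*u[7], -6*u[8]]
C_act = [6*u[1], 2*u[2]+3*u[4], u[3]+3*u[5], 3*u[6], 4*u[5], u[6]+2*u[7],
         2*u[8], 2*u[8], u[9], 0]

def act(action, F):
    # first-order operator sum_i action[i] * dF/du_i applied to F
    return sp.expand(sum(a*sp.diff(F, v) for a, v in zip(action, u)))

F1 = 3*u[3]**2 + 27*u[5]**2 - 18*u[2]*u[6] - 27*u[4]*u[7] + 18*u[1]*u[8] - 3*u[0]*u[9]
F2 = (27*u[5]**2 - 5*u[3]**2 + 18*u[3]*u[5] + 12*u[2]*u[6] - 4*u[0]*u[9]
      + 24*u[1]*u[8] - 36*(u[4]*u[6] + u[2]*u[7]))

for F in (F1, F2):
    print([act(A, F) for A in (D_act, H_act, C_act)])   # expected: [0, 0, 0]
\end{verbatim}
\medskip
\noindent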
In addition, there exist four further independent fourth-order solutions,
the explicit expressions of which are omitted because of their length. We
conclude that a complete set of Casimir operators of $\overline{\frak{Gal}}_{%
\frac{3}{2}}\left( 3\right) $ is given by two fourth-order polynomials in
the generators (corresponding to the quadratic solutions of (\ref{reda}))
and four invariants of order eight corresponding to the fourth-order
solutions of (\ref{reda}).
\section{Final remarks}
We have seen that the generalized conformal Galilean algebras $\widehat{\mathfrak{g}}_{\ell}(d)$ based on the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ can be extended naturally to pseudo-Galilean algebras possessing a Levi subalgebra isomorphic to $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ by introducing a nondegenerate metric tensor into the orthogonal part. Virtual copies of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ in the enveloping algebra of the semidirect product can be obtained simultaneously for all (half-integer) values of $\ell$ and $p+q=d$. The resulting Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ can be seen, to a certain extent, as ``real'' forms of the conformal Galilean algebra $\widehat{\mathfrak{g}}_{\ell}(d)$, their main structural difference residing in the maximal compact subalgebra. Whether these Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ have some definite physical meaning is still an unanswered question, but it is conceivable that they appear in the context of dynamical groups of higher-order Lagrangian systems or as the (maximal) invariance symmetry group of a (hierarchy of) partial differential equations. The search for physical realizations of the Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ is currently in progress.
\smallskip
\noindent We observe that the obstructions found for integer values of $\ell$, leading to the so-called exotic extensions (see e.g. \cite{Als19} and references therein), are a direct consequence of the incompatibility of the odd-dimensional representation $D_{\ell}$ with a Heisenberg algebra. Indeed, as shown in \cite{C45}, the necessary and sufficient condition for a semidirect product $\mathfrak{s}\overrightarrow {\oplus}_{\Gamma\oplus \Gamma_0}\mathfrak{h}_n$ to exist is that the (nontrivial) characteristic representation $\Gamma$ satisfies the condition $\Gamma\wedge \Gamma\supset \Gamma_0$. For the decomposition of $\Gamma$ into irreducible components, this implies in particular that an irreducible representation of $\mathfrak{s}$ must appear with the same multiplicity as its dual or be self-dual. Therefore, in order to further generalize the notion of Galilean algebras to $\mathfrak{sl}(2,\mathbb{R})$-representations with even highest weight, the characteristic representation $\Gamma$ must have the form
\begin{equation}
\Gamma =\left(D_\ell\oplus D_\ell\right)\otimes \rho_d .
\end{equation}
As happens with any coupling of a semisimple Lie algebra $\mathfrak{s}$ and a Heisenberg Lie algebra $\mathfrak{h}_n$, the (noncentral) Casimir operators of the semidirect product
$\mathfrak{s}\overrightarrow {\oplus}_{\Gamma\oplus \Gamma_0}\mathfrak{h}_n$ can be constructed using the invariants of $\mathfrak{s}$ by means of the virtual copy method \cite{Que,C45}. Application of this procedure in combination with the trace method provides explicit expressions for the invariants of $\mathfrak{Gal}_{\ell}\left( p,q\right) $ for arbitrary values of $\ell$ and $p+q=d$, comprising in particular the case $\widehat{\mathfrak{g}}_{\ell}(d)=\mathfrak{Gal}_{\ell}\left( d,0\right) $ recently announced \cite{raub}.
\medskip
\noindent The case of the unextended conformal pseudo-Galilean algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q) $ corresponding to the factor of $\mathfrak{Gal}_{\ell}(p,q) $ by its centre has also been considered. As this Lie algebra has an Abelian radical, it does not admit a virtual copy in the corresponding enveloping algebra, hence its invariants must be computed by other means. The number of Casimir operators for arbitrary values of the parameters has been computed by means of the Maurer-Cartan equations of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, where a varying growth behaviour for the number of invariants has been observed, depending on the ratio between the dimension of the pseudo-orthogonal subalgebra and the dimension $2\ell+1$ of the $\mathfrak{sl}(2,\mathbb{R})$-representation $D_\ell$. Although explicit formulae for the Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q) $ with $\ell\geq \frac{3}{2}$ can probably not be found generically, it has been shown that the functions depending only on variables of the radical provide a complete set of invariants for the Lie algebra whenever the condition $d\leq 2\ell+2$ is satisfied. A procedure that reduces the computation of such invariants to solving a linear system has been proposed. However, even with this systematization, the problem still involves cumbersome computations, as the orders of such invariants are quite high and there is currently no result that allows one to predict these orders. For values $d\geq 2\ell+3$, where there exist Casimir operators that do not satisfy the condition (\ref{kond}), no valuable ansatz has been found that allows one to find them systematically. Any kind of progress in this direction would constitute a useful tool for the generic analysis of invariant functions of semidirect products of semisimple and Abelian Lie algebras, a class that, up to certain relevant special cases, has still not been exhaustively studied.
\medskip
\noindent
\section*{Acknowledgment}
During the
preparation of this work, RCS was financially supported by
the research project MTM2016-79422-P of the AEI/FEDER (EU). IM was supported
by the Australian Research Council Discovery Grant DP160101376 and Future Fellowship FT180100099.
\section*{References}
\section{Introduction}
The fascinating discoveries of the quantum Hall effect (QHE)
originally found in single two-dimensional electron layers, have
also been extended to double layer systems, thanks to the
development of techniques for growing GaAs heterostructures
containing two separated layers of two-dimensional electron gas
(see for example references \cite{Eisen}). Apart from finding
plateaus in Hall conductivity at total filling fractions $\nu$
corresponding to the "direct sum" of the familiar integral and
odd-denominator fractional QHE in the individual single layers,
experiments also show the occurrence of new plateaus which are
intrinsic to double-layer systems and rely on interlayer quantum
coherence and correlations. On the theoretical front, a large
body of work has already been done on double-layer systems. An
extensive list of references to this literature has been given
in the lucid review of this subject by Girvin and MacDonald
\cite{GirvMac} and in the paper by Moon ${\it et al}$ \cite{Moon}.
Generally one analyses double layer systems by attributing to the
electrons, in addition to their spatial coordinates on the
plane, a two-component "pseudospin" whose up and down
components refer to the amplitude for the electron to be in the
first and second layers, respectively. The real physical spin of
the electrons is assumed, as a starting approximation, to be
fully polarised by the strong magnetic field and hence frozen
as a degree of freedom. However, even when real physical spin is
suppressed, the use of a pseudospin to represent the layer
degree of freedom maps the double layer spinless problem into a
monolayer problem with spin \cite{Mac}. Such a mapping allows one
to borrow for double layer systems, the rich body of insights and
results available from single layer systems with real spin.
Thus one may expect a fully symmetric (polarised) pseudospin
state to be energetically preferred because of a combination of
Coulomb repulsion and the Pauli principle which forces an
associated antisymmetric spatial wavefunction, just as in
itinerant ferromagnetism. Further, the relevance of Skyrmions to
systems with real spin, predicted by theoretical considerations
\cite{Sondhi}, \cite{Fertig} and supported by experimental evidence
\cite{Barrett}, has in turn prompted studies of similar topological
excitations in spinless double layer systems, but now involving
pseudospin (See Girvin and MacDonald
\cite{GirvMac}, Moon ${\it et al}$ \cite{Moon} and references given therein).
Because of interplane-intraplane anisotropy in Coulomb repulsion
between electrons located in the two layers, as well as the
capacitance energy of maintaining unequal charge density in the
two layers, the effective Action governing pseudospin enjoys
only U(1) symmetry of rotations about the z-axis (the direction
perpendicular to the x-y plane of the layers). Finiteness of
the capacitance energy between the two layers requires that
asymptotically the pseudospin must lie on the easy (x-y) plane.
The basic topological excitations in that case are the so-called
merons which are vortices in pseudospin with a winding number of
one-half (with respect to the second homotopy group $\Pi_{2}$).
These are similar to vortices in the X-Y model, but non singular
at the origin since the pseudospin is not restricted to lie on
the x-y plane. But like the former they do have an energy that
grows logarithmically with size. One can also have meron
anti-meron bound pairs whose energy is finite. Such a pair is
topologically equivalent to Skyrmions and carries unit winding
number. (For an introduction to such topological excitations,
their winding numbers, etc. see reference \cite{Raj}.)
The possibility of topological excitations like merons and
bimerons in double layer systems has generated much interest, in
part because of the excitement surrounding the Skyrmion
excitations in systems with real spin, and in part because of
the additional possibility here of Kosterlitz-Thouless type
\cite{KT} phase transitions caused by the break-up of bound bimerons into
separated meron pairs \cite{GirvMac},\cite{Moon}. Bimeron
solutions have already been extensively studied in a body of
papers by Girvin, MacDonald and co-workers \cite{Brey},
\cite{Yang} and \cite{Moon}. These calculations are based on
optimising microscopic wavefunctions with respect to the
microscopic interaction Hamiltonian.
We will also calculate bimeron solutions and their energies
here, but by using an alternate method. An Effective Action for
slowly varying pseudospin textures has already been obtained by
Moon et al \cite{Moon}. If one extremises that Action one will
get differential equations which the unit-vector valued field of
pseudospin configurations ${\vec m}(\vec r )$ should obey in
the classical limit.
In this paper we solve these coupled non-linear differential equations,
through a combination of analytically motivated ansatze followed
by numerical calculations. We obtain bimerons as approximate
time-independent solutions with appropriate topologically
non-trivial boundary conditions, for a range
of separations between the meron and its partner the anti-meron
and also for a set of different inter-layer distances. The
dependence of the bimeron texture on these variables is
discussed. They turn out to be reasonably similar to
what one would expect on
general grounds. We also obtain the energy of this bimeron as a
function of the separation between the meron centers. We
include in this energy contributions coming from the
pseudospin stiffness, its anisotropy, the capacitance energy and the
Coulomb energy. By minimising this energy with respect to
the meron separation, we are also able to give an independent
value for the optimal meron separation in a bimeron. We compare
these results with earlier work, including our own.
Apart from this, our work also enables us to independently check
the validity of a physical picture often used \cite{Yang} in
estimating bimeron energies, namely, that they can be viewed as
a pair of rigid objects carrying electric charge of ${1 \over
2}$ and a logarithmically growing energy. A work somewhat
similar in spirit to ours, but in the context of Skyrmions
of real spin systems was done by Abolfath {\it et al.} who
compared results obtained from solving a non-linear differential equation
with those obtained from microscopic calculations \cite {Abolfath}.
For yet another way of approaching meron solutions, starting from
a Chern-Simons field theory see the work of Ichinose and Sekiguchi
\cite{Ichinose}.
In an earlier paper \cite{Ghosh} we had done a similar study of
single meron solutions. But the present work is much more
complicated at the computational level. Single meron solutions are
circularly symmetric, with the spin component on the plane
pointing along the coordinate direction. Thus the only unknown,
namely, the spin-z component obeys an ordinary (though
non-linear) differential equation in the radius variable $r$.
Further, the boundary conditions relevant to a single meron can
be imposed, conveniently, at the end points $r=0$ and $r= \
\infty$. By contrast the boundary conditions characterising a bimeron
are $m_z = \pm 1$ at two finite points on the plane where the two merons
have their centers. The spin direction is also not simply related to
the coordinate direction, so that there are two independent
fields, say, $m_z$ and $\tan^{-1}\bigg( {m_y \over m_x} \bigg)$,
(since the ${\vec m}$ is constrained to be a unit vector)
which obey coupled partial differential equations on the plane.
We found it quite challenging to analyse these coupled equations
analytically as far as possible, and use that information to
employ an appropriate ansatz and coordinate system to numerically
solve the equations (using a desk-top computer).
Finally, we should reiterate that our work here clearly
relies heavily on the advances already
made by the Indiana group \cite{Moon}, \cite{Yang}, \cite{Brey}
and is to be viewed as something which will hopefully augment
their findings.
\section{The Spin Texture Equations}
The differential equations obeyed by spin textures are obtained
by extremising an effective action which has already been
derived by Moon {\it et al} \cite{Moon} starting from the basic
microscopic physics. See also Ezawa \cite{z}. These results were
summarised in our earlier paper \cite{Ghosh}. Briefly, the
pseudospin texture of a state is described by a classical {\it
unit} vector ${\vec m}(\vec r )$ which gives the local direction
of the pseudospin. Here ${\vec r} $ is the coordinate on the x-y
plane carrying the layers, while the magnetic field B is along
the z-direction. The fully polarised "ferromagnetic" ground
state corresponds to ${\vec m}$ pointing
everywhere in the same direction, say, along the x-axis.
Using this as the reference state, any other state
with some arbitrary texture ${\vec m}(\vec r )$ is given by
performing a local pseudospin rotation on this uniform ground
state. The leading low-wavelength terms in the effective
Action for time independent configurations ${\vec m}(\vec r )$,
as obtained by Moon {\it et al} \cite{Moon}, are
\begin{equation}
I ({\vec m})=\int d^{2}r \ \bigg[\frac{1}{2} \rho_{A} (\nabla
m_{z})^{2} + \frac{1}{2} \rho_{E} \big((\nabla m_{x})^{2} +
(\nabla m_{y})^{2}\big) +
\beta \ m_{z}^{2} \bigg] \ + \ C_{1}[m] \ \ + \ C_{2}[m] \
\label {Eff} \end{equation}
where
\begin{equation}
C_{1}[{\bf m}] \ = \ \frac{1}{2}\int d{\vec r}d{\vec r'}V({\vec r}-{\vec r'})
q({\vec r})q({\vec r'})
\end{equation}
and
\begin{equation} C_{2}[{\bf m}] \ \equiv {e^{2}d^{2} \over 32\pi^{2}\epsilon}
\int d^{2}r\int
d^{2}r'\,{m_{z}({\bf r})\nabla^{2}m_{z}({\bf r'}) \over |{\bf
r}-{\bf r'}|}
\end{equation}
The constants $\rho_A$ and
$\rho_E$ are pseudospin stiffness parameters whose physical origin is the
exclusion principle (Hund's rule) mentioned earlier. They are given by
\begin{eqnarray} \rho_A \ &=& \ \big( {\nu \over 32 \pi^2}\big) \int_{0}^{\infty} dk
k^3 \ V^A_k \ exp({-k^2 \over 2}) \nonumber \\
\rho_E \ &=& \ \big( {\nu \over 32 \pi^2}\big) \int_{0}^{\infty} dk
k^3 \ V^E_k \ exp({-k^2 \over 2}) \label{rho} \end{eqnarray}
where $V^A_k \ = \ 2\pi e^2 /(\epsilon k)$ and $V^E_k \ = \
(exp (-kd) 2\pi e^2) /(\epsilon k) $
are the Fourier transforms of the Coulomb interactions between electrons
in the same and different layers respectively.
All distances (and inverse wave vectors) are in units of the
magnetic length {\it l}.
The $ \beta m_{z}^{2}$ term represents the so-called capacitance or
charging energy needed to maintain unequal amounts of charge density
in the two layers. Recall that the z-component of pseudospin represents
the difference between the densities in the two layers. The constant
$\beta $ is given by
\begin{equation} \beta \ = \ \big( {\nu \over 8 \pi^2}\big) \int_{0}^{\infty} dk
\ k \ (V^{z}(0) - V^{z}(k)) \ exp({-k^2 \over 2}) \label{beta} \end{equation}
where $V^z_k = {1 \over2} (V^A_k - V^E_k)$.
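For orientation, these coefficients are easily evaluated numerically. The short Python sketch below is illustrative only (it is not part of the original analysis); it assumes $\nu=1$, works in units of $e^{2}/\epsilon{\it l}$, and computes $\rho_A$, $\rho_E$ and $\beta$ from eqs. (\ref{rho}) and (\ref{beta}), using the limit $V^{z}(0)=\pi d$ obtained by expanding $1-e^{-kd}$ for small $k$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative evaluation of the stiffness and capacitance coefficients,
# assuming nu = 1 and units e^2/(eps*l) = 1; d is in units of l.
nu, d = 1.0, 0.7
VA = lambda k: 2*np.pi/k                 # intra-layer Coulomb, V^A_k
VE = lambda k: 2*np.pi*np.exp(-k*d)/k    # inter-layer Coulomb, V^E_k
w  = lambda k: np.exp(-k**2/2)           # Gaussian weight in the integrals
rho_A = nu/(32*np.pi**2)*quad(lambda k: k**3*VA(k)*w(k), 0, np.inf)[0]
rho_E = nu/(32*np.pi**2)*quad(lambda k: k**3*VE(k)*w(k), 0, np.inf)[0]
Vz  = lambda k: 0.5*(VA(k) - VE(k))      # V^z_k
Vz0 = np.pi*d                            # V^z(0): the k -> 0 limit of Vz(k)
beta = nu/(8*np.pi**2)*quad(lambda k: k*(Vz0 - Vz(k))*w(k), 0, np.inf)[0]
print(rho_A, rho_E, beta)                # rho_A = nu/(16*sqrt(2*pi)) ~ 0.025
\end{verbatim}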
Finally, $q({\vec r})$ is the topological density associated with pseudospin
texture, which is also its charge density \cite{Sondhi}. It is given by
\begin{equation}
q({\vec r})=-\frac{\nu}{8\pi}\epsilon_{\nu \mu}\,{\bf m}({\vec r})\cdot[
{ \partial_{\nu}}{\bf m}({\vec r}){\times}
{ \partial_{\mu}}{\bf m}({\vec r})]
\label{topo}\end{equation}
We will refer to the non-local term $C_1$ as the Coulomb term since
it has been identified as the Coulomb energy associated with topological
structures in the pseudospin textures \cite{Moon}, \cite{Sondhi}. The
other non local term $C_2$ arises in the gradient expansion but is not
amenable to simple physical interpretation.
The field equations are obtained by extremising this Hamiltonian with respect
to the independent field variables, which can be taken to be
$m_z$ and
$\alpha \equiv \tan^{-1}\bigg( {m_y \over m_x} \bigg)$. This $\alpha$ is
just the azimuthal angle of the projection of ${\vec m}$ on to the x-y plane.
The non-local terms
$C_1$ and $C_2$ in the Action (\ref{Eff}) will render the field
equations into coupled integro-differential equations. While in the
single meron
case we did solve such an integro differential equation \cite{Ghosh},
for the more complicated
case of bimerons we will be content to solve the equations in the absence
of the integral terms $C_1$ and $C_2$. The contributions of these
terms can however be included in the total energy, but by using solutions of
the local equations. In mild justification of this strategy, we will
find later that the Coulomb energy $C_2$ for instance is less than
half the energy from the local terms in eq. (\ref{Eff}).
The coupled field equations for $m_z$ and
$\alpha \equiv \tan^{-1}\bigg( {m_y \over m_x} \bigg)$ resulting
from eq. (\ref{Eff}) in the absence of $C_1$ and $C_2$ are
\begin{equation}
\rho_{A}\nabla^{2}m_{z} \ + \ \rho_{E} m_{z} \bigg( \frac{(\nabla
m_{z})^{2}}{(1-m_{z}^{2})^{2}}+\frac{m_{z} \nabla^{2}m_{z}}{1-m_{z}^{2}}+
(\nabla\alpha)^{2} \bigg) \ - \ 2\beta m_{z} \ = \ 0 \label{mz} \end{equation}
and
\begin{equation} {\vec \nabla} .\big[ (1-m_{z}^{2}){\vec \nabla} \alpha \big]=0
\label{alpha} \end{equation}
\section {Bipolar coordinates}
To find bimeron solutions we have to numerically solve the
coupled partial differential
equations (PDE) in (\ref{mz}) and (\ref{alpha}).
The defining boundary condition of a bimeron is $m_z \ = \ \pm 1$ at
the points $(0, \pm a)$.
Our strategy will be to use
the known exact solution of these equations in the Non-Linear Sigma Model
(NLSM) limit, and solve the full equations
iteratively starting with the NLSM solution.
The NLSM limit is realised when the layer separation $d$ goes
to zero in which case we see from their defining equations above that $ \rho_A
\ = \ \rho_E $, i.e. the stiffness is isotropic and further that the
capacitance coefficient $\beta$ vanishes. Then, with $C_1$ and $C_2$ also
neglected, the action in (\ref{Eff}) is just that of the NLSM, all of whose
solutions are exactly known \cite{Raj}. They are conveniently described by
the complex field w(z) which represents the stereographic
projection of the unit sphere of textures ${\vec m}$. It is defined by
\begin{equation} w(z) \equiv {m_x + im_y \over (1 - m_z)} \end{equation}
where z = x+iy.
Our texture variables $m_z$ and $\alpha$
are related to w(z) by
\begin{eqnarray} m_z \ &=& \ {|w|^2 - 1 \over |w|^2 + 1} \nonumber \\
and \ \ \ \ \alpha \ &=& \ arg \ (w) \label{mzw} \end{eqnarray}
Any analytic function w(z) will be a solution of the NLSM.
In particular the function
\begin{equation} w(z) \ = \ {z - a \over z + a} \label{NLSM} \end{equation}
represents the bimeron, with the points (0,-a) and (0,a) representing
the centers of the two merons, where the solution gives $m_z = \pm 1$
respectively. It may be checked that (\ref{NLSM}) satisfies the coupled
equations (\ref{mz}) and (\ref{alpha}) in the isotropic limit.
When the interlayer separation d is not zero, we have to cope with
the coupled field equations (\ref{mz}) and (\ref{alpha}) with both
the anisotropic stiffness and capacitance terms present. Some analysis
of this system was done long ago by Ogilvie and Guralnik \cite{Ogil}
who studied the NLSM with the mass (capacitance) term included but
no anisotropy. (An ansatz suggested
in ref.~\cite{Ogil} does not work, as we will show below.)
Meanwhile Watanabe and Otsu \cite{Wata} studied the anisotropic NLSM but
without the mass term. Both made considerable progress analytically,
but neither offered exact or numerical solutions. Here we
will try to solve (\ref{mz}) and (\ref{alpha}) numerically after
including both the capacitance and anisotropic terms .
To do so , it will be convenient to use a bipolar coordinate system to
describe the x-y plane, as
might be expected when we have to impose boundary conditions at two
finite points (0,-a) and (0,a). These coordinates, $\eta$ and $\phi$,
are defined by
\begin{eqnarray} \ \ \ \eta \equiv \ \ ln \ |{z-a \over z+a}| \nonumber \\
and \ \ \ \phi \equiv arg \ \bigg( {z-a \over z+a } \bigg)
\label{etaphi} \end{eqnarray}
This coordinate set has many advantages \cite{Margenau}.
The points (0,-a) and (0,a) at which we have to impose
boundary conditions are now mapped into $\eta \rightarrow \pm
\infty $. The full x-y plane is mapped in $(\eta,\phi)$ coordinates to an
infinite strip with $\eta = [-\infty, +\infty]$ and $\phi =
[-\pi, \pi]$. Finally, it is clear upon comparing eq(\ref{etaphi})
to eq (\ref{NLSM}) that this set of coordinates is
closely related to the exact NLSM bimeron solution. Clearly the
exact NLSM solution (\ref{NLSM})
corresponds to the simple expressions
\begin{eqnarray} m_z \ = \ tanh \eta \nonumber \\
and \ \ \ \ \alpha \ = \ \phi \end{eqnarray}
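As a quick numerical sanity check (not part of the original derivation), this correspondence can be verified directly from the definitions (\ref{mzw}) and (\ref{etaphi}). The following Python sketch does so on a set of random points, with the half-separation $a$ chosen arbitrarily:
\begin{verbatim}
import numpy as np

# Check that w(z) = (z-a)/(z+a) corresponds to m_z = tanh(eta), alpha = phi.
a = 1.0
rng = np.random.default_rng(0)
z = rng.uniform(-5, 5, 200) + 1j*rng.uniform(-5, 5, 200)
w = (z - a)/(z + a)
m_z = (np.abs(w)**2 - 1)/(np.abs(w)**2 + 1)
m_x = w.real*(1 - m_z)                   # from w = (m_x + i m_y)/(1 - m_z)
m_y = w.imag*(1 - m_z)
eta, phi = np.log(np.abs(w)), np.angle(w)
print(np.max(np.abs(m_z - np.tanh(eta))),             # ~ 1e-15
      np.max(np.abs(np.angle(m_x + 1j*m_y) - phi)))   # ~ 1e-16
\end{verbatim}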
Away from the NLSM limit, since this is an
orthogonal coordinate system with simple expressions for the
gradient, divergence and Laplacian,
the equations (\ref{mz}) and (\ref{alpha}) become
\begin{eqnarray}
\ \bigg[(\frac{ \rho_{A}-\rho_{E}}{\rho_{E}}) + \frac{1}{1-m_{z}^{2} }\bigg]
(\partial_{\eta}^{2}m_{z} +\partial_{\phi}^{2}m_{z}) +\frac
{m_{z}\big((\partial_{\eta}m_{z})^{2} +(\partial_{\phi}m_{z})^{2}\big)}{({1-m_{z}^{2}})^{2}}
+m_{z}\big((\partial_{\eta}\alpha)^{2} +(\partial_{\phi}\alpha)^{2}\big)
\nonumber \\
- \ \frac{2\beta}{\rho_{E}} \ Q^{2}(\eta, \phi)\, m_{z} = 0 \label{mz1}\end{eqnarray}
\begin{equation} (1-m_{z}^{2})(\partial_{\eta}^{2}\alpha +\partial_{\phi}^{2}\alpha)
-2m_{z}({\partial_{\eta}m_{z} \partial_{\eta}\alpha +\partial_{\phi}m_{z}
\partial_{\phi}\alpha}) =0 \label{alpha1} \end{equation}
where
\begin{equation} Q^{2} \ (\eta, \phi) \ = \frac{a^{2}}{({\cosh{\eta}-\cos{\phi}})^{2}}
\end{equation}
is the Jacobian of this coordinate transformation.
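For completeness we recall the standard facts behind this form: bipolar coordinates are conformal, with equal scale factors $h_{\eta}=h_{\phi}=Q$, so that for any function $f$
\[
(\nabla f)^{2}=\frac{1}{Q^{2}}\Big[(\partial_{\eta}f)^{2}+(\partial_{\phi}f)^{2}\Big],
\qquad
\nabla^{2}f=\frac{1}{Q^{2}}\Big[\partial_{\eta}^{2}f+\partial_{\phi}^{2}f\Big].
\]
Multiplying the field equations through by $Q^{2}$ therefore removes the scale factor from all the gradient terms, and only the capacitance term, which contains no derivatives, retains an explicit factor of $Q^{2}(\eta,\phi)$.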
Now let us analyse these equations as different terms are included in stages.
(a) In the NLSM limit, our exact solution has $\alpha = \phi$.
Then (\ref{alpha1}) forces $m_z$ to be a function of $\eta$ alone,
$m_z = m_z \ (\eta)$. Upon inserting this into the other equation (\ref{mz1})
it becomes an {\it ordinary} non-linear differential equation. This
is the advantage of this choice of coordinates. The
solution can be verified to be $m_{z} = tanh(\eta)$.
(b) Next let us include anisotropy $(\rho_A \neq \rho_E)$,
while still keeping the capacitance term zero $(\beta = 0)$.
Once again we can set $\alpha = \phi$, and consequently $m_z = \
m_z (\eta) $, which will obey again an ordinary differential
equation given by
\begin{equation}
\ \bigg[(\frac{ \rho_{A}-\rho_{E}}{\rho_{E}}) + \frac{1}{1-m_{z}^{2} }\bigg]
(\partial_{\eta}^{2} \ m_{z} ) +\frac
{m_{z}(\partial_{\eta}m_{z} )^{2}}{(1-m_{z}^{2})^{2}}
+ m_{z} \ = 0 \label{mz2}\end{equation}
This has no analytic solution, but can be solved relatively
easily numerically, being just an ordinary differential equation in
the variable $\eta$. As boundary conditions we impose
$ m_z = 0$ \ at $\eta = 0$ and $m_z = 1$ at $\eta = \infty$.
(Note that the equation above
is symmetric under $\eta \rightarrow \ - \eta$, so that we can choose the
solution to be antisymmetric, i.e. $ \ m_z (- \eta) = - m_z( \eta)$).
The resulting numerical solutions for different values of layer
separation $d$ (on which the anisotropy depends) are shown in fig. 1.
One can see that with increasing layer separation, and hence
increasing anisotropy in the stiffness, the pseudospin component
$m_{z}$ reaches its asymptotic value more slowly.
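As an illustration of this step (and not the code actually used for the figures), the following Python sketch solves eq. (\ref{mz2}) as a two-point boundary value problem, with the stiffness ratio computed from eq. (\ref{rho}) and the far boundary imposed at a finite cut-off $\eta_{max}$; the parameter values are only assumptions:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp, quad

d = 0.7                      # layer separation in units of the magnetic length
# stiffness ratio rho_E/rho_A from the defining integrals; prefactors cancel
num = quad(lambda k: k**2*np.exp(-k*d)*np.exp(-k**2/2), 0, np.inf)[0]
den = quad(lambda k: k**2*np.exp(-k**2/2), 0, np.inf)[0]
aniso = den/num - 1.0        # = (rho_A - rho_E)/rho_E

def rhs(eta, y):
    m, dm = y
    coeff = aniso + 1.0/(1.0 - m**2)
    d2m = -(m*dm**2/(1.0 - m**2)**2 + m)/coeff
    return np.vstack((dm, d2m))

def bc(ya, yb):              # m_z(0) = 0; m_z pinned near 1 at the cut-off
    return np.array([ya[0], yb[0] - np.tanh(eta_max)])

eta_max = 5.0
eta = np.linspace(0.0, eta_max, 200)
guess = np.vstack((np.tanh(eta), 1.0/np.cosh(eta)**2))  # NLSM profile as guess
sol = solve_bvp(rhs, bc, eta, guess)
print(sol.status, sol.y[0, ::40])  # m_z approaches 1 more slowly than tanh(eta)
\end{verbatim}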
(c) Finally let us also include the capacitance term and consider
the equations
(\ref{mz1}) and (\ref{alpha1}) in full. Now the ansatz $\alpha = \phi$ is
no longer sustainable, in contrast to what has been suggested in
ref.~\cite{Ogil}. The substitution of the ansatz $\alpha = \phi $ in
equation (\ref{alpha1}) would again force $\partial_{\phi} m_z = 0$, i.e.
$ m_{z} = m_{z}(\eta) $. But now this is in contradiction with
equation (\ref {mz1}) which has an explicit $\phi $ dependence
through the last (capacitance) term
$\frac{2\beta}{\rho_{E}} \ Q^{2}(\eta, \phi)\,m_{z}$. Therefore, once
one includes the capacitance term in equation(\ref{mz1})
both $\alpha$ and $m_{z}$ become functions of both $\eta$ and $\phi$.
One has unavoidably to
solve the coupled non-linear PDE for $m_z = m_{z}(\eta,\phi)$ and
$\alpha= \alpha(\eta,\phi)$.
We do this by employing what we believe is a
good ansatz for $\alpha$ which approximately satisfies (\ref{alpha1}).
We then solve the other equation (\ref{mz1}) numerically after
inserting that ansatz for $\alpha$.
Our ansatz is motivated by the following arguments.
One can see from equation (\ref{mz1}) that the troublesome $\phi$-dependent
term $Q^2$ is negligibly small in the large $\eta$ region
$(Q \sim sech (\eta))$
and is most dominant in the small $\eta$ region.
Hence $\alpha$ will still approach $\phi$ as $\eta
\rightarrow \infty$ but needs to be modified substantially
in the small $\eta$ region where however $m_{z} \ll 1$.
When $m_{z} \ll 1$ equation (\ref{alpha1}) can be approximated by
\begin{equation} \nabla^{2}\alpha=0 \label{laplace}\end{equation}
This is just Laplace's equation in two dimensions whose solutions
are all harmonic functions. With this in mind we choose our ansatz for
$\alpha$ as follows :
\begin{equation}
\alpha = \phi - B \kappa \exp(-|\eta|)\sin(\phi)
\label{alpha2}\end{equation}
where
\begin{equation}
\kappa \equiv (\frac{2\beta}{\rho_{E}})^{1\over 2} \ \ a \ \end{equation}
This solves Laplace's equation
and satisfies all the required boundary conditions and asymptotic behaviour,
namely
\begin{eqnarray}
\alpha \rightarrow \phi \ \ \ \ &as \ \ \ \ \eta \rightarrow
\pm \infty \nonumber \\
\alpha = 0 \ \ \ &when \ \ \ \ \ \phi =0 \nonumber \\
\alpha = \pi \ \ \ &when \ \ \ \ \phi =\pi \nonumber \\
\alpha = \phi \ \ \ &when \ \ \ \kappa =0. \label{boundary} \end{eqnarray}
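As a quick check (not spelled out in the text above), the correction term in (\ref{alpha2}) is indeed harmonic away from $\eta=0$: for $\eta>0$,
\[
(\partial_{\eta}^{2}+\partial_{\phi}^{2})\,e^{-\eta}\sin\phi
\;=\; e^{-\eta}\sin\phi \;-\; e^{-\eta}\sin\phi \;=\;0,
\]
and similarly for $\eta<0$, so the only departure from (\ref{laplace}) is concentrated at the cusp discussed next.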
Note that the ansatz has a cusp at $\eta = 0$. This need not cause
concern. Some such cusps can be
expected on physical grounds and are familiar in soliton physics. The
point is that each meron feels some force due to the other (Coulomb
plus a logarithmic force) at arbitrary separation. We would expect them
to move because of this force, and cannot strictly
speaking expect a static bimeron solution to exist at arbitrary
separation. But a cusp, like the
one in the above ansatz, amounts to a delta function in the second
derivative and can be interpreted as an external force just at $\eta = 0$
which can "hold the two merons together" at arbitrary separation. For
more discussion of this point see Rajaraman and Perring and Skyrme
\cite{RR} where this technique was used to get intersoliton forces between
one dimensional solitons.
The constant B is chosen by minimising the energy.
Substituting this ansatz in equation (\ref{mz1}) we then solved it
numerically subject to the boundary condition
\begin{eqnarray}
m_{z} \ = 0 \ \ \ \ &at \ \ \ \eta =0\nonumber \\
m_{z} \ = \pm 1 \ \ \ &when \ \ \ \eta = \pm \infty. \label{kboundary}
\end{eqnarray}
It is sufficient to solve the equation in the first quadrant, i.e.
$\eta \in [0,\infty]$ and $\phi \in [0,\pi]$. For the rest of the
quadrants the solutions can be obtained by writing
\begin{eqnarray}
m_{z}(-\eta,\phi)=-m_{z}(\eta,\phi) = -m_{z}(\eta,-\phi)\nonumber \\
\alpha(-\eta,\phi)=\alpha(\eta,\phi)=-\alpha(\eta,-\phi) \nonumber \\
\end{eqnarray}
which is consistent with the invariance of equations
(\ref{mz1}) and (\ref{alpha1}) under the transformation $\eta
\rightarrow -\eta$ and $ \phi \rightarrow -\phi$.
\section{Numerical Procedure}
Before proceeding to solve this PDE (\ref{mz1}) we must take note of the
fact that the last term of the equation
(\ref{mz1}) is singular at the point
$(\eta=0,\phi=0)$. This point corresponds to spatial infinity
on the parent x-y plane.
As one moves near this point the leading
singularity in the equation, coming from the $Q^2$ term,
goes like $\frac{4\kappa^2}{(\eta^{2} +\phi^{2})^{2}}$
with other subleading singularities of the form
$\frac{1}{\sqrt{\eta^{2} +\phi^{2}}}$. It can be seen that this leading
singularity can be offset by requiring that $m_{z}$ behave as
$\exp\bigg(-\frac{2\kappa}{\sqrt{\eta^{2}+\phi^{2}}}\bigg)
\ g (\eta,\phi)$, where this $g (\eta,\phi)$
is a smoother function for which one solves numerically.
This corresponds, in more familiar polar coordinates $(r,\theta)$ to
writing $m_z$ in the form $\exp\big(-\frac{\kappa r}{a}\big)$
\ ${\tilde g} \ (r,\theta)$. That $m_z$ will suffer
such an exponential fall-off
as $r \rightarrow \infty $ can also be inferred directly from the
"mass term " $2\beta m_z$ in the original field equation (\ref{mz}).
Similarly one can also verify that the
cancellation of the subleading singular terms can be achieved by
requiring that $g$ behave like
$\sqrt{\eta^{2}+\phi^{2}}$ as $\eta, \ \phi \ \rightarrow 0$.
Given this functional form of $m_{z}$ near the origin of the $ \eta, \phi$
plane, the boundary conditions (\ref{kboundary}), and the
ansatz (\ref{alpha2}) for $\alpha$ we solved equation
(\ref{mz1}) through an iterative procedure. We start with the solution
for $\kappa$=0 but with full anisotropy, which can be obtained relatively
easily from the ordinary differential equation (\ref{mz2}).
We then use this solution as input to obtain the
solution for $\kappa $ equal to a small number $\epsilon$
through the Newton-Raphson method
\cite{numer}. The solution for $\epsilon$ is then used as input
to obtain the solution for $2\epsilon$ and so on. This procedure
is repeated until one reaches the desired value of $\kappa$. The
advantage of this
procedure is that one can make $\epsilon$ arbitrarily small
to make the Newton-Raphson method converge. In this way we obtained
solutions for different values of the ansatz parameter B
for each value of $\kappa$.
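The overall structure of this continuation loop is sketched below in Python. The residual \texttt{F} used here is only a compact stand-in on an $(\eta,\phi)$ grid, not the discretized field equation itself; in the actual calculation it would be the finite-difference form of eq. (\ref{mz1}) with the ansatz (\ref{alpha2}) substituted for $\alpha$, and \texttt{newton\_krylov} plays the role of the Newton-Raphson solver:
\begin{verbatim}
import numpy as np
from scipy.optimize import newton_krylov

n_eta, n_phi = 40, 40
eta = np.linspace(0.05, 5.0, n_eta)[:, None]
phi = np.linspace(0.05, np.pi, n_phi)[None, :]

def F(m, kappa):             # stand-in residual, NOT the real field equation
    lap = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
           np.roll(m, 1, 1) + np.roll(m, -1, 1) - 4.0*m)
    return lap - kappa**2*np.tanh(m) + np.exp(-(eta**2 + phi**2))

m = np.tanh(eta)*np.ones((1, n_phi))     # kappa = 0 starting solution
eps, kappa_target, kappa = 0.5, 4.4, 0.0
while kappa < kappa_target:
    kappa = min(kappa + eps, kappa_target)
    # previous solution seeds the Newton iteration at the new, larger kappa
    m = newton_krylov(lambda u: F(u, kappa), m, f_tol=1e-6)
print(kappa, float(np.abs(m).max()))
\end{verbatim}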
\section{Results and Discussion}
Our solution of equation (\ref{mz1}), along with the ansatz
(\ref{alpha2}), gives us the value of the pseudospin vector ${\vec m}$ as a
function of $\eta$ and $\phi$, or equivalently, the value of the vector-field
${\vec m}$ on a lattice of points on the parent x-y plane. We repeated
this calculation for a set of values
of the parameter B in the ansatz (\ref{alpha2}).
We found that as one varies B starting from 0, the energy does not vary much
as B goes from 0 to 0.1, but then it increases sharply after
B=0.1. This behaviour is seen to be common to all $\kappa$ and all $a$.
Hence we take B to equal 0.1 and solve the PDE for a variety of
values of layer separation {\it d}, and bimeron separation {\it a}
({\it a} is actually half of the meron-antimeron separation). Together
all these solutions represent a large body of calculated data. But it is
neither feasible nor very interesting to try to display it all in this
paper. Instead we will try to bring out salient features of our solutions
through examples.
Recall from (\ref{mz2}) that in the absence of the capacitance term $m_z$
had no $\phi$
dependence. To give some feel for how the $m_z$ varies with $\phi$ in the
presence of the capacitance term, we plot in fig. 2 the solution
$m_{z}(\eta)$ of equation (\ref{mz1}) for a set of values for $\phi$. This
solution corresponds to $ d= 0.7$ and $a=3.158$. The sequence of curves shown
correspond to $\phi$ equal to 0, 0.2$\pi$, 0.47$\pi$, and 0.94$\pi$
respectively with the outermost one belonging to $\phi$ equal to 0. As we
have discussed earlier, as $\eta$ and $\phi$ tend to zero, the solution
should damp exponentially as
$exp(-\frac{\kappa}{\sqrt{\eta^{2}+\phi^{2}}})$. Correspondingly we see in
fig.2 that the low $\phi$ curves rise very slowly as $\eta$ increases away
from zero. We also give for comparison, in the form of the dotted curve,
the function tanh ($\eta$) which is the solution in the NLSM limit. The
comparison shows that the restructuring of the pseudospin texture due to
the capacitance term and anisotropy is considerable.
As an alternate representation of our results, we show in fig. 3
the projection of ${\vec m}$ on the x-y plane, for the example of
$d$ equal to
0.7 and $\kappa$ equal to 4.4. (All lengths throughout this article are
in units of the magnetic length ${\it l}$). The length of each
arrow gives the magnitude of its easy-plane projection
$\sqrt{m_{x}^{2}+m_{y}^2}$
and its direction gives the azimuthal angle
of the projected vector , namely, $\alpha = \
tan^{-1}\bigg(\frac{m_{y}}{m_{x}}\bigg)$.
The plot clearly shows that our "bimeron" solution is indeed
a meron-antimeron pair. Note that, as desired,
$\vec m$ lies along the x-axis asymptotically. This picture
closely resembles the general structure obtained by Brey {\it et al.}
\cite{Brey}. The data corresponding to all other values of $d$ and $a$
we studied have a similar behavior.
In fig.4 we plot the topological charge density given in eq.(\ref{topo})
as a function of $\eta$ and $\phi$ in the presence of all the local terms
in the field equations, including anisotropic ones. In viewing
this figure it may be
helpful to remember that large $|\eta|$ corresponds to the meron centers
while $\eta = 0, \phi=0$ corresponds to spatial infinity. $ \phi=
\pi $ corresponds to the line joining the two merons. As the topological
charge density is symmetric when either of the coordinate variables
changes sign, we show the contours only in the first quadrant where both
$\eta$ and $\phi$ are positive.
Next let us turn to the energetics of these bimeron solutions. In fig.5.
we show how the "local" energy i.e. the contribution from the local
terms in the energy functional (all terms in eq(\ref{Eff}) except
for $C_1$ and $C_2$) varies when one changes the separation $2a$
between the meron and antimeron centres. The appearance of a minimum
is quite conspicuous and generic to all the layer separations for
which the energy is calculated. The example in fig. 5 corresponds
to a layer separation of 0.7.
In fig.6 we plot the Coulomb energy $C_{1}$ evaluated using our solution
of the equation (\ref{mz1}), as a function of the bimeron separation. The
continuous curve is the best fit to our calculated points. Sometimes in
the literature, a phenomenological estimate of bimeron energetics is made
assuming that it can be viewed as a bound pair of two merons, each
symmetrical, undistorted by the other and carrying a charge of
$\frac{e}{2}$ . Such a pair would have a Coulomb energy of $\frac{1}{8a}$
(in units of $\frac{e^2}{\epsilon {\it l}}$ that we are using). To see how
good an approximation this simple picture is, we give in the same fig.6,
in the form of a broken line, the plot of this function $\frac{1}{8a}$.
We see that the value of the Coulomb energy we get from the actual bimeron
solution is much larger than what the simple two-charge picture would
give. This is presumably because each meron is considerably squashed
(polarised) by the close proximity of the other. In our earlier work on
single merons \cite{Ghosh}, we had found that at the layer separation
($d=0.7$) used in fig.6, the core-radius of individual merons is about 2,
which is of the same order as the meron-separation in fig 6. In fact we
can see that the gap between the two curves in fig.6 is higher for smaller
$a$ where the individual merons are squeezed together more. Of course our
results, while indicative , may not be quantitatively unambiguous. For
instance, recall that our solution was obtained using only the local terms
in the differential equation and the Coulomb energy was calculated by
substituting this solution into the integral $C_1$. The non-local Coulomb
term's influence \underline{on} the solution has not been included.
In fig.7 we plot the variation of three terms in the energy functional,
namely the contribution from the local terms (capacitance+gradient energy),
$C_{1}$ and $C_{2}$, as a function of the bimeron separation. The data
presented here corresponds to layer separation $d$ equal to 0.7{\it l},
but this behaviour is representative of almost all the layer separations
(0.5, 0.6, 0.7 and 0.8) for which we have found solutions.
The trend of all three contributions is the same
for the other layer separations also with only slight changes in the slope of
the curves.
Our calculations were done for different bimeron separations
$a$, for each layer separation $d$. In reality, the exact
solution should exist only for some optimal bimeron separation
$a$ for each value of $d$. One can ask if our calculations
would reveal this by minimising the total energy at some
particular $a$. To see this, we have shown in Fig. 8 the total
energy at d=0.7 (i.e. the sum of all three contributions plotted
in the fig.7) as a function of bimeron separation $a$. As we
can see from fig.7, the total energy keeps decreasing with $a$,
all the way to about $ a = 3.2$, which is the highest value
up to which we could calculate, given the limitations of our
computing facilities. However, the decrease is clearly levelling
off and is indicative that a minimum may exist at around a=4 or
5. What we have done, in drawing fig.8, is to obtain a
best-fit curve of the data points up to a=3.2 and extrapolate
that curve up to a=4.5. For what it is worth, such extrapolation indicates a
minimum at about a=4. This corresponds to a meron-antimeron separation
of about 8, larger than what Yang and
MacDonald found by entirely different methods (see their fig. 2)
\cite{Yang}. Their value of the meron separation for $d=0.7$ is
about 4.5. We attribute this discrepancy to the fact, noted
already in our discussion of fig.6, that the Coulomb energy in our
explicit calculation of the bimeron solution is higher than
the undistorted meron pair estimate used in ref(\cite{Yang}).
The actual larger Coulomb repulsion is, we believe, responsible for the
larger optimal meron separation that we get.
We saw that the Coulomb interaction energy between the two merons as given
by the term $C_1$ in the present calculation differs quite a bit from the
simple picture of the bimeron as a pair of undistorted merons of charge $
\frac{e}{2}$ each. One can ask if there is a similar discrepancy in the
non-Coulombic energy as well. This is the subject of Table 1.
In the picture of a bimeron as a pair of merons \cite{Moon} ,
\cite{Yang}, \cite{Ghosh}, it will have energy equal to
\begin{equation} E_{prev} \equiv \ 2E_{mc} + \ 2\pi \rho_{E} \ \ln \bigg(
\frac{2a}{R_{mc}}\bigg)
\end{equation}
where $E_{mc}$ and $R_{mc}$ are respectively the core energy
and radius of a single meron, which have a logarithmic interaction
with each other because of the logarithmic divergence of the self energy
of single merons. (As stated already we are leaving out their
Coulomb interaction in the comparison being done in this table.)
This $E_{prev}$ has been calculated in our previous work
\cite{Ghosh}. It can be compared with the local part of the
energy in the present calculation. Such a comparison is given in
Table 1 for different values of $d$, using the optimal value of
the meron separation $a$ which minimises $E_{local}$. We see
that the comparison is not bad considering the completely
different ways of estimating this energy in this paper and in
earlier literature.
In conclusion, our solution for the bimeron obtained
by directly solving the coupled partial differential equations that
the bimeron texture obeys provides an alternate way of
obtaining the profiles and energies of these objects. As far as
the local part of the energy is concerned, the results are in
broad agreement with earlier microscopic derivations. But the
Coulomb energy we obtain is higher by a factor of about 2 from earlier
simple estimates because in actuality, the two merons in close
proximity will not behave like undistorted symmetrical merons.
\section{Acknowledgements} We are indebted to Awadesh Prasad for
his unstinting help on many fronts. SG would also like to thank Sujit
Biswas and Anamika Sarkar for helpful discussions on the numerical work.
SG acknowledges the support of a CSIR Grant no.
9/263(225)/94-EMR-I.dt.2.9.1994.
\begin{figure}
\label{fig1}
\caption{The solution $m_{z}(\eta)$ of equation (\ref{mz2}).
The three continuous curves correspond, as you go outwards, to three different
values of layer separation $d$ equal to 0.5, 0.6 and 0.7 respectively
in the unit of magnetic length ${\it l}$.
The dotted curve corresponds to the exact solution of NLSM i.e.
$m_z =tanh(\eta)$.}
\end{figure}
\begin{figure}
\label{fig2}
\caption{The solution $m_{z}(\eta)$ of equation (\ref{mz1})
for a set of values of $\phi$. The curves correspond, as you go inwards, to
$\phi = 0, 0.2\pi, 0.47\pi, 0.94\pi$ respectively, with
the outermost one corresponding to $\phi$ equal to 0. The layer separation
$d$ is equal to 0.7{\it l} and the bimeron separation $a$ is equal to
3.158{\it l}.
The dotted curve at the top again corresponds to
$m_z =tanh(\eta)$.}
\end{figure}
\begin{figure}
\label{fig3}
\caption{This figure gives the magnitude and direction
of x-y projection of $\bf m$ at different points on the plane.
The layer separation and the bimeron separation are same as in
fig. 2.}
\end{figure}
\begin{figure}
\label{fig4}
\caption{A contour plot of the topological charge density of
the bimeron when both the capacitance term and the anisotropy
term are incorporated. This particular plot corresponds to a
layer separation $d$ equal to 0.7 and bimeron separation $a$
equal to 3.158 both in the unit of magnetic length {\it l}.
The number against each contour (shown by broken curves)
denotes the corresponding charge density.}
\end{figure}
\begin{figure}
\label{fig5}
\caption{This figure gives the plot of the energy ($E_{local}$)
coming from the
local terms in the action as a function of bimeron separation $a$
in the unit of magnetic length ${\it l}$. The unit of energy is
$\frac{e^{2}}{\epsilon {\it l}}$. The points correspond to the actually
computed values of the energy while the continuous curve is the best
fit to them. The form of the best-fit curve is $E = A + B
(a-C)^{2}$ where A, B and C are found to be
0.223, 0.008 and 2.76 respectively.
This data corresponds to a layer separation $d$ equal to $0.7{\it l}$}
\end{figure}
\begin{figure}
\label{fig6}
\caption{This figure gives the plot of the coulomb energy as a function of
bimeron separation $a$ in units of magnetic length ${\it l}$.
The unit of energy is $\frac{e^2}{\epsilon\it l}$. The upper
curve is our computed value of the Coulomb energy integral $C_1$
using the solution of equation (\ref{mz1}) (points). The continuous
line is the best fit to these points. The form of the best-fit
curve is $E = \frac{A}{a^{B}}$ where A and B are found to be 0.847
and 0.821 respectively.
The dotted curve
at the bottom corresponds to the Coulomb energy
that the bimeron would have, if viewed as a bound pair of two
point charges of $\frac{e}{2}$ each, separated by a distance
$2a$ . This data corresponds to a layer separation
$d$ equal to 0.7}
\end{figure}
\begin{figure}
\label{fig7}
\caption{This figure gives a relative estimate of the contributions
of the three types of terms in the action, namely, the local terms,
$C_{1}$ and $C_{2}$, as a function of bimeron separation $a$. The units
are as specified in the earlier figures. This data also corresponds
to a layer separation of $0.7{\it l}$.}
\end{figure}
\begin{figure}
\label{fig8}
\caption{A plot of the total energy $E(total)$
as a function of bimeron separation $a$, for a
layer separation of 0.7.
This curve was obtained by extrapolating the curve fitted to the
calculated values going up to $a = 3.2$.}
\end{figure}
\newpage
\noindent Table 1: The optimal bimeron separation
($a$), the bimeron local energy ($E_{local}$) and meron
pair energy ($E_{prev}$) from our previous work \cite{Ghosh} as
a function of the layer separation $d$.
The unit of energy is $\frac{e^{2}}{\epsilon l}$ and the unit of length
is ${\it l}$.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
d & $a$ & $E_{local}$ & $E_{prev}$ \\
\hline
0.5 & 3.30 & .270 & .217 \\
\hline
0.6 & 3.16 & .248 & .226 \\
\hline
0.7 &2.72 & .223 & .224 \\
\hline
0.8 & 2.39 & .201 & .214 \\
\hline
\end{tabular}\\
\end{center}
|
\section{Introduction}\label{sec1}
\IEEEPARstart{M}{assive multiple-input multiple-output} (mMIMO) is a well-recognized radio frequency (RF) technology for highly
spectrum-efficient communications \cite{6375940}. Current mMIMO technology has two main architectures, i.e., the
analogue-digital hybrid architecture and the fully digital (FD) architecture (see \cite{1519678}).
In the FD-mMIMO system, every antenna element is connected
to a dedicated RF chain. It has been revealed that the energy consumption of each RF chain grows exponentially with the resolution of signal quantizers
\cite{6457363, 761034}. This has recently motivated the use of low-resolution (mainly $1$-$3$ bit) quantizers for FD-mMIMO
(e.g. \cite{5351659, DBLP:journals/corr/RisiPL14, 6891254}). The information-theoretic study of low-resolution quantized FD-mMIMO can be found in the literature
(e.g. \cite{6891254, 7106472, 7080890}).
In the scope of estimation and detection theory, digital signal processing problems such as channel estimation, synchronization and
signal detection can be fundamentally changed due to the use of low-resolution quantizers \cite{6987288, 7088639, 9311778}. This is because wireless systems
become non-linear and non-Gaussian, which violates the linear-Gaussian assumptions that are commonly adopted in
conventional MIMO systems. Specifically for the signal detection problem, the maximum-likelihood detection becomes even more
complicated as it is no longer equivalent to the integer least-squares problem \cite{4475570,9145094}. This is particularly true for FD-mMIMO
systems with $1$-bit quantizers.
In order to improve the computational efficiency, a number of near-maximum-likelihood algorithms have been reported in the literature (e.g. \cite{7439790, 8240630, 8345169}). However, they are still too complex to implement in practice.
Approximate message passing (AMP) algorithms could offer near Bayesian-optimum solutions with much lower computational complexities (e.g. \cite{7355388, 7426735, 8234637}).
Nevertheless, linear algorithms are practically more appealing for their lower complexities and simple architectures.
The foundation of linear detection algorithms lies in a linear system model. Therefore, the central task of linear algorithm design is to find
a good linear approximation of the non-linear communication system. In the literature, one of the widely used linear approximation models is
the additive quantization noise model (AQNM) originally proposed in \cite{mezghani11}. It assumes the quantization distortion to be
additive white Gaussian noise (AWGN) and correlated with the input of the quantizer. With the AQNM model, the LMMSE channel equalization and symbol-by-symbol detection algorithm has been extensively studied.
Moreover, the AQNM model has been employed for the information-theoretic
study of the achievable rate, capacity bounds or spectral efficiencies in \cite{7307134, 7308988, 7876856, 7896590, 7420605}.
In \cite{Mezghani2012}, a modified-AQNM model has been proposed by making the quantization noise uncorrelated with the input signal
through Wiener-Hopf filtering. This modified version renders the derivation of auto-covariances and cross-covariances involved in the
LMMSE analysis much easier. When the input signal is white Gaussian, we show in Section \ref{sec2b2} that the modified-AQNM is equivalent
to the original AQNM model. Using Bussgang's theory\footnote{Please
see \cite{Bussgang52} for the details of Bussgang's theory.}, the modified-AQNM model has been further generalized.
Specifically for the $1$-bit quantization, the quantization noise is actually far from Gaussian. Then, Bussgang's theory
has been used in \cite{Mezghani2012} to derive an exact form of the relevant auto-covariances and cross-covariances for the LMMSE
channel equalizer. Other relevant works that use Bussgang's theory for linear receiver design
or performance analysis can be found in \cite{nguyen2019linear,7931630,7894211}.
The hypothesis of Gaussian quantization noise renders the AQNM model and its variations not sufficiently accurate for some
cases (see the detailed discussion in \cite{8337813}). Moreover, it has been observed that the AQNM-based LMMSE channel equalizer can introduce
a scalar ambiguity in the signal amplitude. This scalar ambiguity is not a problem for constant-modulus modulations such as M-ary
phase-shift-keying (M-PSK). However, it is detrimental to non-constant-modulus modulations such as M-ary quadrature-amplitude-modulation (M-QAM),
and thus it must be appropriately handled for instance through the energy normalization \cite{7439790,nguyen2019linear,tsefunda}.
After all, the major concern is that the inaccuracy of the AQNM models could disadvantage the receiver optimization as far as
non-constant-modulus modulations are concerned \cite{9144509,7247358}. Arguably, the generalized-AQNM model does take into account the scaling
ambiguities. However, we find the current studies rather intuitive, and that a more rigorous analytical study is needed to develop a deeper
understanding of the quantization distortion as well as its impact on the LMMSE channel equalizer. This forms the major motivation of our work.
The major contribution of our work lies in the employment of Hermite polynomials to develop the aforementioned deeper understanding.
This study results in a novel linear approximation model using the second-order Hermite expansion (SOHE).
In brief, the SOHE model can be described by the following vector form (see the detail in Section \ref{sec3})
\begin{equation}\label{eqn001}
\mathbf{y}=\mathcal{Q}_b(\mathbf{r})\approx\lambda_b\mathbf{r}+\mathbf{q}_b,
\end{equation}
where $\mathcal{Q}_b(\cdot)$ is the $b$-bit uniform quantizer introduced in \cite{Proakis2007},
$\mathbf{r}, \mathbf{y}\in\mathbb{C}^{K\times 1}$ are the input and output of the quantizer, respectively,
$\lambda_b$ is the coefficient of the first-order Hermite kernel which is a function of the resolution of the quantizer ($b$),
and $\mathbf{q}_b\in\mathbb{C}^{K\times 1}$ is the quantization distortion with its characteristics related to the resolution of the
quantizer ($K$: the size of the relevant vectors). The SOHE model differs from the existing AQNM models mainly in two respects:
{\em 1)} The Hermite coefficient ($\lambda_b$) in the SOHE model describes how the signal energy changes with respect to the
resolution of the quantizer. The relationship between $\lambda_b$ and the resolution $b$ is mathematically formulated,
based on which the characteristics of $\lambda_b$ are exhibited through our analytical work.
{\em 2)} The quantization distortion ($\mathbf{q}_b$) is modeled as the second-order Hermite polynomial of the input signal
($\mathbf{r}$). There is no imposed assumption for the quantization distortion to be white Gaussian as well as their correlation behavior
with the input signal. It will be shown in Section \ref{sec3}, through mathematical analysis, that the cross-correlation between $\mathbf{q}_b$
and the input $\mathbf{r}$ depends on the stochastic behavior of the input signal. When the input is an independent white Gaussian process,
the quantization distortion can be considered to be uncorrelated with the input signal.
With the above distinctive features, we find that the SOHE model can be used to explain almost all interesting phenomena observed so far
in the research of low-resolution quantized MIMO signal detection. When using the SOHE model for the LMMSE analysis, our analytical work shows
that the current LMMSE algorithm should be enhanced by incorporating a symbol-level normalization mechanism, and thereby resulting in an
enhanced-LMMSE (e-LMMSE) channel equalizer. The performance gain of e-LMMSE is demonstrated through extensive computer simulations in
Rayleigh fading channels.
In addition, in response to the reviewers' comments, we enrich our technical contribution with the SOHE-based LMMSE channel estimation approach.
It is found that the SOHE-LMMSE channel estimator can offer comparable sum spectral efficiency (SE) with the state-of-the-art (SOTA) because the performance is limited by the channel estimation error.
The rest of this paper is organized as follows. Section II presents the system model, preliminaries and problem statement.
Section III presents the Hermite expansion model. Section IV presents the LMMSE analysis. Section V presents the simulation results,
and finally Section VI draws the conclusion.
\subsubsection*{Notations}
Regular letter, lower-case bold letter, and capital bold letter represent scalar, vector, and matrix, respectively.
$\Re(\cdot)$ and $\Im(\cdot)$ represent the real and imaginary parts of a complex number, respectively.
The notations $[\cdot]^T$, $[\cdot]^H$, $[\cdot]^*$, $[\cdot]^{-1}$, $\left \| \cdot \right \|$, $\mathrm{trace}(\cdot)$ and
$\mathbb{D}(\cdot)$ represent the transpose, Hermitian, conjugate, inverse, Euclidean norm, trace and a matrix formed by the diagonal of a matrix
(a vector or a scalar if appropriate), respectively. $\mathbb{E}\left [ \cdot \right ]$ denotes the expectation, $\mathbf{I}$ denotes the identity matrix,
and $\otimes$ denotes the Kronecker product.
\section{System Model, Preliminaries and\\ Problem Statement}\label{sec2}
This section introduces the mathematical model of the uplink MIMO signal reception with low-resolution quantizers. This is then followed by
a review of current linear approximation models as well as their related LMMSE channel equalizers. This review is important in the sense that it can
help to understand the SOTA as well as their differences from the SOHE model. It is perhaps worth noting that we do not put an
emphasis on the mMIMO system mainly to keep our work as generic as possible.
\subsection{System Model}\label{sec2a}
Similar to many other works in the SOTA analysis (e.g. \cite{7307134, 7876856, 7896590, 7420605}), we also consider a narrowband FD-mMIMO network, where a set of single-antenna transmitters $(N)$ simultaneously send their messages to
a receiver having a large number of receive antennas $(K)$. Denote $s_n$ to be the information-bearing symbol sent by the $n^\mathrm{th}$ transmitter ($n=0,...,N-1$). It is commonly assumed that $s_n$ is drawn from a finite alphabet-set with equal probability and fulfills:
$\mathbb{E}(s_n)=0$, $\mathbb{E}(s_ns_n^*)=1$, $\mathbb{E}(s_ns_m^*)=0$, $_{\forall n\neq m}$.
With the ideal quantization, the received discrete-time signal at the baseband ($\mathbf{r}$) is expressible as
\begin{equation}\label{eqn002}
\mathbf{r}=\sum_{n=0}^{N-1}\mathbf{h}_ns_n+\mathbf{v},
\end{equation}
where $\mathbf{h}_n\in\mathbb{C}^{K\times1}$ is the channel vector corresponding to the $n^\mathrm{th}$ transmitter to the receiver link,
and $\mathbf{v}\in\mathbb{C}^{K\times1}$ is the white Gaussian thermal noise with zero mean and auto-covariance $N_0\mathbf{I}$.
Define $\mathbf{H}\triangleq[\mathbf{h}_0, ..., \mathbf{h}_{N-1}]$ and
$\mathbf{s}\triangleq[s_0,...,s_{N-1}]^T$. The linear model \eqref{eqn002} can be rewritten into the following matrix form
\begin{equation}\label{eqn003}
\mathbf{r}=\mathbf{H}\mathbf{s}+\mathbf{v}.
\end{equation}
Feeding $\mathbf{r}$ into the $b$-bit low-resolution quantizer results in
\begin{equation}\label{eqn004}
\mathbf{y}=\mathcal{Q}_b(\Re(\mathbf{r}))+j\mathcal{Q}_b(\Im(\mathbf{r})),
\end{equation}
where the quantization is individually performed in the real and imaginary domains.
To reconstruct the signal block $\mathbf{s}$ at the receiver (i.e., the signal detection), the channel knowledge $\mathbf{H}$ is usually assumed
in the literature (e.g. \cite{5592653, 8320852, 8610159, 7155570}). There are also quite a few published works discussing
the channel estimation as well as the signal
reconstruction based upon various channel knowledge imperfections (e.g. \cite{7439790, 7355388, 7247358, 5501995, 708938}).
Those are indeed very interesting research issues. However,
in order to make our work well focused on the signal reconstruction, we assume the availability of $\mathbf{H}$ throughout the paper
and describe the signal reconstruction procedure as the following input-output relationship
\begin{equation}\label{eqn005}
\hat{\mathbf{s}}=g(\mathbf{y}, \mathbf{H}),
\end{equation}
where $\hat{\mathbf{s}}$ is the reconstructed version of $\mathbf{s}$. In the following contents, our discussion will be focused on
the linear approximation models and LMMSE analysis. Optimum and near-optimum approaches are not relevant to our discussion
and therefore skipped.
\subsection{Linear Approximation Models and LMMSE Analysis}\label{sec2b}
Our SOTA analysis shows that there are mainly three linear models to approximate the non-linear model \eqref{eqn004}, and they can
lead to different LMMSE formulas.
\subsubsection{The AQNM Model}\label{sec2b1}
This model can be mathematically described by (see \cite{mezghani11, 5351659})
\begin{equation}\label{eqn006}
\mathbf{y}\approx\mathbf{z}_A\triangleq\mathbf{r}+\mathbf{q}_A.
\end{equation}
There are two assumptions for the AQNM model:
\begin{itemize}
\item[A1)] The quantization distortion $\mathbf{q}_A$ is AWGN;
\item[A2)] $\mathbf{q}_A$ is correlated with the input signal $\mathbf{r}$.
\end{itemize}
With this linear approximation model, the LMMSE channel equalizer ($\mathbf{G}^\star$) can be obtained by solving the following MMSE objective function
\begin{IEEEeqnarray}{ll}
\mathbf{G}^\star&=\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{y}\|^2,\label{eqn007}\\
&\approx\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{z}_A\|^2\label{eqn008}.
\end{IEEEeqnarray}
The solution to \eqref{eqn008} is provided in \cite{mezghani11}, i.e.,
\begin{equation}\label{eqn009}
\mathbf{G}^\star=\mathbf{H}^H(N_0\mathbf{I}+\mathbf{HH}^H+\mathrm{nondiag}(\rho_b\mathbf{HH}^H))^{-1},
\end{equation}
where $\rho_b$ is the distortion factor indicating the relative amount of quantization noise
generated, and it is a function of $b$; see the specific discussion in the relevant literature \cite{1057548, 6891254, 7106472}.
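For concreteness, a minimal sketch of this equalizer is given below. The simulation choices (i.i.d. Rayleigh channel, QPSK symbols, $1$-bit quantization with the commonly used distortion factor $\rho_b=1-2/\pi$ for $b=1$) are assumptions made only for illustration and are not prescribed by \eqref{eqn009} itself:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, N, N0 = 64, 8, 0.1
rho_b = 1 - 2/np.pi                        # distortion factor for b = 1 bit
H = (rng.standard_normal((K, N)) + 1j*rng.standard_normal((K, N)))/np.sqrt(2)
s = (rng.choice([-1, 1], N) + 1j*rng.choice([-1, 1], N))/np.sqrt(2)  # QPSK
v = np.sqrt(N0/2)*(rng.standard_normal(K) + 1j*rng.standard_normal(K))
r = H @ s + v
y = (np.sign(r.real) + 1j*np.sign(r.imag))/np.sqrt(2)   # 1-bit quantization

HH = H @ H.conj().T
nondiag = HH - np.diag(np.diag(HH))                      # nondiag(.)
G = H.conj().T @ np.linalg.inv(N0*np.eye(K) + HH + rho_b*nondiag)
s_hat = G @ y                                            # symbol estimates
print(np.round(s_hat[:4], 3))
\end{verbatim}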
\subsubsection{The Modified-AQNM Model}\label{sec2b2}
The mathematical form of this linear model is given by (see \cite{Mezghani2012}):
\begin{equation}\label{eqn010}
\mathbf{y}\approx\mathbf{z}_B\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{r}+\mathbf{q}_B,
\end{equation}
where $\mathbf{C}_{yr}$ is the cross-covariance matrix between $\mathbf{y}$ and $\mathbf{r}$, $\mathbf{C}_{rr}$ is the
auto-covariance matrix of $\mathbf{r}$, and $\mathbf{q}_B$ is the quantization distortion. Different from the AQNM model in \eqref{eqn006},
the assumption here is:
\begin{itemize}
\item[A3)] the quantization distortion $(\mathbf{q}_B)$ is uncorrelated with the input signal $\mathbf{r}$.
\end{itemize}
Moreover, the condition A1) is not always assumed.
Define $\overline{\mathbf{H}}\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{H}$ and
$\mat{\varepsilon}\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{v}+\mathbf{q}_B$. The modified-AQNM model \eqref{eqn010}
can be represented by the following canonical form
\begin{equation}\label{eqn011}
\mathbf{z}_B=\overline{\mathbf{H}}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
The auto-covariance matrix of $\mat{\varepsilon}$ is given by \cite[(9)]{Mezghani2012},
\begin{equation}\label{eqn012}
\mathbf{C}_{\varepsilon\varepsilon}=\mathbf{C}_{yy}-\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{C}_{ry}
+N_0\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{C}_{rr}^{-1}\mathbf{C}_{ry}.
\end{equation}
This is however too complex for the LMMSE analysis.
Applying the assumption A1) onto the quantization distortion $\mathbf{q}_B$, it has been shown that the following approximation of
covariance matrices applies
\begin{equation}\label{eqn013}
\mathbf{C}_{ry}\approx(1-\rho_b)\mathbf{C}_{rr}\approx\mathbf{C}_{yr},
\end{equation}
\begin{equation}\label{eqn014}
\mathbf{C}_{\varepsilon\varepsilon}\approx(1-\rho_b)^2N_0\mathbf{I}+(1-\rho_b)\rho_b\mathbb{D}(\mathbf{C}_{rr}).
\end{equation}
Applying \eqref{eqn013} into \eqref{eqn011} results in
\begin{equation}\label{eqn015}
\mathbf{z}_B\approx(1-\rho_b)\mathbf{H}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
Then, the LMMSE objective function reads as
\begin{equation}\label{eqn016}
\mathbf{G}^\star=\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{z}_B\|^2.
\end{equation}
Solving \eqref{eqn016} results in
\begin{equation}\label{eqn017}
\mathbf{G}^\star=(1-\rho_b)^{-1}\mathbf{H}^H\Big(\mathbf{C}_{rr}+\frac{\rho_b}{1-\rho_b}\mathbb{D}(\mathbf{C}_{rr})
\Big)^{-1},
\end{equation}
where $\mathbf{C}_{rr}=\mathbf{HH}^H+N_0\mathbf{I}$. This equation seems to be different from \eqref{eqn009}. However, if
we incorporate the term $(1-\rho_b)^{-1}$ into the auto-covariance term inside the bracket, \eqref{eqn017} immediately turns
into \eqref{eqn009}. Arguably, we can consider the linear approximations \eqref{eqn006} and \eqref{eqn010} to be equivalent
when the assumption A1) is adopted.
\subsubsection{The Generalized-AQNM Model}\label{sec2b3}
The modified-AQNM model can be extended to the following generalized version (see \cite{Mezghani2012})
\begin{equation}\label{eqn018}
\mathbf{z}_C=\mathbf{\Lambda}_b\mathbf{r}+\mathbf{q}_C,
\end{equation}
where the quantization distortion $\mathbf{q}_C$ is assumed to be uncorrelated with $\mathbf{r}$ (i.e., the assumption A3),
and $\mathbf{\Lambda}_b$ is a diagonal matrix with its characteristics related to the low-resolution quantizer.
Consider the quantizer $y=\mathcal{Q}_b(x),~x\in(-\infty, \infty)$, to be a stair function with its input range being divided into $M=2^b$
sub-ranges\footnote{
The dynamic range of the sub-ranges is maintained by the automatic gain control (AGC), which aims at keeping the amplitude of the output signal $y$ substantially constant or varying only within a small range \cite{664234, 1092057}. In order to focus on the analysis of the low-resolution quantization process, the ideal AGC is assumed in this paper.}.
Define $(\tau_m, \tau_{m+1})$ to be the $m^\mathrm{th}$ sub-range. The quantizer can be represented by
\begin{equation}\label{eqn019}
\mathcal{Q}_b(x)=x_m,~x\in(\tau_m, \tau_{m+1}),~_{m=0, ..., M-1,}
\end{equation}
where in general $x_m$ can be an appropriately chosen value within the range of $(\tau_m, \tau_{m+1})$ depending on the design
specification \cite{1057548,Liu2021vtc}; and $\tau_0=-\infty$, $\tau_{M}=\infty$. Then, the diagonal matrix $\mathbf{\Lambda}_b$ is
expressed by
\begin{IEEEeqnarray}{ll}\label{eqn020}
\mathbf{\Lambda}_b&=\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}
\sum_{m=0}^{M-1}\frac{x_m}{\sqrt{\pi}}\Big(
\exp(-\tau_m^2\mathbb{D}(\mathbf{C}_{rr})^{-1})\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad-\exp(-\tau_{m+1}^2\mathbb{D}(\mathbf{C}_{rr})^{-1})\Big).
\end{IEEEeqnarray}
Generally, the analysis of $\mathbf{C}_{\varepsilon\varepsilon}$ is highly complex, and it does not result in a closed-form solution.
Specifically for the special case of symmetric $1$-bit quantization, the assumption of Gaussian quantization noise is not suitable.
Using the Bussgang's theorem, the approximations \eqref{eqn013}-\eqref{eqn014} can now be replaced by the
exact forms (a slightly alternated version from \cite{Mezghani2012, nguyen2019linear})
\begin{equation}\label{eqn021}
\mathbf{C}_{yr}=\sqrt{\frac{2}{\pi}}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr},
\end{equation}
\begin{IEEEeqnarray}{ll}\label{eqn022}
\mathbf{C}_{\varepsilon\varepsilon}=&\frac{2}{\pi}\Big[\arcsin\Big(\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\Big)-\nonumber\\
&\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}+
N_0\mathbb{D}(\mathbf{C}_{rr})^{-1}\Big].
\end{IEEEeqnarray}
Applying the above results for the LMMSE analysis leads to
\begin{equation}\label{eqn023}
\mathbf{z}_B=\sqrt{\frac{2}{\pi}}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{H}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
With \eqref{eqn018}-\eqref{eqn020}, it is rather trivial to obtain the following form of LMMSE
\begin{equation}\label{eqn024}
\mathbf{G}^\star=\sqrt{\frac{2}{\pi}}\mathbf{H}^H\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}
\Big(\mathbf{C}_{\varepsilon\varepsilon}+\frac{2}{\pi}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{HH}^H\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\Big)^{-1}.
\end{equation}
Here, we emphasize that \eqref{eqn024} holds only for the $1$-bit quantizer.
\subsection{Statement of The Research Problem}
Section \ref{sec2b} has already shown intensive research and appealing contributions on linear approximation models as well as their relevant LMMSE analysis.
Nevertheless, there is still a need for a more extensive and rigorous study on this issue, which can make the linear-approximation
research more comprehensive and accurate. Moreover, such a study could help to develop a novel understanding of the behavior of
the LMMSE channel equalizer in the context of low-resolution MIMO signal reception. This motivates the work presented in the following sections.
\section{Hermite Polynomial Expansion for Linear Approximation}\label{sec3}
This section presents the Hermite polynomial expansion of the low-resolution quantization function as well as key characteristics of the
SOHE model.
\subsection{Hermite Polynomial Expansion and The SOHE Model}
We start from Laplace's Hermite polynomial expansion (see the definition in \cite[Chapter 22]{Poularikas_1999}), which is employed to
represent the quantization function $y=\mathcal{Q}_b(x),~x\in(-\infty, \infty)$. The Hermite transform of $\mathcal{Q}_b(x)$ is given by (see
\cite{60086})
\begin{equation}\label{eqn025}
\omega_l=\frac{1}{\sqrt{\pi}2^ll!}\int_{-\infty}^{\infty}\mathcal{Q}_b(x)\exp(-x^2)\beta_l(x)\mathrm{d}x,
\end{equation}
where $\beta_l(x)$ is the Rodrigues' formula specified by
\begin{equation}\label{eqn026}
\beta_l(x)=(-1)^l\exp(x^2)\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big].
\end{equation}
With this result, the Hermite polynomial expansion of $\mathcal{Q}_b(x)$ is given by
\begin{equation}\label{eqn027}
\mathcal{Q}_b(x)=\lim_{L\rightarrow\infty}\sum_{l=1}^{L}\omega_l\beta_l(x).
\end{equation}
The expression of $\omega_l$ can be simplified by plugging \eqref{eqn026} into \eqref{eqn025}, i.e.,
\begin{equation}\label{eqn028}
\omega_l=\frac{(-1)^l}{\sqrt{\pi}2^ll!}\int_{-\infty}^{\infty}\mathcal{Q}_b(x)\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big]\mathrm{d}x.
\end{equation}
Applying \eqref{eqn019} into \eqref{eqn028} results in
\begin{equation}\label{eqn029}
\omega_l=\frac{(-1)^l}{\sqrt{\pi}2^ll!}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}
\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big]\mathrm{d}x.
\end{equation}
The SOHE model is based on the second-order Hermite expansion as below (i.e., $L=2$ in \eqref{eqn027})
\begin{IEEEeqnarray}{ll}\label{eqn030}
\mathcal{Q}_b(x)&=\sum_{l=1}^{2}\omega_l\beta_l(x)+O(\omega_3\beta_3(x)),\\
&=\lambda_bx+q_b(x),\label{eqn031}
\end{IEEEeqnarray}
where $\lambda_b$ is the coefficient corresponding to the first-order Hermite kernel, and $q_b$ is the second-order
approximation of the quantization noise. Their mathematical forms are specified by
\begin{equation}\label{eqn032}
\lambda_b=2\omega_1,
\end{equation}
\begin{equation}\label{eqn033}
q_b(x)=4\omega_2x^2-2\omega_2+O(\omega_3\beta_3(x)).
\end{equation}
The derivation from \eqref{eqn030} to \eqref{eqn031} is by means of computing \eqref{eqn026} for $l=1,2$, which gives
$\beta_1(x)=2x$ and $\beta_2(x)=4x^2-2$; the remaining algebra is rather trivial and thus omitted.
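To make the coefficients concrete, the Python sketch below evaluates $\omega_1$ and $\omega_2$ of \eqref{eqn029} in closed form, using the fact that the integral of the $l$-th derivative of $\exp(-x^2)$ over a sub-range equals the difference of its $(l-1)$-th derivative at the boundaries; the uniform mid-rise quantizer and step size are assumed choices made only for illustration.
\begin{verbatim}
import numpy as np

def hermite_coeffs(levels, thresholds):
    # omega_1 and omega_2 of eq. (29); boundary terms at +/- infinity vanish.
    tau = np.concatenate(([-np.inf], thresholds, [np.inf]))
    g0 = lambda t: np.exp(-t**2) if np.isfinite(t) else 0.0       # exp(-x^2)
    g1 = lambda t: -2*t*np.exp(-t**2) if np.isfinite(t) else 0.0  # its derivative
    w1 = w2 = 0.0
    for m, x_m in enumerate(levels):
        w1 += -x_m / (2*np.sqrt(np.pi)) * (g0(tau[m+1]) - g0(tau[m]))
        w2 +=  x_m / (8*np.sqrt(np.pi)) * (g1(tau[m+1]) - g1(tau[m]))
    return w1, w2

for b in (1, 2, 3):                        # assumed mid-rise design, step 0.5
    M, step = 2**b, 0.5
    thr = step*(np.arange(1, M) - M/2)
    lev = step*(np.arange(M) - M/2 + 0.5)
    w1, w2 = hermite_coeffs(lev, thr)
    print(b, 2*w1, w2)                     # lambda_b = 2*omega_1, cf. eq. (32)
\end{verbatim}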
{\em Remark 1:}
The SOHE model in \eqref{eqn031} is certainly not accurate enough to capture the exact characteristics of the low-resolution quantizer.
This is also true for all other existing linear approximation models. An accurate Hermite model often requires $L=100$ or more, which is however
too complex for an analytical study. Nevertheless, we will show that the SOHE model can already reflect key characteristics of the low-resolution
quantizer.
\subsection{The Scalar-SOHE Model and Characteristics}\label{3b}
The SOHE model is a linear approximation of the low-resolution quantizer, and thus it is not very different from other existing linear models
when judged solely by its mathematical form. On the other hand, the key parameters of the SOHE model (i.e., $\lambda_b$ and $q_b(x)$) show
characteristics different from those of the other models.
{\em 1)} Characteristics of the Hermite coefficient $\lambda_b$ can be summarized by the following statement.
\begin{thm}\label{thm01}
Consider the case of symmetric $b$-bit quantization with the following setup in \eqref{eqn029}
\begin{equation}\label{eqn034}
x_m=\left\{\begin{array}{ll}
\tau_{m+1},&\tau_{m+1}>0\\
\tau_m,&\tau_{m+1}<0
\end{array}\right.
\end{equation}
The Hermite coefficient $\lambda_b$
has the following properties:
\begin{equation}\label{eqn035}
\lambda_b\geq 1~\mathrm{and}~\lim_{b\rightarrow\infty}\lambda_b=1.
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{A}.
\end{IEEEproof}
With the ideal AGC, we assume that the input and output signals can be optimally scaled to meet the quantization boundaries.
{\em Theorem \ref{thm01}} provides two implications: {\em 1)} low-resolution quantizers can introduce a scalar ambiguity $\lambda_b$,
which often amplifies the input signal in the digital domain. The principle of how the signal is amplified is analytically explained in
Appendix \ref{A}; {\em 2)} in the SOHE model, the scalar ambiguity vanishes with the increase of resolution ($b$ or $M$). This is in line
with the phenomenon that can be observed in reality. In other words, the SOHE model as well as the proof in Appendix \ref{A} can well
explain the phenomenon of scalar ambiguity observed in practice.
{\em 2)} Unlike other linear approximation models, the SOHE model does not impose the assumptions A1) and A2) (see Section \ref{sec2b})
on the quantization noise $q_b$. Instead, $q_b$ is described as a function of the input signal $x$, with their statistical behaviors being
analytically studied here.
\begin{thm}\label{thm02}
Suppose: C1) the input signal $x$ satisfies $\mathbb{E}(x)=0$. The cross-correlation between $x$ and $q_b$ depends on the third-order central moment of $x$.
When the input signal $x$ is AWGN, the quantization noise can be considered to be uncorrelated with the input signal. Moreover, for the
case of $b\rightarrow\infty$, the following result holds
\begin{equation}\label{eqn036}
\lim_{b\rightarrow\infty}q_b(x)=0.
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{B}.
\end{IEEEproof}
The implication of {\em Theorem \ref{thm02}} is twofold: {\em 1)} the quantization noise cannot be easily assumed to be uncorrelated with the input signal; {\em Theorem \ref{thm02}} provides sufficient conditions for the hypothesis of uncorrelated quantization noise. {\em 2)} Due to the use of the second-order expansion of the quantization function, it is possible that the SOHE-based quantization noise cannot fully represent the characteristics of ideal quantization such as \eqref{eqn036}. However, {\em Theorem \ref{thm02}} confirms that, as the resolution increases, the quantization noise, which is a function of the input signal, approaches zero.
{\em Remark 2:}
It is worthwhile to note that, for complex-valued signals, the quantization process is applied individually in the real and imaginary domains.
Therefore, {\em Theorems \ref{thm01}-\ref{thm02}} apply straightforwardly to the complex-valued input signal.
\subsection{The Vector-SOHE Model and Characteristics}
The vector representation of the SOHE model has no fundamental difference from the scalar-SOHE model presented
in \eqref{eqn031}. It can be obtained by applying \eqref{eqn031} into \eqref{eqn004}
\begin{IEEEeqnarray}{ll}\label{eqn037}
\mathbf{y}&=\lambda_b\mathbf{r}+\mathbf{q}_b,\\
&=\lambda_b\mathbf{H}\mathbf{s}+\underbrace{\lambda_b\mathbf{v}+\mathbf{q}_b}_{\triangleq\mat{\varepsilon}_b}.\label{eqn038}
\end{IEEEeqnarray}
The vector form of the quantization noise is specified by
\begin{equation}\label{eqn039}
\mathbf{q}_b=4\omega_2\Big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\Big)-2\omega_2,
\end{equation}
where $\Re(\mathbf{r})^2$ and $\Im(\mathbf{r})^2$ denote the element-wise (Hadamard) squares of the corresponding real vectors.
With {\em Theorem \ref{thm02}}, we can reach the following conclusion about the vector-SOHE model.
\begin{cor}\label{cor1}
Suppose that C2) each element of $\mathbf{H}$ is independently
generated; and C3) the number of transmit antennas ($N$) is sufficiently large. The following cross-covariance matrix can be obtained
\begin{equation}\label{eqn040}
\mathbf{C}_{qv}=\mathbb{E}(\mathbf{q}_b\mathbf{v}^H)=\mathbf{0}.
\end{equation}
\end{cor}
\begin{IEEEproof}
The condition C2) ensures that each element of the vector $[\mathbf{Hs}]$ is a sum of $N$ independently generated random variables.
With the condition C3), the central limit theorem tells us that each element of $[\mathbf{Hs}]$ is
asymptotically AWGN. Since the thermal noise $\mathbf{v}$ is AWGN and independent from $[\mathbf{Hs}]$,
the received signal $\mathbf{r}$ is approximately AWGN. In this case, {\em Theorem \ref{thm02}} tells us
\begin{equation}\label{eqn041}
\mathbf{C}_{qr}=\mathbb{E}(\mathbf{q}_b\mathbf{r}^H)=\mathbf{0}.
\end{equation}
Plugging \eqref{eqn003} into \eqref{eqn041} results in
\begin{IEEEeqnarray}{ll}\label{eqn042}
\mathbf{C}_{qr}&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs}+\mathbf{v})^H),\\
&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs})^H)+\mathbf{C}_{qv}=\mathbf{0}.\label{eqn043}
\end{IEEEeqnarray}
Since $\mathbf{v}$ is independent of $[\mathbf{Hs}]$, the only way for \eqref{eqn043} to hold is that both cross-covariance terms are zero.
\eqref{eqn040} is therefore proved.
\end{IEEEproof}
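A quick numerical check of {\em Corollary \ref{cor1}} is sketched below: drawing $\mathbf{r}$ directly from the complex Gaussian distribution assumed in the asymptotic regime, the sample estimate of $\mathbf{C}_{qr}$ is close to zero. The value of $\omega_2$ is an arbitrary placeholder, since only the correlation structure matters here.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

K, T, w2, var_r = 8, 200000, 0.1, 1.0
# r drawn as the asymptotically Gaussian received signal of Corollary 1
r = np.sqrt(var_r/2) * (rng.standard_normal((K, T))
                        + 1j*rng.standard_normal((K, T)))
# SOHE quantization noise of eq. (39), applied column-wise
q = 4*w2*(r.real**2 + 1j*r.imag**2) - 2*w2
C_qr = (q @ r.conj().T) / T      # sample estimate of E[q r^H]
print(np.abs(C_qr).max())        # close to zero
\end{verbatim}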
\begin{cor}\label{cor2}
Given the conditions C2) and C3), the auto-covariance matrix of the quantization noise ($\mathbf{C}_{qq}$) has the following
asymptotic form
\begin{equation}\label{eqn044}
\mathbf{C}_{qq}=4\omega_2^2\Big(4\sigma_r^4\mathbf{I}+(2\sigma_r^4-\sigma_r^2+1)(\mathbf{1}\otimes\mathbf{1}^T)\Big),
\end{equation}
where $\sigma_{r}^2$ denotes the variance of $r_k,~\forall k$, when $N\rightarrow\infty$.
\end{cor}
\begin{IEEEproof}
See Appendix \ref{C}.
\end{IEEEproof}
\begin{thm}\label{thm03}
Suppose that C4) the information-bearing symbols $s_n,~\forall n$, have their third-order central moments fulfilling the condition:
$\mathbb{E}(\Re(s_n)^3)=0$; $\mathbb{E}(\Im(s_n)^3)=0$. Then, the following cross-covariance holds
\begin{equation}\label{eqn045}
\mathbf{C}_{\varepsilon s}=\mathbb{E}(\mat{\varepsilon}_b\mathbf{s}^H)=\mathbf{0}.
\end{equation}
\end{thm}
\begin{IEEEproof}
The cross-covariance in \eqref{eqn045} can be computed as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{\varepsilon s}&=\mathbb{E}((\lambda_b\mathbf{v}+\mathbf{q}_b)\mathbf{s}^H),\label{eqn046}\\
&=\lambda_b\mathbf{C}_{vs}+\mathbb{E}(\mathbf{q}_b\mathbf{s}^H),\label{eqn047}\\
&=\mathbf{C}_{qs}\label{eqn048}.
\end{IEEEeqnarray}
The derivation from \eqref{eqn047} to \eqref{eqn048} is due to the mutual independence between $\mathbf{s}$ and $\mathbf{v}$.
Appendix \ref{D} shows
\begin{equation}\label{eqn049}
\mathbf{C}_{qs}=\mathbf{0}.
\end{equation}
The result \eqref{eqn045} is therefore proved.
It is perhaps worthwhile to note that, in wireless communications,
$s_n$ is normally drawn from a centrosymmetric constellation (such as M-PSK or M-QAM) with equiprobable symbols. In this case, it is not hard to see that the
condition C4) does hold in reality.
\end{IEEEproof}
In summary, {\em Corollary \ref{cor1}} establishes the conditions for the quantization noise to be uncorrelated with the thermal noise as well as
the noiseless part of the received signal. The condition C3) indicates the need for a sufficiently large number of transmit antennas ($N$). However,
this does not necessarily require a very large $N$ in practice. Take the example of $N=8$: each element of $\mathbf{r}$ is a
superposition of $2N=16$ independently generated real random variables, and this can already lead to a reasonable asymptotic result.
{\em Corollary \ref{cor2}} exhibits the auto-covariance matrix of $\mathbf{q}_b$, which is an asymptotic result for $N\rightarrow\infty$.
The exact form of $\mathbf{C}_{qq}$ is very tedious and does not admit a closed form. Nevertheless, \eqref{eqn044} already provides
sufficient physical essence for us to conduct the LMMSE analysis.
Finally, {\em Theorem \ref{thm03}} shows that the quantization noise is uncorrelated with the information-bearing symbols. All of
these results are useful tools to our LMMSE analysis in Section \ref{sec4}.
\section{LMMSE Analysis with The Vector-SOHE Model}\label{sec4}
The primary aim of this section is to employ the vector-SOHE model \eqref{eqn037}-\eqref{eqn038} to conduct the LMMSE analysis, with which the interesting phenomena observed in the current LMMSE algorithm can be well explained. In addition, a better understanding of the behavior of the current LMMSE algorithm helps us find an enhanced version, particularly for signals with non-constant-modulus modulations.
\subsection{The SOHE-Based LMMSE Analysis}\label{sec4a}
Vector-SOHE is still a linear model. It does not change the classical form of the LMMSE, i.e., $\mathbf{G}^\star=\mathbf{C}_{sy}\mathbf{C}_{yy}^{-1}$ still holds. Nevertheless, the cross-covariance matrix $\mathbf{C}_{sy}$ can now be computed by
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{sy}&=\mathbb{E}\Big(\mathbf{s}(\lambda_b\mathbf{H}\mathbf{s}+\mat{\varepsilon}_b)^H\Big),\label{eqn050}\\
&=\lambda_b\mathbf{C}_{ss}\mathbf{H}^H+\mathbf{C}_{s\varepsilon},\label{eqn051}\\
&=\lambda_b\mathbf{H}^H.\label{eqn052}
\end{IEEEeqnarray}
The derivation from \eqref{eqn051} to \eqref{eqn052} is due to the fact $\mathbf{C}_{s\varepsilon}=\mathbf{0}$ (see {\em Theorem \ref{thm03}}) as well as the assumption that $x_n, \forall n,$ are uncorrelated with respect to $n$ (see the assumption above \eqref{eqn002}).
The auto-covariance matrix $\mathbf{C}_{yy}$ can be represented by
\begin{equation}
\mathbf{C}_{yy}=\lambda_b^2\mathbf{HH}^H+\mathbf{C}_{\varepsilon\varepsilon},\label{eqn053}
\end{equation}
where
\begin{IEEEeqnarray}{ll}\label{eqn054}
\mathbf{C}_{\varepsilon\varepsilon}&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+\lambda_b(\mathbf{C}_{qv}+\mathbf{C}_{vq}),\\
&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+2\lambda_b\Re(\mathbf{C}_{qv}).\label{eqn055}
\end{IEEEeqnarray}
Then, the LMMSE formula can be represented by
\begin{equation}\label{eqn056}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+\lambda_b^{-2}\mathbf{C}_{\varepsilon\varepsilon})^{-1}.
\end{equation}
Provided the conditions C2) and C3), \eqref{eqn056} turns into
(see {\em Corollary \ref{cor1}})
\begin{equation}\label{eqn057}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+N_0\mathbf{I}+\lambda_b^{-2}\mathbf{C}_{qq})^{-1},
\end{equation}
where $\mathbf{C}_{qq}$ can be substituted by \eqref{eqn044} in {\em Corollary \ref{cor2}}.
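As an illustration of how \eqref{eqn044} and \eqref{eqn057} fit together, the following sketch assembles the SOHE-LMMSE matrix for a given channel, noise level and Hermite coefficients; all inputs are assumed to be available from the preceding analysis.
\begin{verbatim}
import numpy as np

def sohe_lmmse(H, N0, lam_b, w2, var_r):
    # Eq. (57) with C_qq substituted from eq. (44); H is K x N.
    K = H.shape[0]
    ones = np.ones((K, K))
    C_qq = 4*w2**2 * (4*var_r**2*np.eye(K)
                      + (2*var_r**2 - var_r + 1)*ones)
    A = H @ H.conj().T + N0*np.eye(K) + C_qq/lam_b**2
    return (1/lam_b) * H.conj().T @ np.linalg.inv(A)
\end{verbatim}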
\subsection{Comparison between Various LMMSE Formulas}\label{sec4b}
Given that the generalized-AQNM model (see Section \ref{sec2b3}) was only studied for the $1$-bit quantizer, we mainly conduct
the LMMSE comparison between the SOHE model and the (modified) AQNM model. As shown in Section \ref{sec2b2},
the modified-AQNM model does not give a different LMMSE formula from the AQNM model when Gaussian quantization noise is assumed.
Therefore, our study focuses on the comparison with the AQNM model.
Basically, there are two major differences in their LMMSE forms:
{\em 1)} The SOHE-LMMSE formula has a scaling factor $\lambda_b^{-1}$, which plays the role of equalizing the scalar ambiguity
inherent in the SOHE model (see \eqref{eqn037}-\eqref{eqn038}). As shown in {\em Theorem \ref{thm01}}, this scalar ambiguity is introduced
by the low-resolution quantization procedure. It amplifies the signal energy in the digital domain and vanishes as the resolution increases.
This theoretical conclusion coincides well with the phenomenon observed in the literature (e.g., \cite{nguyen2019linear,9144509}).
{\em 2)} In the AQNM-LMMSE formula \eqref{eqn009}, the impact of the quantization noise is described by the term
$\mathrm{nondiag}(\rho\mathbf{HH}^H)$. This implies that the quantization noise is modeled as a linear distortion.
However, such is not the case for the SOHE-LMMSE formula. As shown in \eqref{eqn044} and \eqref{eqn057}, the auto-covariance matrix
$\mathbf{C}_{qq}$ involves the terms $\sigma_r^2$ and $\sigma_r^4$, while higher-order components are only approximated in the SOHE model.
Although \eqref{eqn044} is only an asymptotic and approximate result, it carries a good implication in the sense that the quantization noise
introduces non-linear effects to the LMMSE. Due to this modeling mismatch, the AQNM-LMMSE algorithm can suffer additional
performance degradation.
Denote $\mathbf{G}^\star_{\eqref{eqn009}}$ and $\mathbf{G}^\star_{\eqref{eqn057}}$ to be the LMMSE formulas corresponding to
the AQNM and SOHE models, respectively. Section \ref{sec2a} indicates that they share the same size, i.e., $N\times K$.
Assuming that $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, we are able to find an $N\times N$ matrix $\mathbf{\Theta}$
fulfilling
\begin{equation}\label{eqn058}
\mathbf{\Theta}\mathbf{G}^\star_{\eqref{eqn009}}=\mathbf{G}^\star_{\eqref{eqn057}}.
\end{equation}
Denote $(\mathbf{G}^\star_{\eqref{eqn009}})^\dagger$ to be the pseudo inverse of $\mathbf{G}^\star_{\eqref{eqn009}}$.
The matrix $\mathbf{\Theta}$ can be obtained through
\begin{equation}\label{eqn059}
\mathbf{\Theta}=\mathbf{G}^\star_{\eqref{eqn057}}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\Big)^\dagger.
\end{equation}
Therefore, provided that the matrix $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, the modeling-mismatch-induced
performance degradation inherent in the AQNM-LMMSE algorithm can be mitigated through the linear transform specified in
\eqref{eqn058}, where the scaling factor $\lambda_b$ is incorporated in the matrix $\mathbf{\Theta}$.
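In a numerical-linear-algebra sense, $\mathbf{\Theta}$ in \eqref{eqn059} can be formed with a pseudo-inverse, as in the following one-line sketch; both LMMSE matrices are assumed to be available as $N\times K$ arrays.
\begin{verbatim}
import numpy as np

def mismatch_transform(G_sohe, G_aqnm):
    # Eq. (59): Theta = G*_(57) (G*_(9))^dagger
    return G_sohe @ np.linalg.pinv(G_aqnm)
\end{verbatim}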
\subsection{Enhancement of The AQNM-LMMSE Algorithm}
The SOHE-LMMSE formula describes more explicitly the impact of non-linear distortion in the channel equalization.
However, the SOHE-LMMSE formula cannot be directly employed for the channel equalization mainly due to two reasons:
{\em 1)} the auto-covariance matrix $\mathbf{C}_{qq}$ does not have a closed-form in general; and {\em 2)} the scalar
$\lambda_b$ defined in \eqref{eqn032} comes only from the first-order Hermite kernel, whereas the other odd-order Hermite
kernels also contribute to the linear term. The omission of the third- and higher-order Hermite kernels can therefore make the computation of
$\lambda_b$ inaccurate. Fortunately, the analysis in \eqref{eqn058} and \eqref{eqn059} shows that the SOHE-LMMSE formula can be translated into
the AQNM-LMMSE formula through a linear transform. In other words, there is a potential to enhance the AQNM-LMMSE algorithm
by identifying the linear transform $\mathbf{\Theta}$.
Denote $\hat{\mathbf{s}}_{\eqref{eqn057}}\triangleq\mathbf{G}^\star_{\eqref{eqn057}}\mathbf{y}$
and $\hat{\mathbf{s}}_{\eqref{eqn009}}\triangleq\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}$ to be the outputs of the SOHE-LMMSE
channel equalizer and the AQNM-LMMSE channel equalizer, respectively. Applying the result \eqref{eqn058}-\eqref{eqn059} yields
\begin{equation}\label{eqn060}
\hat{\mathbf{s}}_{\eqref{eqn009}}=\mathbf{\Theta}^{-1}\hat{\mathbf{s}}_{\eqref{eqn057}}.
\end{equation}
Generally, it is not easy to identify $\mathbf{\Theta}$ and remove it from $\hat{\mathbf{s}}_{\eqref{eqn009}}$. On the other hand,
if $\mathbf{G}^\star_{\eqref{eqn057}}$ and $\mathbf{G}^\star_{\eqref{eqn009}}$ are not too different, \eqref{eqn059} implies that
$\mathbf{\Theta}$ can be considered to be approximately diagonal. In this case, the linear transform reduces to symbol-level scalar ambiguities.
Assume that the channel-equalized result $\hat{\mathbf{s}}_{\eqref{eqn057}}$ does not have such scalar ambiguities. It is then easy to see that
the scalar ambiguities of $\hat{\mathbf{s}}_{\eqref{eqn009}}$ come from $\lambda_b\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}$. In other
words, we can have the following approximation
\begin{equation}\label{eqn061}
\mathbf{\Theta}^{-1}\approx\lambda_b\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big).
\end{equation}
In \eqref{eqn061}, $\lambda_b$ is the only unknown that must be determined. {\em Theorem \ref{thm01}} shows that
the effect of $\lambda_b$ is a block-level energy amplification, of which the value can be computed using \eqref{appa6}. Finally, we conclude the following form of the enhanced LMMSE channel equalizer (e-LMMSE)
\begin{equation}\label{eqn063}
\mathbf{G}_e=\frac{1}{\lambda_b}\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big)^{-1}\mathbf{G}^\star_{\eqref{eqn009}}.
\end{equation}
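The construction in \eqref{eqn063} amounts to a per-stream rescaling of the AQNM-LMMSE equalizer, as the following sketch makes explicit; $\lambda_b$ is assumed to be computed from \eqref{appa6}.
\begin{verbatim}
import numpy as np

def e_lmmse(G_aqnm, H, lam_b):
    # Eq. (63): undo the diagonal of (G* H) and the Hermite scaling lambda_b.
    D = np.diag(np.diag(G_aqnm @ H))
    return (1/lam_b) * np.linalg.inv(D) @ G_aqnm
\end{verbatim}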
\section{Simulation Results and Discussion}\label{sec5}
Computer simulations were carried out to corroborate our theoretical work in Section \ref{sec3} and Section \ref{sec4}.
Similar to the AQNM models, the SOHE model cannot be directly evaluated through computer simulations.
Nevertheless, their features can be indirectly demonstrated through the evaluation of their corresponding LMMSE channel equalizers.
Given various LMMSE channel equalizers discussed in Section \ref{sec2} and Section \ref{sec4}, it is perhaps useful to provide a brief summary here for the sake of clarification:
\begin{itemize}
\item AQNM-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn009}.
As shown in Section \ref{sec2b2}, the LMMSE channel equalizer \eqref{eqn017} is equivalent to \eqref{eqn009}; and thus it is not demonstrated in our simulation results.
\item B-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn024}. This channel equalizer is specially designed and optimized for the $1$-bit quantizer. Therefore, it will only be demonstrated in our simulation results for the $1$-bit quantizer.
\item N-LMMSE: this is the AQNM-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$.
\item NB-LMMSE: this is the B-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn024}}\mathbf{y}\|$.
Both the N-LMMSE and NB-LMMSE channel equalizers have been studied in \cite{7439790,nguyen2019linear,tsefunda}.
\item e-LMMSE: this is the e-LMMSE channel equalizer proposed in \eqref{eqn063}. As shown in Section \ref{sec4}, this e-LMMSE channel equalizer is driven by the SOHE model.
\end{itemize}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.25]{1bit_MSE_comparisons_dB.eps}
\caption{
The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $1$-bit quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig01}
\end{figure}
In our computer simulations, the e-LMMSE channel equalizer is compared to the state-of-the-art (SOTA) approaches (i.e., AQNM-LMMSE, B-LMMSE, N-LMMSE and NB-LMMSE) in terms of their MSE as well as bit-error-rate (BER) performances. The MSE is defined by
\begin{equation}\label{eqn064}
\mathrm{MSE}\triangleq\frac{1}{NI}\sum_{i=0}^{I-1}\|\mathbf{G}_i^\star\mathbf{y}_i-\mathbf{s}_i\|^2,
\end{equation}
where $I$ denotes the number of Monte Carlo trials.
All the simulation results were obtained by averaging over a sufficient number of Monte Carlo trials. For each trial, the wireless narrowband MIMO channel was generated according to an independent complex Gaussian distribution (Rayleigh in amplitude), which is the commonly used simulation setup in the literature \cite{7458830, 6987288}. In addition, the signal-to-noise ratio (SNR) is defined as the ratio of the average received bit energy per receive antenna to the noise power density (Eb/N0), and the transmit power for every transmit antenna is set to be identical. The low-resolution quantization process follows the design in \cite{7037311}: for the $1$-bit quantizer, binary quantization is used; for quantizers with more than one bit (i.e., $2$- and $3$-bit), the ideal AGC is assumed and the quantization is determined by the quantization steps \cite{1057548}.
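For reproducibility, a minimal Monte Carlo scaffold in the spirit of \eqref{eqn064} is sketched below. The mid-rise quantizer, the Eb/N0-to-$N_0$ mapping and the use of the conventional LMMSE matrix as a placeholder equalizer are all illustrative assumptions; the equalizers of Section \ref{sec2} and Section \ref{sec4} can be plugged in instead.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def quantize(x, b, step=0.5):
    # Assumed symmetric quantizer applied to real/imaginary parts separately.
    if b == 1:
        return (np.sign(x.real) + 1j*np.sign(x.imag)) / np.sqrt(2)
    M = 2**b
    q = lambda u: step*(np.clip(np.floor(u/step), -M//2, M//2 - 1) + 0.5)
    return q(x.real) + 1j*q(x.imag)

def mc_mse(N=4, K=64, b=2, EbN0_dB=5.0, trials=200):
    mse = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((K, N))
             + 1j*rng.standard_normal((K, N))) / np.sqrt(2)
        s = (rng.choice([-3, -1, 1, 3], N)
             + 1j*rng.choice([-3, -1, 1, 3], N)) / np.sqrt(10)  # 16-QAM
        N0 = 10**(-EbN0_dB/10) / 4       # assumed Eb/N0-to-N0 mapping
        v = np.sqrt(N0/2)*(rng.standard_normal(K)
                           + 1j*rng.standard_normal(K))
        y = quantize(H @ s + v, b)
        G = H.conj().T @ np.linalg.inv(H @ H.conj().T + N0*np.eye(K))
        mse += np.linalg.norm(G @ y - s)**2 / N
    return mse / trials

print(mc_mse())
\end{verbatim}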
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{MSE_23bit_comparisons_dB.eps}
\caption{The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $2$- and $3$-bit quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig02}
\end{figure*}
According to the performance measures used in the computer simulations, we divide the simulation work into three experiments:
the first examines the MSE performance, the second the BER performance, and the third the sum spectral efficiency of the SOHE-based channel estimation.
In our simulation results, we demonstrate the performances mainly for $16$-QAM. This is due to two reasons:
{\em 1)} all types of LMMSE channel equalizers offer the same performances for M-PSK modulations.
This phenomenon has already been reported in the literature and also discussed in Section \ref{sec1};
and {\em 2)} higher-order QAM modulations exhibit almost the same basic features as those of $16$-QAM.
On the other hand, they perform worse than $16$-QAM due to their increased demand on the resolution of quantizers.
Those observations are not really novel and are thus omitted.
\subsubsection*{Experiment 1}\label{exp1}
The objective of this experiment is to examine the MSE performance of various LMMSE channel equalizers.
For all simulations, we keep the transmit-to-receive antenna ratio constant (i.e., $N/K=1/16$).
\figref{fig01} depicts the MSE performances of various LMMSE channel equalizers as far as the $1$-bit quantizer is concerned.
Generally, it can be observed that all the MSE performances get improved by increasing the size of MIMO.
This phenomenon is fully in line with the principle of mMIMO.
It can also be observed that both the AQNM-LMMSE and the B-LMMSE channel equalizers perform poorly throughout the whole SNR range.
This is because the AQNM models do not capture the scaling ambiguity described by the SOHE model. When the normalization operation is applied,
the AQNM-LMMSE and the B-LMMSE channel equalizers turn into their corresponding N-LMMSE and NB-LMMSE equalizers, respectively.
Interestingly, their performances get significantly improved, and they thereby outperform the e-LMMSE channel equalizer in most cases.
On one hand, this provides additional evidence that the scaling ambiguity is missing in the AQNM models; on the other hand,
it shows that the NB-LMMSE is indeed the optimized LMMSE channel equalizer for the $1$-bit quantizer.
Nevertheless, we can see that the e-LMMSE approach still offers MSE performances that are very comparable with those of the N-LMMSE and NB-LMMSE approaches.
This provides indirect evidence that the SOHE model offers a good approximation for the $1$-bit quantizer.
Then, we carry on our simulations for $2$- and $3$-bit low-resolution quantizers, respectively, and illustrate their MSE performances in \figref{fig02}.
It is perhaps worth emphasizing that the B-LMMSE and NB-LMMSE channel equalizers are not examined here since they are devised only for the $1$-bit quantizer.
The first observation is that the e-LMMSE shows the best MSE performance in almost all the demonstrated cases.
This is strong evidence supporting our theoretical work on the SOHE model as well as the SOHE-based LMMSE analysis.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{two_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 2$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=16$ receive antennas.}\label{fig03}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{four_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 4$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig04}
\end{figure*}
Going into detail, specifically for the $2$-bit quantizer, the N-LMMSE approach demonstrates very comparable performance
with the e-LMMSE approach in the case of larger MIMO (i.e., $(N/K)=(8/128)$). However, its performance degrades quickly with the decrease of
the MIMO size. Take the example of Eb/N0$=5$ dB. For the case of $(N/K)=(8/128)$,
both the e-LMMSE and the N-LMMSE approaches have their MSEs at around $-22.6$ dB,
while the AQNM-LMMSE has its MSE at around $-16.8$ dB. Both the e-LMMSE and the N-LMMSE outperform the AQNM-LMMSE by around $6$ dB.
When the size of MIMO reduces to $(N/K)=(4/64)$, the e-LMMSE shows the best MSE (around $-21.2$ dB).
The MSE for N-LMMSE and AQNM-LMMSE becomes $-18.9$ dB and $-17.7$ dB, respectively.
The N-LMMSE underperforms the e-LMMSE by around $2.3$ dB, although it still outperforms the AQNM-LMMSE by around $1.2$ dB.
By further reducing the size of MIMO to $(N/K)=(2/32)$, the e-LMMSE has its MSE performance degraded to $-19.6$ dB.
The MSE for N-LMMSE and AQNM-LMMSE now becomes $-14.9$ dB and $-17.4$ dB, respectively.
The e-LMMSE outperforms the AQNM-LMMSE by around $2.2$ dB and the N-LMMSE by around $4.7$ dB.
The major reason for this phenomenon is that the AQNM model assumes the quantization distortion and the input signal to be Gaussian.
This assumption becomes less accurate with fewer transmit antennas. Moreover, using fewer receive antennas reduces the spatial de-noising ability, so the term used for normalization is more negatively influenced by the noise as well as the quantization distortion.
The SOHE model does not assume the input signal and the quantization distortion to be Gaussian, and thus it suffers the least negative impact.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{eight_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 8$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=128$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas.}\label{fig05}
\end{figure*}
Due to the same rationale, a similar phenomenon can also be observed for the $3$-bit quantizer.
Again, the e-LMMSE approach shows the best performance in almost all the cases. Apart from that, there are two notable differences worth mentioning:
{\em 1)} the performance of the AQNM-LMMSE is quite close to that of the e-LMMSE for all sizes of MIMO. This is because the $3$-bit quantizer offers a reasonably good resolution for $16$-QAM modulations, and this largely reduces the difference between the AQNM model and the SOHE model;
and {\em 2)} the N-LMMSE performs rather poorly compared with the others. This implies the inaccuracy of using the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$ for the normalization.
Overall, the MSE evaluation confirms our theoretical work in Sections \ref{sec2}-\ref{sec4} and demonstrates the major
advantages of the SOHE model as well as the e-LMMSE channel equalizer from the MSE perspective.
\subsubsection*{Experiment 2}\label{exp2}
It is common knowledge that an MMSE-optimized approach is not necessarily optimized for the detection performance.
This motivates us to examine the average-BER performance for various LMMSE channel equalizers in this experiment.
Basically, this experiment is divided into three sub-tasks, with each having a fixed number of transmit antennas.
\figref{fig03} depicts the case of $N=2$ transmit antennas. Generally, the use of more receive antennas can largely improve the BER performance.
This conclusion is true for all types of examined low-resolution quantizers. In other words, all LMMSE channel equalizers can enjoy the
receiver-side spatial diversity.
Specifically for the $1$-bit quantizer, the AQNM-based LMMSE approaches (i.e., AQNM-LMMSE and B-LMMSE) generally underperform their corresponding normalized versions (i.e., N-LMMSE and NB-LMMSE). This phenomenon fully coincides with their MSE behaviors shown in
{\em Experiment 1} (\figref{fig01}). The e-LMMSE approach does not demonstrate remarkable advantages in this special case. It offers the best BER
around Eb/N0 $=2$ dB, and then the BER grows with the increase of SNR. Such a phenomenon is not unusual; it occurs quite
often in systems with low-resolution quantizers and other non-linear systems due to the physical phenomenon called stochastic resonance \cite{RevModPhys.70.223}. A similar phenomenon also occurs in the AQNM-LMMSE approach. It means that, for low-resolution quantized systems, additive noise can be constructive to the signal detection at certain SNRs, especially for QAM constellations (e.g., \cite{7247358, 7894211, 9145094, jacobsson2019massive, She2016The}).
The theoretical analysis of constructive noise in signal detection can be found in Kay's work \cite{809511}
(interested readers are referred to Appendix \ref{E} for an elaboration of the phenomenon of constructive noise).
Interestingly, the normalized approaches do not show a
considerable stochastic resonance effect within the observed SNR range.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{four_123bit_SE_comparison.eps}
\caption{The sum SE as a function of Eb/N0 for $N= 4$ transmitters, for systems with different quantizer resolutions, different LMMSE-based channel estimators and the ZF channel equalizer.
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig06}
\end{figure*}
When the resolution of the quantizer increases to $b=2$ bits, the e-LMMSE approach demonstrates a significant performance gain in most cases.
For instance, the e-LMMSE significantly outperforms the AQNM-LMMSE in the higher SNR range (i.e., Eb/N0 $>0$ dB).
The N-LMMSE approach performs the worst in all the cases. This observation is well in line with our observation in the MSE performance
(see \figref{fig02}), and they share the same rationale.
When the resolution of the quantizer increases to $b=3$ bits, both the e-LMMSE and the AQNM-LMMSE approaches offer excellent BER performances.
Their performances are very close to each other, and the e-LMMSE only slightly outperforms the AQNM-LMMSE for the case of $K=16$.
The reason for this phenomenon is the same as that for the MSE performance, which has also been explained in {\em Experiment 1}.
In short, the e-LMMSE approach shows significant advantages for $2$-bit quantizers. This is the case where the SOHE model offers a
better mathematical description than the AQNM models, and at the same time the resolution is not high enough to support higher-order modulations.
This is particularly true for the case of $N=2$ transmit antennas, where the input signal and quantization distortion can hardly be assumed to be white Gaussian.
Now, we increase the number of transmit antennas $N$ to investigate how the BER performance is influenced.
Accordingly, the number of receive antennas $K$ is also increased. The BER results for the case of $N=4$ are plotted in \figref{fig04}.
Let us begin with the $3$-bit quantizer. We have almost the same observation as for the case of $N=2$ transmit antennas.
The e-LMMSE approach performs slightly better than the AQNM-LMMSE approach, although the performance difference is not really considerable.
When it comes to the case of the $2$-bit quantizer, their difference in BER gets largely increased, and the e-LMMSE approach
demonstrates significant advantages. It is worth noting that the N-LMMSE approach offers performances comparable
with the AQNM-LMMSE approach. This is because the increase of transmit antennas brings the input signal and quantization distortion closer to
white Gaussian. This rationale has also been explained in the MSE analysis. For the case of the $1$-bit quantizer, not much new phenomenon is observed
in comparison with the case of $N=2$ transmit antennas, apart from the fact that the stochastic resonance phenomenon becomes less significant.
When the number of transmit antennas increases to $N=8$, the BER results are plotted in \figref{fig05}.
For the case of the $3$-bit quantizer, the e-LMMSE approach demonstrates a slightly more considerable gain,
and the N-LMMSE approach gets its performance even closer to the others.
A similar phenomenon can be observed for the case of the $2$-bit quantizer, where the N-LMMSE offers performance considerably close to
the e-LMMSE approach. The AQNM-LMMSE approach performs the worst. This phenomenon was also observed in the MSE analysis.
Again, for the $1$-bit quantizer, the NB-LMMSE approach offers the best BER performance, as it is devised and optimized for this special case.
Similar to the phenomenon observed in {\em Experiment 1}, the performance of the e-LMMSE is not the best for the $1$-bit quantized system. This is because, for the $1$-bit quantized system, there exists an optimum LMMSE channel equalizer using the arcsine law \cite{Mezghani2012,Papoulis_2002}.
Nevertheless, the proposed e-LMMSE approach can still provide performance comparable to the closed-form approach. When it comes to the $3$-bit quantizer, it can be found that the e-LMMSE has only a slight BER gain over the AQNM-LMMSE. It is known that one of the characteristics of the SOHE model is that it is not based on the Gaussian quantization-noise assumption. However, when the resolution of the quantizer rises to $3$ bits, the distribution of the quantization noise closely approximates the Gaussian distribution, which results in the similar performances between the e-LMMSE and the AQNM-LMMSE.
\subsubsection*{Experiment 3}\label{exp3}
In this experiment, we examine the SOHE-based channel estimation and its corresponding channel equalization.
For this experiment, the SOTA approaches include the results reported in \cite{7931630, 7894211,rao2021massive}.
It is perhaps worth noting that \cite{rao2021massive} considers the use of a sigma-delta quantizer, which takes advantage of oversampling to achieve an enhanced performance.
This is however not the case for our work as well as those in \cite{7931630, 7894211}.
For the sake of fair comparison, we only conduct the performance comparison between our work and the results in \cite{7931630, 7894211}.
In this experiment, the performance is evaluated through the sum SE defined by \cite{rao2021massive}
\begin{equation}\label{eqn067}
\mathrm{SE} =\frac{T-P}{T}\sum_{n=1}^{N}R_n,
\end{equation}
where $T$ is the length of the coherence interval, and $R_n$ the achievable rate for each transmitter-to-receiver link defined in \cite{7931630, 7894211}.
The sum SE is adopted because it is the widely considered metric in the SOTA \cite{7931630, 7894211,rao2021massive}, where $T$ is commonly set to $200$.
Similar to \eqref{eqn003}, the mathematical model for low-resolution quantized mMIMO channel estimation is given in the vectorized form
\begin{equation}\label{eqn065}
\mathbf{r}_p = \bar{\mathbf{\Phi}}\bar{\mathbf{h}}+\bar{\mathbf{v}}_p,
\end{equation}
where $\bar{\mathbf{h}}=\mathrm{vec}(\mathbf{H})$, $\bar{\mathbf{\Phi}}=(\mathbf{\Phi} \otimes \mathbf{I}_K)$ and $\mathbf{\Phi}\in \mathbb{C}^{N\times P}$ is the pairwise orthogonal pilot matrix, which is composed of submatrices of the discrete Fourier transform (DFT) operator \cite{Biguesh_1bit}. During training, all $N$ users simultaneously transmit their pilot sequences of $P$ symbols to the BS.
Feeding \eqref{eqn065} to the low-resolution quantizer, we obtain the output $\mathbf{y}_p \in \mathbb{C}^{KP\times 1}$, similarly to \eqref{eqn004}.
Regarding the LMMSE channel estimation algorithms, we use the closed-form B-LMMSE estimator for the $1$-bit quantized model in \cite{7931630}, and the AQNM-LMMSE and N-LMMSE estimators for other resolutions.
Those channel estimators are compared with the SOHE-LMMSE channel estimator in \eqref{eqn063}.
Given the LMMSE estimator $\mathbf{W}^*$, the channel estimate can be expressed as $\hat{\mathbf{H}}=\mathrm{unvec}(\mathbf{W}^*\mathbf{y}_p)$. For the sake of fairness, we employ the zero-forcing (ZF) algorithm for the channel equalization, as it has been used by the SOTA, i.e.,
$\mathbf{G}_{\text{ZF}} = (\hat{\mathbf{H}}^H\hat{\mathbf{H}})^{-1}\hat{\mathbf{H}}^H$.
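To make the pipeline of this experiment concrete, a simplified sketch is given below for the $1$-bit case. The DFT pilots with $P=N$, the Bussgang-style de-scaling gain and the least-squares flavor of the estimator are illustrative assumptions and do not reproduce the exact estimators of \cite{7931630, 7894211}.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

K, N, P, N0 = 32, 4, 4, 0.1
F = np.fft.fft(np.eye(P)) / np.sqrt(P)     # unitary DFT matrix
Phi = F[:N, :]                             # N x P pilots, orthonormal rows
H = (rng.standard_normal((K, N))
     + 1j*rng.standard_normal((K, N))) / np.sqrt(2)
V = np.sqrt(N0/2) * (rng.standard_normal((K, P))
                     + 1j*rng.standard_normal((K, P)))
Rp = H @ Phi + V                           # unquantized pilot observations
Yp = (np.sign(Rp.real) + 1j*np.sign(Rp.imag)) / np.sqrt(2)   # 1-bit output
alpha = np.sqrt(2/np.pi)                   # assumed Bussgang-style gain
H_hat = (Yp / alpha) @ Phi.conj().T        # LS-type estimate (Phi Phi^H = I)
G_zf = np.linalg.inv(H_hat.conj().T @ H_hat) @ H_hat.conj().T  # ZF equalizer
\end{verbatim}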
\figref{fig06} depicts the sum SE performance of the various LMMSE channel estimators for $N=4$ transmitters and $K= 32, 64$ receive antennas. The length of the pilot is set to $P=N$. Similar to the phenomenon observed in the above experiments, increasing the number of receive antennas and the resolution of the quantizers offers significant SE gains.
When the resolution of the quantizer is $b=1$ bit, the B-LMMSE algorithm has the best sum SE among the LMMSE channel estimators, and the gap can be approximately 4 bit/s/Hz. This phenomenon is not surprising, as the B-LMMSE is the closed-form algorithm for the $1$-bit quantized model \cite{7931630}. The SOHE-LMMSE and AQNM-LMMSE channel estimators do not demonstrate advantages in this special scenario, but it can be found that the SOHE-LMMSE achieves almost the same sum SE as the N-LMMSE channel estimator, while the AQNM-LMMSE approach performs the worst in this model.
When the resolution of the quantizer increases to $b=2$ bits, all three types (i.e., SOHE-LMMSE, AQNM-LMMSE and N-LMMSE) of channel estimators share a similar sum SE. For instance, their sum SE reaches 16 bit/s/Hz for $K=32$ and 20 bit/s/Hz for $K=64$ in the four-user system. When it comes to the case of the $3$-bit quantizer, we have almost the same observation as for the case of the $b=2$ bit quantizer. The performance difference between the three types of channel estimators is not really considerable at high Eb/N0. When Eb/N0 $>$ 0 dB, for $K=64$, the AQNM-LMMSE channel estimator can slightly outperform the N-LMMSE and SOHE-LMMSE channel estimators. As discussed in Sections \ref{sec4}-\ref{sec5}, the scalar ambiguity is detrimental for QAM modulations. However, each element of the pilot matrix $\mathbf{\Phi}$ has unit power and all pilot sequences are pairwise orthogonal; similar to the analysis of LMMSE channel equalization for PSK constellations, the scalar ambiguity does not show any side effect in this case. This explains why the SOHE-LMMSE channel estimator achieves the same sum SE as the current versions of the LMMSE algorithms.
\section{Conclusion}
In this paper, a novel linear approximation method, namely SOHE, has been proposed to model the low-resolution quantizers.
The SOHE model was then extended from the real-scalar form to the complex-vector form, and the latter was applied and extensively studied in
the low-resolution quantized multiuser-MIMO uplink signal reception. It has been shown that the SOHE model does not require
those assumptions employed in the AQNM model as well as its variations. Instead, it uses the first-order Hermite kernel to model the
signal part and the second-order Hermite kernel to model the quantization distortion. This equips us with sufficient flexibility and
capacity to develop a deeper and novel understanding of the stochastic behavior and correlation characteristics of the quantized signal
as well as the non-linear distortion. Through our intensive analytical work, it has been unveiled that the low-resolution quantization could result
in a scalar ambiguity. In the SOHE model, this scalar ambiguity is characterized by the coefficient of the first-order Hermite kernel.
However, it is not accurately characterized in other linear approximation models due to the white-Gaussian assumption.
When applying the SOHE model for the LMMSE analysis,
it has been found that the SOHE-LMMSE formula carries the Hermite coefficient, which equips the SOHE-LMMSE channel equalizer with
the distinct ability to remove the scalar ambiguity in the channel equalization. It has been shown that the SOHE-LMMSE formula involves
higher-order correlations, which prevents the direct implementation of the SOHE-LMMSE channel equalizer. Nevertheless, it was also found that
the SOHE-LMMSE formula could be related to the AQNM-LMMSE formula through a certain linear transform. This finding motivated the
development of the e-LMMSE channel equalizer, which demonstrated significant advantages in the MSE and BER performance evaluation. All of
the above conclusions have been elaborated through extensive computer simulations in the independent Rayleigh-fading channels.
\appendices
\section{Proof of Theorem \ref{thm01}}\label{A}
With the equations \eqref{eqn028} and \eqref{eqn032}, the coefficient $\lambda_b$ can be computed as follows
\begin{IEEEeqnarray}{ll}\label{appa1}
\lambda_b&=\frac{-1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}
\Big[\frac{\partial}{\partial x}\exp(-x^2)\Big]\mathrm{d}x,\\
&=\frac{-1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}(-2x)\exp(-x^2)\mathrm{d}x,\label{appa2}\\
&=\frac{1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)
\Big).\label{appa3}
\end{IEEEeqnarray}
We first examine the limit of $\lambda_b$ when $b\rightarrow\infty$. It is equivalent to the following case
\begin{IEEEeqnarray}{ll}
\lim_{b\rightarrow\infty}\lambda_b
&=\frac{1}{\sqrt{\pi}}\lim_{M\rightarrow\infty}\sum_{m=0}^{M-1}x_m\Big(
\exp(-\tau_m^2)\nonumber
\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\exp(-\tau_{m+1}^2)\Big). \label{appa4}
\end{IEEEeqnarray}
For $M\rightarrow\infty$, the discrete-time summation in \eqref{appa4} reverts to the integral in \eqref{eqn028}.
Since the quantization becomes ideal in this limit, we have $x_m=x$, thereby obtaining
\begin{equation}\label{appa5}
\lim_{b\rightarrow\infty}\lambda_b=\frac{2}{\sqrt{\pi}}\int_{-\infty}^{\infty}x^2\exp(-x^2)\mathrm{d}x=1.
\end{equation}
The derivation of \eqref{appa5} can be found in \cite[p. 148]{Papoulis_2002}.
For the symmetric quantization, \eqref{appa3} can be written into
\begin{equation}\label{appa6}
\lambda_b
=\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}x_m\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)
\Big).
\end{equation}
Consider the particular range of $x\in(\tau_m, \tau_{m+1}]$ and $\tau_m>0$, in which $\exp(-x^2)$ is a monotonically
decreasing function of $x$. Then, we have
\begin{equation}\label{appa7}
\exp(-\tau_m^2)\geq\exp(-x^2),~x\in(\tau_m, \tau_{m+1}],
\end{equation}
and consequently have
\begin{equation}\label{appa8}
(\tau_{m+1})\exp(-\tau_m^2)\geq\int_0^{\tau_{m+1}}\exp(-x^2)\mathrm{d}x.
\end{equation}
Applying \eqref{eqn034} and \eqref{appa8} into \eqref{appa6} results in
\begin{IEEEeqnarray}{ll}\label{appa9}
\lambda_b&=\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}\tau_{m+1}\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)\Big),\\
&\geq\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}\Big[\int_0^{\tau_{m+1}}\exp(-x^2)\mathrm{d}x\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-(\tau_{m+1})\exp(-\tau_{m+1}^2)\Big],\\
&\geq\frac{2}{\sqrt{\pi}}\int_0^{\infty}\exp(-x^2)\mathrm{d}x=1.\label{appa11}
\end{IEEEeqnarray}
{\em Theorem \ref{thm01}} is therefore proved.
\section{Proof of Theorem \ref{thm02}}\label{B}
With the quantization noise model \eqref{eqn033}, the cross-correlation between $x$ and $q_b$ can be computed by
\begin{IEEEeqnarray}{ll}\label{appb1}
\mathbb{E}(xq_b)&\approx\mathbb{E}(x(4\omega_2x^2-2\omega_2)),\\
&\approx4\omega_2\mathbb{E}(x^3)-2\omega_2\mathbb{E}(x).\label{appb2}
\end{IEEEeqnarray}
With the condition C1), \eqref{appb2} is equivalent to
\begin{equation}\label{appb3}
\mathbb{E}(xq_b)\approx 4\omega_2\mathbb{E}(x^3).
\end{equation}
When $x$ is AWGN, the third-order term $\mathbb{E}(x^3)$ in \eqref{appb3} equals $0$ (see \cite[p. 148]{Papoulis_2002}).
This leads to the observation that $\mathbb{E}(xq_b)=0$,
and the first part of {\em Theorem \ref{thm02}} is therefore proved.
To prove the limit \eqref{eqn036}, we first study the coefficient $\omega_2$ in \eqref{eqn033}. For $b\rightarrow\infty$,
$\omega_2$ goes back to the formula specified in \eqref{eqn028}. Then, we can compute $\omega_2$ as follows
\begin{IEEEeqnarray}{ll}
\omega_2&=\frac{1}{8\sqrt{\pi}}\int_{-\infty}^{\infty}x\Big[\frac{\partial^2}{\partial x^2}\exp(-x^2)\Big]\mathrm{d}x,\label{appb4}\\
&=\frac{1}{8\sqrt{\pi}}\int_{-\infty}^{\infty}x\Big[(-2+4x^2)\exp(-x^2)\Big]\mathrm{d}x,\label{appb5}\\
&=-\frac{1}{4\sqrt{\pi}}\int_{-\infty}^{\infty}x\exp(-x^2)\mathrm{d}x\nonumber\\
&\quad\quad\quad\quad\quad\quad+\frac{1}{2\sqrt{\pi}}\int_{-\infty}^{\infty}x^3\exp(-x^2)\mathrm{d}x.\label{appb6}
\end{IEEEeqnarray}
It is well known that (also see \cite[p. 148]{Papoulis_2002})
\begin{equation}\label{appb7}
\int_{-\infty}^{\infty}x^l\exp(-x^2)\mathrm{d}x=0, ~l=1, 3;
\end{equation}
and thus we can obtain $\omega_2=0$ for the case of $b\rightarrow\infty$. Applying this result into \eqref{eqn033} leads to
the conclusion in \eqref{eqn036}.
\section{Proof of {\em Corollary \ref{cor2}}}\label{C}
With \eqref{eqn039}, we can compute $\mathbf{C}_{qq}$ as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}
&=\mathbb{E}(\mathbf{q}_b\mathbf{q}_b^H),\label{app08}\\
&=4\omega_2^2\Big(4\underbrace{\mathbb{E}\Big(\big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\big)\big(\Re(\mathbf{r})^2-j\Im(\mathbf{r})^2\big)^T\Big)}_{\triangleq\mathbf{C}_{qq}^{(1)}}-\nonumber\\
&\quad2\underbrace{\mathbb{E}\Big(\big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\big)\otimes\mathbf{1}^T\Big)}_{\triangleq\mathbf{C}_{qq}^{(2)}}-\nonumber\\
&\quad2\underbrace{\mathbb{E}\Big(\mathbf{1}\otimes\big(\Re(\mathbf{r})^2-j\Im(\mathbf{r})^2\big)^T\Big)}_{\triangleq\mathbf{C}_{qq}^{(3)}}+\mathbf{1}\otimes\mathbf{1}^T\Big).
\label{app09}
\end{IEEEeqnarray}
We start from $\mathbf{C}_{qq}^{(2)}$ in \eqref{app09}. Given the conditions C2) and C3), the proof of {\em Corollary \ref{cor1}} shows
that $\mathbf{r}$ is asymptotically zero-mean complex
Gaussian with covariance approximately $\sigma_r^2\mathbf{I}$. Hence, we have
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}^{(2)}&=\Big(\mathbb{E}\Big(\Re(\mathbf{r})^2\Big)+j\mathbb{E}\Big(\Im(\mathbf{r})^2\Big)\Big)\otimes\mathbf{1}^T,\label{app10}\\
&=\frac{\sigma_r^2}{2}(\mathbf{1}+j\mathbf{1})\otimes\mathbf{1}^T.\label{app11}
\end{IEEEeqnarray}
Analogously, the following result holds
\begin{equation}
\mathbf{C}_{qq}^{(3)}=\frac{\sigma_r^2}{2}\mathbf{1}\otimes(\mathbf{1}-j\mathbf{1})^T.\label{app12}
\end{equation}
Then, we can obtain
\begin{equation}\label{app13}
2\Big(\mathbf{C}_{qq}^{(2)}+\mathbf{C}_{qq}^{(3)}\Big)=\sigma_r^2\mathbf{1}\otimes\mathbf{1}^T.
\end{equation}
Now, we come to the last term $\mathbf{C}_{qq}^{(1)}$, which can be computed as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}^{(1)}&=\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)
+\mathbb{E}\Big(\Im(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big)+\nonumber\\
&\quad j\Big(\mathbb{E}\Big(\Im(\mathbf{r})^2(\Re(\mathbf{r}^T)^2\Big)-\mathbb{E}\Big(\Re(\mathbf{r})^2(\Im(\mathbf{r}^T)^2\Big)\Big).
\label{app14}
\end{IEEEeqnarray}
Since $\Re(\mathbf{r})$ and $\Im(\mathbf{r})$ follow the identical distribution, we can easily justify
\begin{IEEEeqnarray}{ll}\label{app15}
\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)&=\mathbb{E}\Big(\Im(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big), \\
\mathbb{E}\Big(\Im(\mathbf{r})^2(\Re(\mathbf{r}^T)^2\Big)&=\mathbb{E}\Big(\Re(\mathbf{r})^2(\Im(\mathbf{r}^T)^2\Big).
\label{app16}
\end{IEEEeqnarray}
Applying \eqref{app15} into \eqref{app14} results in
\begin{equation}\label{app17}
\mathbf{C}_{qq}^{(1)}=2\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big).
\end{equation}
Plugging \eqref{app17} and \eqref{app13} into \eqref{app09} yields
\begin{equation}\label{app17a}
\mathbf{C}_{qq}=4\omega_2^2\Big(8\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)+(1-\sigma_r^2)(\mathbf{1}\otimes\mathbf{1}^T)\Big).
\end{equation}
It is not hard to derive (see \cite[p. 148]{Papoulis_2002})
\begin{equation}\label{app18}
\mathbb{E}\Big(\Re(r_k)^4\Big)=\frac{3\sigma_r^4}{4}.
\end{equation}
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(\Re(r_k)^2\Re(r_m)^2\Big)&=\mathbb{E}\Big(\Re(r_k)^2\Big)\mathbb{E}\Big(\Re(r_m)^2\Big),~\forall k\neq m,\label{app19}\\
&=\frac{\sigma_r^4}{4}.\label{app20}
\end{IEEEeqnarray}
Applying \eqref{app18} and \eqref{app20} into \eqref{app17} yields
\begin{equation}\label{app21}
\mathbf{C}_{qq}^{(1)}=\frac{\sigma_r^4}{2}(2\mathbf{I}+\mathbf{1}\otimes\mathbf{1}^T).
\end{equation}
Further applying \eqref{app21} into \eqref{app17a} yields the result \eqref{eqn044}. {\em Corollary \ref{cor2}} is therefore proved.
\section{Proof of \eqref{eqn049}}\label{D}
Consider the element-wise cross-correlation between the $m^\mathrm{th}$ element of $\mathbf{q}_b$ (denoted by $q_m$) and the
$k^\mathrm{th}$ element of $\mathbf{s}$, i.e.,
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(q_ms_k^*\Big)&=\mathbb{E}\Big(\Re(s_k)\Re(q_m)+\Im(s_k)\Im(q_m)\Big)+\nonumber\\
&\quad j\mathbb{E}\Big(\Re(s_k)\Im(q_m)-\Im(s_k)\Re(q_m)\Big),\label{app01}\\
&=2\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big).\label{app02}
\end{IEEEeqnarray}
Using \eqref{eqn033}, we can obtain
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big)&=\mathbb{E}\Big(\Re(s_k)(4\omega_2\Re(r_m)^2-2\omega_2)\Big),\nonumber\label{app03}\\
&=4\omega_2\mathbb{E}\Big(\Re(s_k)\Re(r_m)^2\Big).\label{app04}
\end{IEEEeqnarray}
The term $\Re(r_m)$ can be represented by
\begin{equation}\label{app05}
\Re(r_m)=\Re(s_k)\Re(h_{m,k})+\gamma_m+\Re(v_m),
\end{equation}
where $\gamma_m$ is the sum of all corresponding terms that are uncorrelated with $\Re(s_k)$, and $h_{m,k}$ is the $(m,k)^\mathrm{th}$
entry of $\mathbf{H}$. Define $\epsilon_m\triangleq\gamma_m+\Re(v_m)$. We apply \eqref{app05} into \eqref{app04} and obtain
\begin{IEEEeqnarray}{ll}
\mathbb{E}&\Big(\Re(s_k)\Re(r_m)^2\Big)=\Re(h_{m,k})^2\mathbb{E}\Big(\Re(s_k)^3\Big)+\nonumber\\
&\quad\quad\underbrace{2\Re(h_{m,k})\mathbb{E}\Big(\Re(s_k)^2\epsilon_m\Big)+\mathbb{E}\Big(\Re(s_k)\epsilon_m^2\Big)}_{=0}.\label{app06}
\end{IEEEeqnarray}
Plugging \eqref{app06} into \eqref{app04} yields
\begin{equation}\label{app07}
\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big)=4\omega_2\Re(h_{m,k})^2\mathbb{E}\Big(\Re(s_k)^3\Big).
\end{equation}
The condition C4) ensures that the third-order central moments $\mathbb{E}\Big(\Re(s_k)^3\Big)=0$. Hence, we can conclude
$\mathbb{E}\Big(q_ms_k^*\Big)=0, \forall m,k$. The result \eqref{eqn049} is therefore proved.
\section{Elaborative Explanation of the Phenomenon of Constructive Noise}\label{E}
This appendix elaborates on the phenomenon of constructive noise in low-resolution signal detection. To better explain the phenomenon, we consider the case where two different information-bearing symbol blocks, termed $\mathbf{s}^{(1)}$ and $\mathbf{s}^{(2)}$ with $\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)}$, are transmitted to the receiver separately.
In the case of very high SNR or perhaps more extremely the noiseless case, their corresponding received blocks can be expressed by
\begin{equation}\label{appe1}
\mathbf{r}^{(1)}=\mathbf{H}\mathbf{s}^{(1)},~\mathbf{r}^{(2)}=\mathbf{H}\mathbf{s}^{(2)},
\end{equation}
where the noise $\mathbf{v}$ is omitted for now because it is negligibly small.
In this linear system, there exists a perfect bijection between $\mathbf{s}$ and $\mathbf{r}$ and we have $\mathbf{r}^{(1)}\neq \mathbf{r}^{(2)}$.
For this reason, the receiver can reconstruct the information-bearing symbol block from $\mathbf{r}$ without error.
In this case, noise only introduces a detrimental impact on the signal detection.
However, such is not the case for the system with low-resolution ADC.
To make the concept easy to grasp, we consider the special case of the $1$-bit ADC, the output of which is
\begin{equation}\label{appe2}
\mathbf{y}^{(1)}=\mathcal{Q}_b(\mathbf{H}\mathbf{s}^{(1)}),~\mathbf{y}^{(2)}=\mathcal{Q}_b(\mathbf{H}\mathbf{s}^{(2)}).
\end{equation}
The nonlinear function $\mathcal{Q}_b(\cdot)$ can destroy the input-output bijection that holds in the linear system.
Here, we use a simple numerical example to explain the phenomenon.
To fulfill the condition $\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)}$, we let $\mathbf{s}^{(1)}=[-1+3j, 3-j]^T$ and $\mathbf{s}^{(2)}=[-3+1j, 1-3j]^T$.
Moreover, to simplify our discussion, we let $\mathbf{H}=[\mathbf{I}_{2\times2}, \mathbf{I}_{2\times2}]^T$.
Then, the output of the $1$-bit ADC is given by
\begin{equation}\label{appe3}
\mathbf{y}^{(1)}=\mathbf{y}^{(2)}=[-1+j, 1-j, -1+j, 1-j]^T.
\end{equation}
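As an illustrative aside, the toy example above can be reproduced with a few lines of Python (a minimal sketch; the element-wise quantiser taking the signs of the real and imaginary parts is our assumed model of $\mathcal{Q}_b(\cdot)$ for the $1$-bit case):
\begin{verbatim}
import numpy as np

# Assumed 1-bit quantiser: sign of the real and imaginary parts.
def q_1bit(x):
    return np.sign(x.real) + 1j * np.sign(x.imag)

H = np.vstack([np.eye(2), np.eye(2)])      # H = [I, I]^T
s1 = np.array([-1 + 3j, 3 - 1j])
s2 = np.array([-3 + 1j, 1 - 3j])

y1, y2 = q_1bit(H @ s1), q_1bit(H @ s2)
print(y1)                                  # [-1.+1.j  1.-1.j -1.+1.j  1.-1.j]
print(np.array_equal(y1, y2))              # True: the bijection is destroyed
\end{verbatim}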
In the probability domain, we have
\begin{equation}\label{appe4}
\mathrm{Pr}(\mathbf{y}^{(1)}\neq\mathbf{y}^{(2)}|\mathbf{H}, \mathbf{s}^{(1)}, \mathbf{s}^{(2)})=0.
\end{equation}
It means that there is no bijection between $\mathbf{y}$ and $\mathbf{s}$ in this case; and for this reason, the receiver is not able to successfully reconstruct $\mathbf{s}$ from $\mathbf{y}$ even in the noiseless case.
Now, we increase the noise power (or equivalently reduce the SNR).
Due to the increased randomness, a sample with positive amplitude can be flipped to one with negative amplitude, and vice versa.
Let $s$ be a real scalar drawn from the discrete finite set $\{-3, -1, 1, 3\}$ and $v$ the Gaussian noise. It is straightforward to see that
\begin{equation}\label{appe5}
\mathrm{Pr}(s+v>0|s=-1)>\mathrm{Pr}(s+v>0|s=-3).
\end{equation}
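This inequality is easy to verify numerically. The following sketch (Python, assuming real Gaussian noise of standard deviation $\sigma$; the helper name is ours) evaluates both conditional probabilities for a few noise levels:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Pr(s + v > 0 | s) for v ~ N(0, sigma^2) is the Gaussian survival function at -s.
def prob_positive(s, sigma):
    return norm.sf(-s / sigma)

for sigma in (0.1, 0.5, 1.0, 2.0, 5.0):
    p1 = prob_positive(-1.0, sigma)        # Pr(s + v > 0 | s = -1)
    p3 = prob_positive(-3.0, sigma)        # Pr(s + v > 0 | s = -3)
    print(f"sigma={sigma:4.1f}  {p1:.4f} > {p3:.4f}")
\end{verbatim}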
As shown in \cite{9145094}, with the decrease of SNR from a large value (e.g., the noiseless case), the difference between these two probabilities will quickly increase at the beginning, and then converge to a certain non-zero value.
It means that the noise helps to discriminate the ADC output $\mathbf{y}^{(1)}$ and $\mathbf{y}^{(2)}$, i.e.,
\begin{equation}\label{appe6}
\mathrm{Pr}(\mathbf{y}^{(1)}\neq\mathbf{y}^{(2)}|\mathbf{H}, \mathbf{s}^{(1)}, \mathbf{s}^{(2)})\neq 0,
\end{equation}
and this probability increases as the SNR decreases, which improves the signal detectability \cite{809511}.
Since the probability converges to a certain value at some SNR, further reducing the SNR will not improve the signal detectability but will only degrade the detection performance. For the general case, the limiting value of the probability complementary to \eqref{appe6} can be found in \cite{9145094}, i.e.,
\begin{equation}\label{rev01}
\mathrm{Pr}(\mathbf{y}^{(1)}=\mathbf{y}^{(2)}|\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)})
=\frac{(\mathcal{L}^N)(\mathcal{L}^N-1)}{2^{(2K+1)}},
\end{equation}
where $\mathcal{L}$ is the modulation order. Finally, when the resolution of the quantizer increases, the communication system becomes closer to a linear one, and the noise becomes less constructive.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
\section{Introduction}
A covering array $CA(n,k,g)$ is a $k\times n$ array on $\mathbb{Z}_g$ with the property that any two rows are qualitatively independent, that is, every ordered pair of symbols from $\mathbb{Z}_g\times\mathbb{Z}_g$ appears in at least one column of the subarray formed by the two rows. The number $n$ of columns
in such an array is called its size. The smallest possible size of a covering array is denoted
\begin{equation*}
CAN(k,g)=\min_{n\in \mathbb{N}}\{n~:~ \mbox{there exists a } CA(n,k,g)\}
\end{equation*}
Covering arrays are generalisations of both orthogonal arrays and Sperner systems. Bounds and constructions of covering arrays have been derived from algebra, design theory, graph theory, set systems
and intersecting codes \cite{chatea, kleitman, sloane, stevens1}. Covering arrays have industrial applications in many disparate areas in which factors or components interact, for example, software and circuit testing, switching networks, drug screening and data compression \cite{korner,ser,Cohen}. In \cite{karen}, the definition of a covering array has been extended to include a graph structure.
\begin{definition}\rm (Covering arrays on graph). A covering array on a graph $G$ with alphabet size $g$ and $k=|V(G)|$ is a $k\times n$ array on $\mathbb{Z}_g$.
Each row in the array corresponds to a vertex in the graph $G$. The array has the property that any two rows which correspond to adjacent vertices in $G$ are qualitatively independent.
\end{definition}
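As an illustration of the qualitative-independence requirement, the following Python sketch (an illustrative helper, not part of any standard library) checks whether two rows over $\mathbb{Z}_g$ cover every ordered pair of symbols:
\begin{verbatim}
from itertools import product

def qualitatively_independent(row1, row2, g):
    # True iff every ordered pair in Z_g x Z_g occurs in some column of (row1, row2).
    covered = set(zip(row1, row2))
    return covered == set(product(range(g), repeat=2))

# Two binary rows of length 4 covering (0,0), (0,1), (1,0), (1,1):
print(qualitatively_independent([0, 0, 1, 1], [0, 1, 0, 1], 2))   # True
\end{verbatim}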
\noindent A covering array on a graph $G$ will be denoted by $CA(n,G,g)$. The smallest possible covering array on a graph $G$ will be denoted
\begin{equation*}
CAN(G,g)=\min_{n\in \mathbb{N}}\{n~:~ \mbox{there exists a } CA(n,G,g)\}
\end{equation*}
Given a graph $G$ and a positive integer $g$, a covering array on $G$ with minimum size is called {\it optimal}. Seroussi and Bshouty proved that determining the existence of an optimal binary
covering array on a graph is an NP-complete problem \cite{ser}. We start with a review of some definitions and results on product graphs in Section \ref{productgraph}. In Section \ref{bound},
we show that for all graphs $G_1$ and $G_2$,
$$\max_{i=1,2}\{CAN(G_i,g)\}\leq CAN(G_1\Box G_2,g)\leq CAN( \max_{i=1,2}\{\chi(G_i)\},g).$$ We look for graphs $G_1$ and $G_2$ for which the lower bound on $CAN(G_1\Box G_2,g)$ is
achieved. In Section \ref{Cayley}, we give families of Cayley graphs that achieve this lower bound on the covering array number of the product graph. In Section \ref{Approx}, we present a polynomial-time
approximation algorithm with approximation ratio $\log(\frac{V}{2^{k-1}})$ for constructing a covering array on a
graph $G=(V,E)$ having more than one prime factor with respect to the Cartesian product.
\section{Preliminaries} \label{productgraph}
In this section, we give several definitions from product graphs that we use in this article.
A graph product is a binary operation on the set of all finite graphs. Among all possible associative graph products,
the most extensively studied in the literature are the Cartesian product, the direct product,
the strong product and the lexicographic product.
\begin{definition}\rm
The Cartesian product of graphs $G$ and $H$, denoted by $G\Box H$, is the graph with
\begin{center}
$V(G\Box H) = \{(g, h) \lvert g\in V(G) \mbox{ and } h \in V(H)\}$,
\\ $E(G\Box H) = \{ (g, h)(g', h') \lvert g = g', hh' \in E(H), \mbox{ or } gg' \in E(G), h=h' \}$.
\end{center}
The graphs $G$ and $H$ are called the {\it factors} of the product $G \Box H$.
\end{definition}
\noindent In general, given graphs $G_1,G_2,...,G_k$, then $G_1 \Box G_2 \Box \cdots \Box G_k$, is the graph with vertex set
$V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and two vertices $(x_1,x_2,\ldots, x_k)$ and
$(y_1, y_2,\ldots,y_k)$ are adjacent if and only if $x_iy_i \in E(G_i)$ for exactly one index $1\leq i\leq k$ and $x_j = y_j$ for each index $j \not= i$.\\
\begin{definition}\rm
The direct product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\times G_2\times \cdots \times G_k$, is the graph with vertex
set $V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and for which vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are
adjacent precisely if $x_iy_i \in E(G_i)$ for each index $i$.
\end{definition}
\begin{definition}\rm
The strong product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\boxtimes G_2\boxtimes \cdots \boxtimes G_k$, is the graph with vertex set
$V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and distinct vertices $(x_1,x_2,\ldots,x_k)$ and $(y_1,y_2,\ldots,y_k)$ are adjacent if and only if
either $x_iy_i\in E(G_i)$ or $x_i=y_i$ for each $1\leq i\leq k$. We note that in general $E(\boxtimes_{i=1}^k {G_i}) \neq E(\Box_{i=1}^k G_i) \cup E(\times_{i=1}^k G_i)$, unless $k=2$.
\end{definition}
\begin{definition}\rm
The lexicographic product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\circ G_2\circ \cdots \circ G_k$, is the graph with
vertex set $V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and two vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are
adjacent if and only if for some index $j\in \{1,2,...,k\}$ we have $x_jy_j \in E(G_j)$ and $x_i =y_i$ for each index $1\leq i < j$.
\end{definition}
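The four adjacency rules above can be summarised programmatically. The following Python sketch (written for two factors, with each edge set assumed to contain both ordered copies of an undirected edge) tests adjacency of two product vertices:
\begin{verbatim}
def adj_cartesian(x, y, E1, E2):
    (x1, x2), (y1, y2) = x, y
    return (x1 == y1 and (x2, y2) in E2) or ((x1, y1) in E1 and x2 == y2)

def adj_direct(x, y, E1, E2):
    (x1, x2), (y1, y2) = x, y
    return (x1, y1) in E1 and (x2, y2) in E2

def adj_strong(x, y, E1, E2):
    (x1, x2), (y1, y2) = x, y
    return x != y and (x1 == y1 or (x1, y1) in E1) and (x2 == y2 or (x2, y2) in E2)

def adj_lexicographic(x, y, E1, E2):
    (x1, x2), (y1, y2) = x, y
    return (x1, y1) in E1 or (x1 == y1 and (x2, y2) in E2)

# Example: K_2 as each factor, with its single edge listed in both directions.
E = {(0, 1), (1, 0)}
print(adj_cartesian((0, 0), (0, 1), E, E))   # True
print(adj_direct((0, 0), (0, 1), E, E))      # False
\end{verbatim}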
Let $G$ and $H$ be graphs with vertex sets $V(G)$ and $V(H)$, respectively. A {\it homomorphism} from $G$ to $H$ is a map
$\varphi~:~V(G)\rightarrow V(H)$ that preserves adjacency: if $uv$ is an edge in $G$, then $\varphi(u)\varphi(v)$ is an edge in $H$.
We say $G\rightarrow H$ if there is a homomorphism from $G$ to $H$, and $G \equiv H$ if $G\rightarrow H$ and $H\rightarrow G$.
A {\it weak homomorphism} from $G$ to $H$ is a map $\varphi~:~V(G)\rightarrow V(H)$ such that if $uv$ is an edge in $G$, then either
$\varphi(u)\varphi(v)$ is an edge in $H$, or $\varphi(u)=\varphi(v)$. Clearly every homomorphism is automatically a weak homomorphism.
Let $\ast$ represent either the Cartesian, the direct or the strong product of graphs, and consider a product $G_1\ast G_2\ast \ldots\ast G_k$.
For any index $i$, $1\leq i\leq k$, a {\it projection map} is defined as:
$$p_i~:~G_1\ast G_2\ast \ldots\ast G_k \rightarrow G_i ~\mbox{where} ~p_i(x_1,x_2,\ldots,x_k)=x_i.$$ By the definition of the Cartesian, the direct, and the strong product of
graphs, each $p_i$ is a weak homomorphism. In the case of the direct product, as $(x_1,x_2,\ldots,x_k)(y_1,y_2,\ldots,y_k)$ is an edge of $G_1\times G_2\times\cdots\times G_k$
if and only if $x_iy_i\in E(G_i)$ for each $1\leq i\leq k$, each projection $p_i$ is actually a homomorphism. In the case of the lexicographic product, the first projection map, that is the projection on the first component, is a weak homomorphism, whereas in general the projections to the other
components are not weak homomorphisms. \\
A graph is {\it prime} with respect to a given graph product if it is nontrivial and cannot be represented as the product of two nontrivial
graphs. For the Cartesian product,
it means that a nontrivial graph $G$ is prime if $G=G_1\Box G_2$ implies that either $G_1$ or $G_2$ is $K_1$. A similar observation is
true for the other three products. The uniqueness of the prime factor decomposition of connected graphs with respect to the
Cartesian product was first shown by Sabidussi $(1960)$, and independently by Vizing $(1963)$. Prime factorization is not unique
for the Cartesian product in the class of possibly disconnected simple graphs \cite{HBGP}. Nevertheless, any connected graph factors
uniquely into prime graphs with respect to the Cartesian product.
\begin{theorem}(Sabidussi-Vizing)
Every connected graph has a unique representation as a product of prime graphs, up to isomorphism and the order of the factors. The number of prime factors is
at most $\log_2 {V}$.
\end{theorem}
\noindent For any connected graph $G=(V,E)$, the prime factors of $G$ with respect to the Cartesian product can be computed in $O(E \log V)$ time and $O(E)$ space; see Chapter 23 of \cite{HBGP}.
\section{Graph products and covering arrays}\label{bound}
Let $\ast$ represent either the Cartesian, the direct, the strong, or the lexicographic product operation.
Given covering arrays $CA(n_1,G_1,g)$ and $CA(n_2,G_2,g)$, one can construct a covering array on $G_1 \ast G_2$ as follows: the row corresponding
to the vertex $(a,b)$ is obtained by horizontally concatenating the row corresponding to the vertex $a$ in $CA(n_1,G_1,g)$ with the row
corresponding to the vertex $b$ in $CA(n_2,G_2,g)$. Hence an obvious upper bound for the covering array number is given by
\begin{center}
$CAN(G_1 \ast G_2, g) \leq CAN(G_1, g) + CAN(G_2, g) $
\end{center}
We now propose some improvements of this bound. A column of a covering array is {\it constant} if, for some symbol $v$, every entry in the
column is $v$. In a {\it standardized } $CA(n,G,g)$ the first column is constant. Because symbols within each row can be permuted independently,
if a $CA(n,G,g)$ exists, then a standardized $CA(n,G,g)$ exists.
\begin{theorem}
Let $G=G_1\boxtimes G_2\boxtimes \cdots \boxtimes G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$, then there exists a
$CA(n,G,g)$ where $n=\underset{i=1}{\overset{k}\sum} n_i -k$. Hence,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k$.
\end{theorem}
\begin{proof} Without loss of generality, we assume that for each $1\leq i\leq g$, the first column of $CA(n_i,G_i,g)$
is a constant column on symbol $i$ and for each $g+1\leq i\leq k$, the first column of $CA(n_i,G_i,g)$ is a constant
column on symbol 1.
Let $C_i$ be the array
obtained from $CA(n_i,G_i,g)$ by removing the first column. Form an array $A$ with
$\underset{i=1}{\overset{k}\prod} |V(G_i)|$ rows and
$\underset{i=1}{\overset{k}\sum} n_i -k$ columns, indexing rows as $(v_1,v_2,...,v_k)$, where $v_i\in V(G_i)$.
Row $(v_1,v_2,...,v_k)$ is
obtained by horizontally concatenating the rows corresponding to the vertices $v_i$ of $C_i$, for $1\leq i\leq k$.
Consider two distinct rows $(u_1,u_2,\ldots,u_k)$ and $(v_1,v_2,\ldots,v_k)$ of $A$ which correspond to adjacent vertices in $G$.
Two distinct vertices $(u_1,u_2,\ldots,u_k)$ and $(v_1,v_2,\ldots,v_k)$ are adjacent if and only if
either $u_iv_i\in E(G_i)$ or $u_i=v_i$ for each $1\leq i\leq k$. Since the vertices are distinct, $u_iv_i\in E(G_i)$ for at least one index $i$.
When $u_i=v_i$, all pairs of the form $(a,a)$ are covered. When $u_iv_i\in E(G_i)$, all remaining pairs are covered because two different rows of $C_i$ corresponding to adjacent vertices in $G_i$ are selected.
\end{proof}
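For concreteness, the concatenation used in this proof can be sketched in Python as follows (assuming each input covering array is supplied as a NumPy array whose rows are indexed by consecutive integer vertex labels and whose first column is constant, as in the standardization above):
\begin{verbatim}
import numpy as np
from itertools import product

def strong_product_ca(cas):
    # cas[i]: standardized covering array of factor G_i (first column constant).
    trimmed = [ca[:, 1:] for ca in cas]            # drop the constant first columns
    sizes = [ca.shape[0] for ca in cas]            # |V(G_i)|
    rows = {}
    for verts in product(*[range(n) for n in sizes]):
        # Row of (v_1, ..., v_k): concatenation of the corresponding rows of the C_i's.
        rows[verts] = np.concatenate([t[v] for t, v in zip(trimmed, verts)])
    return rows                                    # dict: product vertex -> row
\end{verbatim}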
\noindent Using the definition of the strong product of graphs, we have the following result as a corollary.
\begin{corollary}
Let $G=G_1\ast G_2\ast \cdots \ast G_k$, $k\geq 2$ and $g$ be a positive integer, where $\ast\in\{\Box,\times\}$. Then,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k$.
\end{corollary}
\noindent The lemma given below will be used in Theorem \ref{product}.
\begin{lemma}\label{karenlemma} (Meagher and Stevens \cite{karen})
Let $G$ and $H$ be graphs. If $G\rightarrow H$ then $CAN(G,g)\leq CAN(H,g)$.
\end{lemma}
\begin{theorem}\label{product}
Let $G=G_1\times G_2\times \cdots \times G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a
$CA(n,G,g)$ where $n=\min\limits_{i} n_i$. Hence, $CAN(G,g)\leq \min\limits_{i} CAN(G_i,g)$.
\end{theorem}
\begin{proof}
Without loss of generality assume that $n_1 = \min\limits_{i} {n_i} $. It is known that $G_1\times G_2\times \cdots \times G_k\rightarrow G_1$. Using Lemma \ref{karenlemma}, we have $CAN(G,g)\leq CAN(G_1,g)$.
\end{proof}
\begin{theorem}
Let $G=G_1\circ G_2\circ \cdots \circ G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a
$CA(n,G,g)$ where $n=\underset{i=1}{\overset{k}\sum} n_i -k+1$. Hence,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k+1$.
\end{theorem}
\begin{proof} Without loss of generality, we assume that for each $1\leq i\leq k$ the first column of $CA(n_i,G_i,g)$
is a constant column on symbol $1$.
Let $C_1= CA(n_1,G_1,g)$, and for each $2\leq i\leq k$ let $C_i$ be the array with $n_i-1$ columns obtained from $CA(n_i,G_i,g)$ by removing the first column.
Form an array $A$ with $\underset{i=1}{\overset{k}\prod} |V(G_i)|$ rows and $\underset{i=1}{\overset{k}\sum} n_i -k+1$ columns, indexing
rows as $(v_1,v_2,\ldots,v_k)$, $v_i\in V(G_i)$. Row $(v_1,v_2,\ldots,v_k)$ is obtained by horizontally
concatenating the rows corresponding to the vertices $v_i$ of $C_i$, for $1\leq i\leq k$. If two vertices
$(v_1,v_2,\ldots,v_k)$ and $(u_1,u_2,\ldots,u_k)$ are adjacent in $G$, then either $v_1u_1\in E(G_1)$, or $v_ju_j\in E(G_j)$ for
some $j\geq 2$ and $v_i=u_i$ for each $i< j$. In the first case the rows from $C_1$ cover each ordered pair of symbols, while in the second case the
rows from $C_j$ cover each ordered pair of symbols except possibly $(1,1)$. But this pair is covered in the first column of $C_1$, which is constant on symbol $1$, since $v_1=u_1$ in this case. Hence $A$
is a covering array on $G$.
\end{proof}
\begin{definition} \rm A {\it proper colouring} on a graph is an assignment of colours to each vertex such that adjacent vertices receive a different colour. The chromatic number of a graph $G$, $\chi(G)$,
is defined to be the size of the smallest set of colours such that a proper colouring exists with that set.
\end{definition}
\begin{definition}\rm
A {\it maximum clique} in a graph $G$ is a maximum set of pairwise adjacent vertices. The maximum clique number of a graph $G$, $\omega(G)$, is defined to be the size of a maximum clique.
\end{definition}
\noindent Since there are homomorphisms $K_{\omega(G)}\rightarrow G\rightarrow K_{\chi(G)}$, we can
find bound on the size of a covering array on a graph from the graph's chromatic number and clique number. For all graphs $G$,
$$CAN(K_{\omega(G)},g)\leq CAN(G,g)\leq CAN(K_{\chi(G)},g).$$
\noindent We have the following result on the proper colouring of product graphs \cite{chromatic}:
$$\chi(G_1 \Box G_2) = \max \{ \chi(G_1), \chi(G_2)\}.$$
For the other graph products there are no explicit formulae for the chromatic number, but the following bounds are given in \cite{HBGP}:
$$\chi(G_1 \times G_2) \leq \min \{ \chi(G_1), \chi(G_2)\}$$
$$\chi(G_1 \boxtimes G_2) \leq \chi(G_1 \circ G_2) \leq \chi(G_1) \chi(G_2).$$
A proper colouring of $G_1 \ast G_2$ with $\chi(G_1 \ast G_2)$ colours is equivalent to a homomorphism from
$G_1 \ast G_2$ to $K_{\chi(G_1 \ast G_2)}$ for any $\ast \in\{\Box, \times, \boxtimes, \circ \}$.
Hence $$CAN(G_1 \Box G_2, g) \leq CAN(K_{\max\{ \chi(G_1), \chi(G_2)\}},g)$$
$$CAN(G_1 \times G_2, g) \leq CAN(K_{\min\{ \chi(G_1), \chi(G_2)\}},g) $$
$$CAN(G_1 \boxtimes G_2, g) \leq CAN(K_{\chi(G_1)\chi(G_2)},g) $$
$$CAN(G_1 \circ G_2, g) \leq CAN(K_{\chi(G_1)\chi(G_2)},g) .$$
\noindent Note that $G_1\rightarrow G_1 \ast G_2$ and $G_2\rightarrow G_1 \ast G_2$ for $\ast \in\{\Box,\boxtimes,\circ\}$
which gives
$$\max\{CAN(G_1, g), CAN(G_2, g)\}\leq CAN(G_1 \ast G_2, g).$$
We now describe the colouring construction of a covering array on a graph $G$. If $G$ is a $k$-colourable graph, then build a covering array $CA(n, k, g)$ and, without loss of generality, associate
row $i$ of $CA(n, k, g)$ with colour $i$ for $1\leq i\leq k$. In order to construct a $CA(n,G,g)$, we assign row $i$ of $CA(n, k, g)$ to all the vertices having colour $i$ in $G$.
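A minimal sketch of this colouring construction in Python (assuming the proper colouring is supplied as a dictionary from vertices to colours $1,\dots,k$ and the covering array as a list of rows) is:
\begin{verbatim}
def colouring_construction(colouring, ca_rows):
    # colouring: dict vertex -> colour in {1, ..., k}; ca_rows[i-1] is row i of CA(n, k, g).
    return {v: ca_rows[c - 1] for v, c in colouring.items()}
\end{verbatim}
Adjacent vertices receive distinct colours and hence distinct, pairwise qualitatively independent rows, so the resulting assignment is a $CA(n,G,g)$.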
\begin{definition}\rm An orthogonal array $OA(k,g)$ is a $k\times g^2$ array with entries from $\mathbb{Z}_g$ having the properties that
in every two rows, each ordered pair of symbols from $\mathbb{Z}_g$ occurs exactly once.
\end{definition}
\begin{theorem}\label{OA} \cite{Colbourn} If $g$ is a prime or a prime power, then one can construct an $OA(g+1,g)$.
\end{theorem}
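For prime $g$, the construction behind Theorem \ref{OA} can be realised with modular arithmetic; a sketch is given below (for prime powers one would need finite-field arithmetic instead, which we omit):
\begin{verbatim}
import numpy as np

def orthogonal_array(g):
    # OA(g+1, g) for prime g: columns indexed by (a, b) in Z_g x Z_g;
    # rows m = 0, ..., g-1 contain a*m + b (mod g), and the last row contains a.
    cols = [(a, b) for a in range(g) for b in range(g)]
    rows = [[(a * m + b) % g for (a, b) in cols] for m in range(g)]
    rows.append([a for (a, b) in cols])
    return np.array(rows)

oa = orthogonal_array(3)   # a 4 x 9 array; any two rows are qualitatively independent
\end{verbatim}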
The set of rows in an orthogonal array $OA(k,g)$ is a set of $k$ pairwise qualitatively independent vectors from
$\mathbb{Z}_g^{g^2}$. For $g=2$, by Theorem \ref{OA}, there are three qualitatively independent vectors from
$\mathbb{Z}_2^{4}$. Here we give some examples where the lower bound
on $CAN(G_1\Box G_2,g)$ is achieved, that is, $CAN(G_1\Box G_2,g)=\max\{CAN(G_1,g), CAN(G_2,g)\}.$
\begin{example} \rm If $G_1$ and $G_2$ are bicolorable graphs, then $\chi(G_1 \Box G_2)=2$. Let $x_1$ and $x_2$ be two qualitatively independent vectors
in $\mathbb{Z}_g^{g^2}$. Assign vector $x_i$ to all the vertices of $G_1 \Box G_2$ having colour $i$ for $i=1,2$ to get a covering array with $CAN(G_1 \Box G_2, g) = g^2.$
\end{example}
\begin{example}\rm If $G_1$ and $G_2$ are complete graphs, then $CAN(G_1 \Box G_2, g) = \max\{CAN(G_1, g), CAN(G_2, g)\}.$
\end{example}
\begin{example} \rm If $G_1$ is bicolorable and $G_2$ is a complete graph on $k\geq 3$ vertices, then
$CAN(G_1 \Box G_2, g) = CAN(G_2, g)$. In general, if $\chi(G_1) \leq \chi(G_2)$ and $G_2$ is a complete graph, then
$CAN(G_1 \Box G_2, g) = CAN(G_2, g)$.
\end{example}
\begin{example} \rm If $P_m$ is a path of length $m$ and $C_n$ is an odd cycle of length $n$, then $\chi(P_m \Box C_n)=3$. Using Theorem \ref{OA}, we
get a set of three qualitatively independent vectors in $\mathbb{Z}_g^{g^2}$ for $g\geq 2$. Then the colouring construction of covering arrays gives us a covering
array on $P_m\Box C_n$ with $CAN(P_m\Box C_n, g) = g^2$.
\end{example}
\begin{lemma}\cite{HBGP} Let $G_1$ and $G_2$ be graphs and $Q$ be a clique of $G_1\boxtimes G_2$. Then
$Q= p_1(Q)\boxtimes p_2(Q)$, where $p_1(Q)$ and $p_2(Q)$ are cliques of $G_1$ and $G_2$, respectively.
\end{lemma}
Hence a maximum size clique of $G_1\boxtimes G_2$ is the product of maximum size cliques from $G_1$ and $G_2$. That is,
$\omega(G_1\boxtimes G_2)= \omega(G_1)\omega(G_2)$. Using graph homomorphisms, this results in another lower bound on
$CAN(G_1\boxtimes G_2,g)$, namely $CAN(K_{\omega(G_1)\omega(G_2)},g)\leq CAN(G_1\boxtimes G_2,g)$. Following are some examples
where this lower bound can be achieved.
\begin{example} \rm If $G_1$ and $G_2$ are nontrivial bipartite graphs then
$\omega(G_1 \boxtimes G_2)= \chi(G_1\boxtimes G_2)$ which is 4. Hence $CAN(G_1\boxtimes G_2,g)= CAN(K_4,g)$, which is of optimal
size.
\end{example}
\begin{example}\rm If $G_1$ and $G_2$ are complete graphs, then $G_1\boxtimes G_2$ is again a complete graph. Hence
$CAN(G_1\boxtimes G_2,g)= CAN(K_{\omega(G_1\boxtimes G_2)},g)$.
\end{example}
\begin{example}\rm If $G_1$ is a bipartite graph and $G_2$ is a complete graph on $k\geq 2$ vertices, then
$\omega(G_1\boxtimes G_2)= \chi(G_1\boxtimes G_2)= 2k$. Hence $CAN(G_1\boxtimes G_2,g)= CAN(K_{2k},g)$.
\end{example}
\begin{example}\rm If $P_m$ is a path of length $m$ and $C_n$ is an odd cycle of length $n$, then
$\omega(P_m\boxtimes C_n)=4$ and $\chi(P_m \boxtimes C_n)=5$. Here we have $CAN(K_4,g)\leq CAN(P_m\boxtimes C_n,g)\leq CAN(K_5,g)$. For $g\geq 4$, using Theorem \ref{OA}, we
get a set of five qualitatively independent vectors in $\mathbb{Z}_g^{g^2}$. Then the colouring construction of
covering arrays gives us a covering array on $P_m\boxtimes C_n$ with $CAN(P_m\boxtimes C_n, g) = g^2$.
\end{example}
\section{Optimal size covering arrays over the Cartesian product of graphs } \label{Cayley}
\begin{definition} \rm Two graphs $G_1=(V,E)$ and $G_2=(V^{\prime},E^{\prime})$ are said to be isomorphic if there is a bijection $\varphi$ from the vertex set $V$ to the vertex set $V^{\prime}$ such that $(u,v)\in E$ if and only if $(\varphi(u),\varphi(v))\in E^{\prime}$. The mapping $\varphi$ is called an isomorphism. An automorphism of a graph is an isomorphism from the graph to itself.
\end{definition}
\noindent The set of all automorphisms of a graph $G$ forms a group, denoted $Aut(G)$, the automorphism group of $G$.
\begin{theorem}\label{A}
Let $G_1$ be a graph having the property that $Aut(G_1)$ contains a fixed point free automorphism which maps every vertex to its neighbour.
Then for any bicolourable graph $G_2$, $$CAN(G_1 \square G_2,g)=CAN(G_1,g).$$
\end{theorem}
\begin{proof} Consider the set $\Gamma=\{\phi \in Aut(G_1)~|~ \phi(u)\in N(u)-\{u\} \mbox{ for all } u\in V(G_1)\}$ where $N(u)$ denotes the set of neighbours of $u$.
From the assumption, $\Gamma$ is not empty.
Consider a 2-colouring of $G_2$ with colours $0$ and $1$. Let $W_0=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=0\}$ and $W_1=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=1\}$. Note that $W_0$ and $W_1$ partition $V(G_1\square G_2)$ into two parts.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$.
Form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_0$, row $(u,v)$ is row $u$ of $CA(G_1,g)$; otherwise if
$(u,v)\in W_1$, row $(u,v)$ is row $\phi(u)$ of $CA(G_1,g)$. We verify that $C$ is a $CA(G_1\Box G_2, g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$
of $C$. \\ (i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$, then $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 \sim u_2$ and $v_1=v_2$.
When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_0$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $u_1$ and $u_2$ of $CA(G_1,g)$ respectively.
As $u_1\sim u_2$, rows $u_1$ and $u_2$ are
qualitatively independent in $CA(G_1,g)$. When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_1$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $\phi(u_1)$ and $\phi(u_2)$ of $CA(G_1,g)$ respectively. As $\phi(u_1)\sim \phi(u_2)$, rows $\phi(u_1)$ and $\phi(u_2)$ are
qualitatively independent in $CA(G_1,g)$. Therefore, rows $(u_1,v_1)$ and $(u_2,v_2)$ are
qualitatively independent in $C$.\\
(ii) Let $(u_1,v_1)\in W_0$ and $(u_2,v_2)\in W_1$. In this case, $ (u_1,v_1) \sim (u_2,v_2)$ if and only if $u_1=u_2$ and $v_1\sim v_2$. Let $u_1=u_2=u$. Rows $(u,v_1)$ and $(u,v_2)$ are rows $u$ and $\phi(u)$ of $CA(G_1,g)$.
As $\phi $ is a fixed point free automorphism that maps every vertex to its neighbour, $u$ and $\phi(u)$ are adjacent in $G_1$. Therefore, the rows indexed by $u$ and $\phi(u)$ are qualitatively independent
in $CA(G_1,g)$; therefore, rows $(u_1,v_1)$ and $(u_2,v_2)$ are
qualitatively independent in $C$.\\
\end{proof}
\begin{definition}\rm
Let $H$ be a finite group and $S$ be a subset of $H\smallsetminus \{id\}$ such that $S = -S$ (i.e., $S$ is closed under inverse). The Cayley graph of $H$ generated by $S$, denoted
$Cay(H,S)$, is the undirected graph $G=(V,E)$ where $V=H$ and $E=\{(x,sx)~|~x\in H, s\in S\}$. The Cayley graph is connected if and only if $S$ generates $H$.
\end{definition}
\noindent Throughout this article, by $S = -S$ we mean that $S$ is closed under taking inverses with respect to the given group operation.
\begin{definition}\rm
A circulant graph $G(n,S)$ is a Cayley graph on $\mathbb{Z}_n$. That is, it is a graph whose vertices are labelled $\{0,1,\ldots,n-1\}$, with two vertices labelled $i$ and
$j$ adjacent iff $i-j ~(\mbox{mod}~n)\in S$, where $S\subset \mathbb{Z}_n$ with $S=-S$ and $0\notin S$.
\end{definition}
\begin{corollary}
Let $G_1(n,S)$ be a circulant graph and $G_2$ be a bicolorable graph, then $CAN(G_1(n,S) \Box G_2, g) = CAN(G_1(n,S), g)$.
\end{corollary}
\begin{proof} Let $i$ and $j$ be any two adjacent vertices in $G_1(n,S)$. We define a mapping $\phi~:~\mathbb{Z}_n\rightarrow \mathbb{Z}_n$ as follows:
\begin{center}
$\phi(k) = k+j-i ~(\mbox{mod}~ n)$
\end{center}
It is easy to verify that $\phi$ is an automorphism
and it sends every vertex to its neighbour. Hence $\phi \in \Gamma$ and the result
follows.
\end{proof}
For a group $H$ and $S \subseteq H$, we denote conjugation of $S$ by elements of itself as
\begin{center}
$S^S = \{ ss's^{-1} | s, s'\in S\}$
\end{center}
\begin{corollary}
Let $H$ be a finite group and $S \subseteq H\smallsetminus \{id\}$ is a generating set for $H$ such that $S = -S$ and
$S^S = S$. Then for
$G_1 = Cay(H, S)$ and any bicolorable graph $G_2$,
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1, g)$
\end{center}
\end{corollary}
\begin{proof}
We will show that there exists a $\phi \in Aut(G_1)$ that is fixed point free.
Define $\phi : H \rightarrow H$ as $\phi(h) = sh$ for some $s\in S$.
It is easy to check that $\phi$ is bijective and, since $s \neq id$, it is fixed point free. To prove that it is a
graph homomorphism, we need to show that it is an adjacency preserving map. It is sufficient to prove that $(h, s'h)\in E(G_1)$
implies $(sh, ss'h) \in E(G_1)$. As $ss'h = ss's^{-1}sh$ and $ss's^{-1} \in S$, we have $(sh, ss'h)\in E(G_1)$.
Hence $\phi \in \Gamma $ and Theorem \ref{A} implies the result.
\end{proof}
\begin{example}\rm
For any abelian group $H$ and any generating set $S$ such that $S = -S$ and $id \notin S$, we always have $S^S = S$.
\end{example}
\begin{example}\rm
For $H = Q_8 = \{\pm1, \pm i, \pm j, \pm k\}$ and $S = \{\pm i, \pm j\}$, we have $S^S = S$ and $S = -S$.
\end{example}
\begin{example}\rm
For $H= D_8 = \langle a, b | a^2 = 1 = b^4, aba = b^3\rangle$ and $S= \{ab, ba\}$, we have $S^S = S$ and $S = -S$.
\end{example}
\begin{example}\rm
For $H = S_n$ and $S$ the set of all cycles of even length, we have $S^S = S$ and $S = -S$.
\end{example}
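The conditions in the examples above can be checked mechanically. The following Python sketch (with quaternions encoded as $4$-tuples, an illustrative representation) verifies $S=-S$ and $S^S=S$ for the $Q_8$ example with $S=\{\pm i,\pm j\}$:
\begin{verbatim}
from itertools import product

def qmul(p, q):
    # Hamilton product of quaternions represented as (a, b, c, d) = a + bi + cj + dk.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(p):
    a, b, c, d = p               # inverse of a unit quaternion is its conjugate
    return (a, -b, -c, -d)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
S = {i, qinv(i), j, qinv(j)}                                       # S = {+-i, +-j}
print(S == {qinv(s) for s in S})                                   # S = -S : True
print(S == {qmul(qmul(s, t), qinv(s)) for s, t in product(S, S)})  # S^S = S : True
\end{verbatim}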
\begin{theorem}
Let $H$ be a finite group and $S$ be a generating set for $H$ such that
\begin{enumerate}
\item $S = -S$ and $id \notin S$
\item $S^S = S$
\item there exist $s_1$ and $s_2$ in $S$ such that $s_1 \neq s_2$ and $s_1s_2 \in S$
\end{enumerate}
then for $G_1= Cay(H, S)$ and any three colourable graph $G_2$
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1,g)$
\end{center}
\end{theorem}
\begin{proof} Define three distinct automorphisms of $G_1$, $\sigma_{i} : H\rightarrow H$, for $i=0,1,2$, as $\sigma_0(u)=u$, $\sigma_1(u)=s_1u$, $\sigma_2(u)=s_2^{-1}u$.
Consider a three colouring of $G_2$ using the colours $0, 1$ and $2$. Let $W_i=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=i\}$ for $i=0,1,2$.
Note that $W_0 $, $W_1$, and $W_2$ partition $V(G_1\square G_2)$ into three parts.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$. Using $CA(G_1,g)$, form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_i$, row $(u,v)$ is row $\sigma_i(u)$ of $CA(G_1,g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$ of $C$. \\
(i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 \sim u_2$ and $v_1=v_2$.
When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_0$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $u_1$ and $u_2$ of $CA(G_1,g)$.
As $u_1 \sim u_2$ in $G_1$, the rows $u_1$ and $u_2$ are qualitatively independent in $CA(G_1,g)$. Now let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_1$ (resp. $W_2$). Similarly, as $s_1u_1\sim s_1u_2$ (resp. $s_2^{-1}u_1 \sim s_2^{-1}u_2$),
the rows indexed by
$s_1u_1$ and $s_1u_2$ (resp. $s_2^{-1}u_1$ and $s_2^{-1}u_2$) are qualitatively independent in $CA(G_1,g)$.
Hence the rows
$(u_1,v_1)$ and $(u_2,v_2)$ are qualitatively independent in $C$.\\
(ii) Let $(u_1,v_1)\in W_i$ and $(u_2,v_2)\in W_j$ for $0\leq i\neq j\leq 2$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 = u_2$ and $v_1\sim v_2$.
Let $u_1=u_2=u$.\\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_1$, then rows $(u,v_1)$ and $(u,v_2)$ are rows $u$ and $s_1u$ of $CA(G_1,g)$ respectively. Then as $u\sim s_1u$ the rows indexed by $(u,v_1)\in W_0$ and $(u,v_2)\in W_1$ are qualitatively independent in $C$. \\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_2$. Then, as $u\sim s_2^{-1}u$, the rows indexed by $(u,v_1)\in W_0$ and $(u,v_2)\in W_2$ are qualitatively independent in $C$. \\
Let $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$. Then, as $s_1u\sim s_2^{-1}u$, the rows indexed by $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$ are qualitatively independent in $C$.
\end{proof}
\begin{theorem}
Let $H$ be a finite group and $S$ be a generating set for $H$ such that
\begin{enumerate}
\item $S = -S$ and $id \notin S$
\item $S^S = S$
\item there exist $s_1$ and $s_2$ in $S$ such that $s_1 \neq s_2$ and $s_1s_2, s_1s_2^{-1}\in S$
\end{enumerate}
then for $G_1 = Cay(H, S)$ and any four colourable graph $G_2$
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1, g)$
\end{center}
\end{theorem}
\begin{proof}
Define four distinct automorphisms of $G_1$, $\sigma_i:H\rightarrow H$, $ i=0,1,2,3$ as $\sigma_0(u)=u$, $\sigma_1(u)=s_1u$, $\sigma_2(u)=s_2u$ and
$\sigma_3(u)=s_1s_2 u$. Consider a four colouring of $G_2$ using the colours $0, 1, 2$ and $3$. Let $W_i=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=i\}$ for $i=0,1,2,3$.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$. Form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_i$, row $(u,v)$ is row $\sigma_i(u)$ of $CA(G_1,g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$ of $C$. \\
(i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$. It is easy to verify that $(u_1,v_1)$ and $(u_2,v_2)$ are qualitatively independent.\\
(ii) Let $(u_1,v_1)\in W_i$ and $(u_2,v_2)\in W_j$ for $0 \leq i\neq j\leq 3$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 = u_2$ and $v_1\sim v_2$.
Let $u_1=u_2=u$.\\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_i$ for $i=1,2,3$, then row $(u,v_1)$ and $(u,v_2)$ are
rows $u$ and $\sigma_i(u)$ of $CA(G_1,g)$ respectively.
Then as $u\sim \sigma_i(u)$ the rows $(u,v_1)$ and $(u,v_2)$ are qualitatively independent. \\
\noindent Let $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$. Then rows $(u,v_1)$ and $(u,v_2)$ are rows $s_1u$ and $s_2u$ of $CA(G_1,g)$. As $s_1u = s_1s_2^{-1}s_2u$ and $s_1s_2^{-1}\in S$, we get $s_1u\sim s_2u$. Hence the rows $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$ are qualitatively independent. Similarly, as $s_1u=s_1 s_2^{-1}s_1^{-1}s_1s_2u$ and $s_1 s_2^{-1}s_1^{-1}\in S$ being $S^S=S$, we have
$s_1u\sim s_1s_2u$. Hence the rows $(u,v_1)\in W_1$ and $(u,v_2)\in W_3$ are qualitatively independent. \\
Let $(u,v_1)\in W_2$ and $(u,v_2)\in W_3$. As $s_2u=s_1^{-1}s_1s_2u$ and $s_1^{-1}\in S$, we get $s_2u\sim s_1s_2u$.
Hence the rows $(u,v_1)\in W_2$ and $(u,v_2)\in W_3$ are qualitatively independent.
\end{proof}
\begin{example}
$H = Q_8$ and $S= \{\pm i, \pm j, \pm k\}$. Here $s_1=i$ and $s_2=j$.
\end{example}
\begin{example}
$H = Q_8$ and $S= \{-1,\pm i, \pm j\}$. Here $s_1=-1$ and $s_2=i$.
\end{example}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\small{
\matrix[matrix of math nodes, anchor=south west,
nodes={circle, draw, minimum size = 0.4cm},
column sep = {0.5cm},
row sep={0.35cm}]
{
& |(0)| & & |(2)| & & \\
& & & & & |(3)| \\
|(4)| & & & & & & |(-4)| \\
& &|(5)| & & |(6)| & & & & |(00)| & & |(02)| & & \\
& & &|(7)| & & & & & & & & & |(03)| \\
& & & & & & & |(04)| & & & & & & |(-04)| \\
& & |(1)| & & |(i)| & & & & &|(05)| & & |(06)| & \\
& & & & & & |(j)| & & & &|(07)| & & \\
&|(-k)| & & & & & & |(k)| \\
& & &|(-j)| & & |(-1)| & &\\
& & & &|(-i)| & & &\\
};}
\begin{scope}[style=thick]
\foreach \from/\to/\weight/\where
in { 4/2/1/above, 4/3/1/above, 4/-4/1/right, 4/7/1/above, 4/5/1/right,
0/5/1/above, 0/7/1/above, 0/6/1/above, 0/3/1/above, 0/2/1/above,
2/7/1/above, 2/6/1/right, 2/-4/1/above,
3/5/1/right, 3/6/1/below, 3/-4/1/right, -4/7/1/above, -4/5/1/above,
6/7/1/below, 6/5/1/below,
-k/i/1/above, -k/j/1/above, -k/k/1/right, -k/-i/1/above, -k/-j/1/right,
1/-j/1/above, 1/-i/1/above, 1/-1/1/above, 1/j/1/above, 1/i/1/above,
i/-i/1/above, i/-1/1/right, i/k/1/above,
j/-j/1/right, j/-1/1/below, j/k/1/right, k/-i/1/above, k/-j/1/above,
-1/-i/1/below, -1/-j/1/below,
04/02/1/above, 04/03/1/above, 04/-04/1/right, 04/07/1/above, 04/05/1/right,
00/05/1/above, 00/07/1/above, 00/06/1/above, 00/03/1/above, 00/02/1/above,
02/07/1/above, 02/06/1/right, 02/-04/1/above,
03/05/1/right, 03/06/1/below, 03/-04/1/right, -04/07/1/above, -04/05/1/above,
06/07/1/below, 06/05/1/below}
\draw (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in { 4/-k/1/above, 0/1/1/above, 2/i/1/right, 3/j/1/above, -4/k/1/right,
6/-1/1/above, 5/-j/1/above, 7/-i/1/above}
\draw[gray] (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in { 04/-k/1/above, 00/1/1/above, 02/i/1/right, 03/j/1/above, -04/k/1/right,
06/-1/1/above, 05/-j/1/above, 07/-i/1/above}
\draw[red] (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in{ 4/04/1/above, 0/00/1/above, 2/02/1/right, 3/03/1/above, -4/-04/1/right,
6/06/1/above, 5/05/1/above, 7/07/1/above}
\draw[blue] (\from) to [->] (\to);
\end{scope}
\end{tikzpicture}
\caption{$Cay(Q_8, \{-1,\pm i, \pm j\})\Box K_3$}
\end{center}
\end{figure}
\section{Approximation algorithm for covering array on graph}\label{Approx}
In this section, we present an approximation algorithm for construction of covering array on a given graph $G=(V,E)$ with
$k>1$ prime factors with respect to the Cartesian product.
In 1988, G. Seroussi and N. H. Bshouty proved that the decision problem of whether there exists a binary
covering array of strength $t\geq 2$ and size $2^t$ on a given $t$-uniform hypergraph is NP-complete \cite{VS}.
Also, construction of
an optimal size covering array on a graph is at least as hard as finding its optimal size.
\noindent We give an approximation algorithm for the Cartesian product with approximation ratio $O(\log_s |V|)$, where $s$ can be obtained from the
number of symbols corresponding to each vertex. The following result by Bush is used in our approximation algorithm.
\begin{theorem}\rm{\cite{GT}}\label{B} Let $g$ be a positive integer. If $g$ is written in standard form: $$g=p_1^{n_1}p_2^{n_2}\ldots p_l^{n_l}$$ where $p_1,p_2,\ldots,p_l$ are distinct primes, and if
$$r=\mbox{min}(p_1^{n_1},p_2^{n_2},\ldots, p_l^{n_l}),$$ then one can construct $OA(s,g)$ where
$s =1+ \max{(2,r)}$.
\end{theorem}
We are given a weighted connected graph $G=(V,E)$ with each vertex having the same weight $g$.
In our approximation algorithm, we use a technique from \cite{HBGP} for prime factorization of $G$ with respect to the Cartesian product.
This can be done in $O(E \log V$) time. For details see \cite{HBGP}. After obtaining prime factors of $G$, we construct
a strength-two covering array $C_1$ on the maximum size prime factor. Then,
using the rows of $C_1$, we produce a covering array on $G$.\\
\noindent\textbf{APPROX $CA(G,g)$:}
\\\textbf{Input:} A weighted connected graph $G=(V,E)$ with $k>1$ prime factors with respect to the Cartesian product. Each vertex has weight $g$; $g=p_1^{n_1}p_2^{n_2}\ldots p_l^{n_l}$ where
$p_1$, $p_2, \ldots, p_l$ are primes.
\\\textbf{Output:} $CA(ug^2,G,g)$.
\\\textbf{Step 1:} Compute $s = 1 + \mbox{max}\{2,r\}$ where $r=\mbox{min}(p_1^{n_1},p_2^{n_2},\ldots, p_l^{n_l})$.
\\\textbf{Step 2:} Factorize $G$ into prime factors with respect to the Cartesian product;
say $G = \Box_{i=1} ^{k} G_i$ where $G_i= (V_i,E_i)$ is a prime factor.
\\\textbf{Step 3:} Suppose $V_1\geq V_2\geq \ldots\geq V_k$. For prime factor $G_1=(V_1, E_1)$ \textbf{do}
\begin{enumerate}
\item Find the smallest positive integer $u$ such that $s^u\geq V_1$. That is, $u=\lceil \log_s V_1\rceil$.
\item Let $OA(s,g)$ be an orthogonal array and denote its $i$th row by $R_i$ for $i=1,2,\ldots,s$. In total, $s^u$ row vectors $(R_{i_1}, R_{i_2},\ldots, R_{i_u})$, each of length $ug^2$, are formed by horizontally concatenating $u$ rows
$R_{i_1}$, $ R_{i_2}$, $\ldots,$ $ R_{i_u}$ where $1\leq i_1, \ldots, i_u\leq s$.
\item Form a $V_1 \times ug^2$ array $C_1$ by choosing any $V_1$ rows out of the $s^u$ concatenated row vectors.
Each row in the array corresponds to a vertex in the graph $G_1$. \end{enumerate}
\textbf{Step 4:}
From $C_1$ we construct a $V\times ug^2$ array $C$. Index the rows of $C$ by $(u_1,u_2,\ldots,u_k)$, $u_i\in V(G_i)$.
Set the row $(u_1,u_2,\ldots,u_k)$ to be identical to the row corresponding to $u_1+u_2+\ldots+u_k ~ \mbox{mod } V_1$ in $C_1$. Return $C$.
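A sketch of Steps 3 and 4 in Python is given below (it assumes that the orthogonal array $OA(s,g)$ and the vertex counts of the prime factors, with the largest factor first, are already available; the function and variable names are ours):
\begin{verbatim}
from itertools import product
import numpy as np

def approx_ca(oa_rows, sizes):
    # oa_rows: OA(s, g) as an s x g^2 array; sizes[i] = |V_i|, with sizes[0] largest.
    s, v1 = oa_rows.shape[0], sizes[0]
    u = 1
    while s ** u < v1:                   # smallest u with s^u >= V_1
        u += 1
    # Step 3: C_1 has one concatenated row vector per vertex of the largest factor.
    c1 = np.array([np.concatenate([oa_rows[i] for i in combo])
                   for _, combo in zip(range(v1), product(range(s), repeat=u))])
    # Step 4: row of vertex (u_1, ..., u_k) is row (u_1 + ... + u_k) mod V_1 of C_1.
    return {verts: c1[sum(verts) % v1]
            for verts in product(*[range(n) for n in sizes])}
\end{verbatim}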
\vspace{1cm}\begin{theorem}
Algorithm APPROX $CA(G,g)$ is a polynomial-time $\rho(V)$-approximation algorithm for the covering array on graphs problem, where
$$\rho(V) \leq \lceil \log_s \frac{V}{2^{k-1}} \rceil.$$
\end{theorem}
\begin{proof}
\textbf{Correctness:} The verification that $C$ is a $CA(ug^2,G,g)$ is straightforward. First, we show that $C_1$ is a covering array of strength two with $ |V_1|$ parameters.
Pick any two distinct rows of $C_1$ and consider the sub matrix induced by these two rows. In the sub matrix, there must be a column $(R_i, R_j)^T$ where $i \neq j$.
Hence each ordered pair of values appears at least once.
Now to show that $C$ is a covering array on $G$, it is sufficient to show that the rows in $C$ for any pair of adjacent vertices $u=(u_1,u_2,\ldots,u_k)$ and $v=(v_1,v_2,\ldots,v_k)$ in $G$ are qualitatively
independent. We know $u$ and $v$ are adjacent if and only if $u_iv_i\in E(G_i)$ for exactly one index $1\leq i\leq k$ and
$u_j=v_j$ for $j\neq i$.
Hence $ u_1+u_2+ \ldots+u_k \neq v_1+v_2+\ldots+v_k ~ \mbox{mod } V_1$ and, in Step 4,
two distinct rows from $C_1$ are assigned to the vertices $u$ and $v$.\\
\textbf{Complexity :} The average order of $l$ in Step 1 is $\ln\ln g$ \cite{Riesel}. Thus, the time to find $s$ in Step 1 is $O(\ln \ln g)$.
The time to factorize graph $G=(V,E)$ in Step 2 is $O(E \log V)$. In Step 3(1), the smallest positive integer $u$ can be found in
$O(\log_s V_1)$ time. In Step 3(2), forming one row vector requires $\log_s V_1$ assignments; hence, forming $V_1$ row vectors requires $O(V_1\log V_1)$ time.
Thus the total running time of APPROX $CA(G,g)$ is $O(E \log V+\ln \ln g)$. Observing that, in practice, $\ln \ln g \leq E \log V$, we can restate the running time of
APPROX $CA(G,g)$ as $O(E \log V)$. \\
\textbf{Approximation ratio:} We show that APPROX $CA(G,g)$ returns a covering array that is at most $\rho(V)$ times the size of an optimal covering array on $G$.
We know the smallest $n$ for which a $CA(n,G,g)$ exists is $g^2$, that is, $CAN(G,g)\geq g^2$. The algorithm returns a covering array on $G$ of size $ug^2$ where
$$u=\lceil \log_s V_1\rceil.$$ As $G$ has $k$ prime factors, the maximum number of vertices in a factor can be $\frac{V}{2^{k-1}}$, that is, $V_1\leq \frac{V}{2^{k-1}}$.
Hence $$u= \lceil \log_s V_1\rceil \leq \lceil \log_s \frac{V}{2^{k-1}}\rceil.$$ By relating the size of the covering array returned to the optimal size, we obtain our approximation ratio
$$\rho(V)\leq \lceil \log_s \frac{V}{2^{k-1}}\rceil.$$ \end{proof}
\section{Conclusions} One motivation for introducing a graph structure was to optimise covering arrays for their use in testing software and networks based on internal structure. Our primary
concern in this paper is with constructions that make optimal covering arrays on large graphs from smaller ones. Large graphs are obtained by considering either the Cartesian, the direct, the strong, or the lexicographic product of small graphs. Using graph homomorphisms, we have
$$\max_{i=1,2}\{CAN(G_i,g)\}\leq CAN(G_1\Box G_2,g)\leq CAN( \max_{i=1,2}\{\chi(G_i)\},g).$$ We gave several classes of Cayley graphs where the lower bound on the covering array number $CAN(G_1\Box G_2,g)$ is achieved. It is an interesting problem to find other classes of graphs for which the lower bound on the covering array number of the product graph is achieved. We gave an approximation algorithm
for the construction of a covering array on a graph $G$ having more than one prime factor with respect to the Cartesian product. Clearly, another area to explore is to consider in detail the other graph products, that is, the direct, the strong, and the lexicographic product.
|
\section{Introduction}\label{sec:introduction}
Galaxy clusters are the ultimate result of the hierarchical bottom-up process of cosmic structure formation. Hosted in massive dark matter haloes that formed through subsequent phases of mass accretion and mergers, galaxy clusters carry information on the underlying cosmological scenario as well as the astrophysical processes that shape the properties of the intra-cluster medium (ICM) \citep[for a review, see e.g.][]{2005RvMP...77..207V,2011ARA&A..49..409A,2012ARA&A..50..353K}.
Being at the top of the pyramid of cosmic structures, galaxy clusters are mostly found in the late-time universe. These can be observed using a variety of techniques that probe either the distribution of the hot intra-cluster gas through its X-ray emission \citep[see e.g.][]{2005ApJ...628..655V,2010MNRAS.407...83E,2016A&A...592A...1P,2021A&A...650A.104C}, the scattering of the Cosmic Microwave Background radiation (CMB) due to the Sunyaev-Zeldovich effect \citep[see e.g.][]{2009ApJ...701...32S,2013ApJ...765...67M,2013ApJ...763..127R,2014A&A...571A..29P,2015ApJS..216...27B}, or galaxy overdensities and the gravitational lensing effect caused by the cluster's gravitational mass on background sources \citep{2016ApJS..224....1R,2019MNRAS.485..498M,2011ApJ...738...41U,2012ApJS..199...25P}.
The mass distribution of galaxy clusters primarily depends on the dynamical state of the system. Observations of relaxed clusters have shown that the matter density profile at large radii is consistent with the universal Navarro-Frenk-White profile \citep[NFW,][]{NFW1997}, while deviations have been found in the inner regions \citep[][]{2013ApJ...765...24N,2017ApJ...851...81A,2017ApJ...843..148C,2020A&A...637A..34S}. In relaxed systems, the gas falls in the dark matter dominated gravitational potential and thermalises through the propagation of shock waves. This sets the gas in a hydrostatic equilibrium (HE) that is entirely controlled by gravity. Hence, aside from astrophysical processes affecting the baryon distribution in the cluster core, the thermodynamic properties of the outer ICM are expected to be self-similar \citep[see e.g.][]{2019A&A...621A..39E,2019A&A...621A..41G,2021ApJ...910...14G}. This is not the case for clusters undergoing major mergers, for which the virial equilibrium is strongly altered \citep[see e.g.][]{2016ApJ...827..112B}. Such systems exhibit deviations from self-similarity such that scaling relations between the ICM temperature, the cluster mass and X-ray luminosity differ from those of relaxed clusters \citep[see e.g.][]{2009MNRAS.399..410P,2011ApJ...729...45R,2019MNRAS.490.2380C}.
A direct consequence of merger events is that the mass estimates inferred assuming the HE hypothesis or through scaling relations may be biased. This may induce systematic errors on cosmological analyses that rely upon accurate cluster mass measurements. On the other hand, merging clusters can provide a unique opportunity to investigate the physics of the ICM \citep{2007PhR...443....1M,2016JPlPh..82c5301Z} and test the dark matter paradigm \citep[as in the case of the Bullet Cluster][]{2004ApJ...604..596C,2004ApJ...606..819M}. This underlies the importance of identifying merging events in large cluster survey catalogues.
The identification of unrelaxed clusters relies upon a variety of proxies specifically defined for each type of observations \citep[for a review see e.g.][]{2016FrASS...2....7M}. As an example, the detection of radio haloes and relics in clusters is usually associated with the presence of mergers. Similarly, the offset between the position of the brightest central galaxy and the peak of the X-ray surface brightness, or the centroid of the SZ signal are used as proxy of merger events. This is because the merging process alters differently the distribution of the various matter constituents of the cluster.
The growth of dark matter haloes through cosmic time has been investigated extensively in a vast literature using results from N-body simulations. \citet{2003MNRAS.339...12Z} found that haloes build up their mass through an initial phase of fast accretion followed by a slow one. \citet{2007MNRAS.379..689L} have shown that during the fast-accretion phase the mass assembly occurs primarily through major mergers, that is, mergers in which the mass of the less massive progenitor is at least one third of the more massive one. Moreover, they found that the greater the mass of the halo, the later the time when the major merger occurred. In contrast, slow accretion is a quiescent phase dominated by minor mergers. Subsequent studies have mostly focused on the relation between the halo mass accretion history and the concentration parameter of the NFW profile \citep[see e.g.][]{2007MNRAS.381.1450N,2009ApJ...707..354Z,2012MNRAS.427.1322L,2016MNRAS.460.1214L,2017MNRAS.466.3834L,2019MNRAS.485.1906R}. Recently, \citet{Wang2020} have shown that major mergers have a universal impact on the evolution of the median concentration. In particular, after a large initial response, in which the concentration undergoes a large excursion, the halo recovers a more quiescent dynamical state within a few dynamical times. Surprisingly, the authors have also found that even minor mergers can have a non-negligible impact on the mass distribution of haloes, contributing to the scatter of the concentration parameter.
The use of concentration as a proxy of cluster mergers is nevertheless challenging for multiple reasons. Firstly, the concentration exhibits a large scatter across the merger phase and the value inferred from the analysis of galaxy cluster observations may be sensitive to the quality of the NFW-fit. Secondly, astrophysical processes may alter the mass distribution in the inner region of the halo, thus resulting in values of the concentration that differ from those estimated from N-body simulations \citep[see e.g.][]{2010MNRAS.406..434M,2011MNRAS.416.2539K}, which could be especially the case for merging clusters.
Alternatively, a non-parametric approach to characterise the mass distribution in haloes has been proposed by \citet{Balmes2014} in terms of simple mass ratios, dubbed halo {\it sparsity}:
\begin{equation}\label{sparsdef}
s_{\Delta_1,\Delta_2} = \frac{M_{\Delta_1}}{M_{\Delta_2}},
\end{equation}
where $M_{\Delta_1}$ and $M_{\Delta_2}$ are the masses within spheres enclosing respectively the overdensity $\Delta_1$ and $\Delta_2$ (with $\Delta_1<\Delta_2$) in units of the critical density (or equivalently the background density). This statistic presents a number of interesting properties that overcome many of the limitations that concern the concentration parameter. First of all, the sparsity can be estimated directly from cluster mass estimates without having to rely on the assumption of a specific parametric profile, such as the NFW profile. Secondly, for any given choice of $\Delta_1$ and $\Delta_2$, the sparsity is found to be weakly dependent on the overall halo mass with a much smaller scatter than the concentration \citep{Balmes2014,Corasaniti2018,Corasaniti2019}. Thirdly, these mass ratios retain cosmological information encoded in the mass profile, thus providing an independent cosmological proxy. Finally, the halo ensemble average sparsity can be predicted from prior knowledge of the halo mass functions at the overdensities of interest, which allows one to infer cosmological parameter constraints from cluster sparsity measurements \citep[see e.g.][]{Corasaniti2018,Corasaniti2021}.
As haloes grow from inside out such that newly accreted mass is redistributed in concentric shells within a few dynamical times \citep[see e.g.][for a review]{2011MNRAS.413.1373W,2011AdAst2011E...6T}, it is natural to expect that major mergers can significantly disrupt the onion structure of haloes and result in values of the sparsity that significantly differ from those of the population of haloes that have had sufficient time to rearrange their mass distribution and reach the virial equilibrium.
Here, we perform a thorough analysis of the relation between halo sparsity and the halo mass accretion history using numerical halo catalogues from large volume high-resolution N-body simulations. We show that haloes which undergo a major merger in their recent history form a distinct population of haloes characterised by large sparsity values. Quite importantly, we are able to fully characterise the statistical distributions of such populations in terms of the halo sparsity and the time of their last major merger. Thus, building upon these results, we have developed a statistical tool which uses cluster sparsity measurements to test whether a galaxy cluster has undergone a recent major merger and if so when such event took place.
The paper is organised as follows. In Section~\ref{halocat} we describe the numerical halo catalogues used in the analysis, while in Section~\ref{sparsmah} we present the results of the study of the relation between halo sparsity and major mergers. In Section~\ref{calistat} we present the statistical tests devised to identify the imprint of mergers in galaxy clusters and discuss the statistical estimation of the major merger epoch from sparsity measurements. In Section~\ref{cosmo_imp} we discuss the implications of these results regarding cosmological parameter estimation studies using halo sparsity. In Section~\ref{testcase} we validate our approach using similar data, assess its robustness to observational biases and describe the application of our methodology to the analysis of known galaxy clusters. Finally, in Section~\ref{conclu} we present our conclusions.
\section{Numerical Simulation Dataset}\label{halocat}
\subsection{N-body Halo catalogues}
We use N-body halo catalogues from the MultiDark-Planck2 (MDPL2) simulation \citep{Klypin2016} which consists of $3840^3$ particles in $(1 \,h^{-1}\,\textrm{Gpc})^3$ comoving volume (corresponding to a particle mass resolution of $m_p=1.51\cdot 10^{9}\,h^{-1} \text{M}_{\odot}$) of a flat $\Lambda$CDM cosmology run with the \textsc{Gadget-2}\footnote{\href{https://wwwmpa.mpa-garching.mpg.de/gadget/}{https://wwwmpa.mpa-garching.mpg.de/gadget/}} code \citep{2005MNRAS.364.1105S}. The cosmological parameters have been set to the values of the \textit{Planck} cosmological analysis of the Cosmic Microwave Background (CMB) anisotropy power spectra \citep{2014A&A...571A..16P}: $\Omega_m=0.3071$, $\Omega_b=0.0482$, $h=0.6776$, $n_s=0.96$ and $\sigma_8=0.8228$. Halo catalogues and merger trees at each redshift snapshot were generated using the friend-of-friend (FoF) halo finder code \textsc{rockstar}\footnote{\href{https://code.google.com/archive/p/rockstar/}{https://code.google.com/archive/p/rockstar/}} \citep{Behroozi2013a,Behroozi2013b}. We consider the default set up with the detected haloes consisting of gravitationally bound particles only. We specifically focus on haloes in the mass range of galaxy groups and clusters corresponding to $M_{200\text{c}}>10^{13}\,h^{-1} \text{M}_{\odot}$.
For each halo in the MDPL2 catalogues we build a dataset containing the following set of variables: the halo masses $M_{200\text{c}}$, $M_{500\text{c}}$ and $M_{2500\text{c}}$ estimated from the number of N-body particles within spheres enclosing overdensities $\Delta=200,500$ and $2500$ (in units of the critical density) respectively; the scale radius, $r_s$, of the best-fitting NFW profile; the virial radius, $r_{\rm vir}$; the ratio of the kinetic to the potential energy, $K/U$; the offset of the density peak from the average particle position, $x_{\rm off}$; and the scale factor (redshift) of the last major merger, $a_{\rm LMM}$ ($z_{\rm LMM}$). From these variables we additionally compute the following set of quantities: the halo sparsities $s_{200,500}$, $s_{200,2500}$ and $s_{500,2500}$; the offset in units of the virial radius, $\Delta_r=x_{\rm off}/r_{\rm vir}$, and the concentration parameter of the best-fit NFW profile, $c_{200\text{c}}=r_{200\text{c}}/r_s$, with $r_{200\text{c}}$ being the radius enclosing an overdensity $\Delta=200$ (in units of the critical density). In our analysis we also use the mass accretion history of MDPL2 haloes.
In addition to the MDPL2 catalogues, we also use data from the Uchuu simulations \citep{Ishiyama2021}, which cover a larger cosmic volume with higher mass resolution. We use these catalogues to calibrate the sparsity statistics that provides the base for practical applications of halo sparsity measurements as cosmic chronometers of galaxy cluster mergers. The Uchuu simulation suite consists of N-body simulations of a flat $\Lambda$CDM model realised with \textsc{GreeM} code \citep{2009PASJ...61.1319I,2012arXiv1211.4406I} with cosmological parameters set to the values of a later \textit{Planck}-CMB cosmological analysis \citep{2016A&A...594A..13P}: $\Omega_m=0.3089$, $\Omega_b=0.0486$, $h=0.6774$, $n_s=0.9667$ and $\sigma_8=0.8159$. In particular, we use the halo catalogues from the $(2\,\textrm{Gpc}\,h^{-1})^3$ comoving volume simulation with $12800^3$ particles (corresponding to a particle mass resolution of $m_p=3.27\cdot 10^{8}\,h^{-1}\text{M}_{\odot}$) that, as for MDPL2, were also generated using the \textsc{rockstar} halo finder.
It is important to stress that the major merger epoch to which we refer in this work is that defined by the \textsc{rockstar} halo finder, that is the time when the particles of the merging halo and those of the parent one are within the same iso-density contour in phase-space. Hence, this should not be confused with the first core-passage time usually estimated in Bullet-like clusters.
\begin{table}
\centering
\caption{Characteristics of the selected halo samples at $z=0,0.2,0.4$ and $0.6$ (columns from left to right). Quoted in the rows are the number of haloes in the samples and the redshift of the last major merger $z_{\rm LMM}$ used to select the haloes for each sample.}
\begin{tabular}{ccccc}
\hline
\hline
& \multicolumn{4}{c}{Merging Halo Sample ($T>-1/2$)} \\
\hline
\hline
& $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\
\hline
$\#$-haloes & $23164$ & $28506$ & $31903$ & $32769$ \\
$z_{\rm LMM}$ & $<0.113$ & $<0.326$ & $<0.540$ & $<0.754$ \\
\hline
\hline
& \multicolumn{4}{c}{Quiescent Halo Sample ($T<-4$)} \\
\hline
\hline
& $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\
\hline
$\#$-haloes & $199853$ & $169490$ & $140464$ & $113829$ \\
$z_{\rm LMM}$ & $>1.15$ & $>1.50$ & $>1.86$ & $>2.22$ \\
\hline
\end{tabular}
\label{tab:samples}
\end{table}
\subsection{Halo Sample Selection}\label{haloeselection}
We aim to study the impact of merger events on the halo mass profile. To this purpose we focus on haloes which undergo their last major merger at different epochs. In such a case, it is convenient to introduce a time variable that characterises the backward time interval between the redshift $z$ (scale factor $a$) at which a halo is investigated and that of its last major merger $z_{\rm LMM}$ ($a_{\rm LMM}$) in units of the dynamical time \citep{Jiang2016, Wang2020},
\begin{equation}\label{backwardtime}
T(z|z_\text{LMM})= \frac{\sqrt{2}}{\pi}\int_{z_{\text{LMM}}}^{z}\frac{\sqrt{\Delta_\text{vir}(z)}}{z+1}dz,
\end{equation}
where $\Delta_{\rm vir}(z)$ is the virial overdensity, which we estimate using the spherical collapse model approximated formula $\Delta_{\rm vir}(z)=18\pi^2+82[\Omega_m(z)-1]-39[\Omega_m(z)-1]^2$ \citep{Bryan1998}. Hence, one has $T=0$ for haloes which undergo a major merger at the time they are investigated (i.e. $z_{\rm LMM}=z$), and $T<0$ for haloes that had their last major merger at earlier times (i.e. $z_{\rm LMM}>z$). Notice that the definition used here differs by a minus sign from that of \citet{Wang2020}, where the authors have found that merging haloes recover a quiescent state within $|T| \sim 2$ dynamical times.
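As a concrete illustration, the following sketch evaluates Eq.~(\ref{backwardtime}) by direct numerical integration; it assumes a flat $\Lambda$CDM background and the value of $\Omega_m$ is only indicative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

OMEGA_M = 0.307   # assumed flat LCDM matter density (illustrative value)

def omega_m_of_z(z):
    """Matter density parameter at redshift z for flat LCDM."""
    e2 = OMEGA_M * (1.0 + z)**3 + (1.0 - OMEGA_M)
    return OMEGA_M * (1.0 + z)**3 / e2

def delta_vir(z):
    """Bryan & Norman (1998) virial overdensity approximation."""
    x = omega_m_of_z(z) - 1.0
    return 18.0 * np.pi**2 + 82.0 * x - 39.0 * x**2

def backward_time(z_obs, z_lmm):
    """T(z|z_LMM) in units of the dynamical time (negative if z_lmm > z_obs)."""
    integrand = lambda zp: np.sqrt(delta_vir(zp)) / (zp + 1.0)
    val, _ = quad(integrand, z_lmm, z_obs)
    return np.sqrt(2.0) / np.pi * val

# a halo observed at z = 0 whose last major merger occurred at z = 0.115
print(backward_time(0.0, 0.115))   # ~ -0.5, i.e. half a dynamical time ago
\end{verbatim}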
In Section~\ref{sparsprof}, we investigate the differences in halo mass profile between merging haloes and quiescent ones. To maximise these differences, we select halo samples as follows:
\begin{itemize}
\item {\it Merging haloes}: a sample of haloes that are within half a dynamical time of their last major merger ($T> -1/2$), and therefore still in the process of rearranging their mass distribution;
\item {\it Quiescent haloes}: a sample of haloes for which their last major merger occurred far in the past ($T\le -4)$, thus they had sufficient time to rearrange their mass distribution to an equilibrium state;
\end{itemize}
In the case of the $z=0$ catalogue, the sample of merging haloes with $T>-1/2$ consists of all haloes whose last major merger, as tagged by the \textsc{rockstar} algorithm, occurred at $a_{\rm LMM}>0.897$ ($z_{\rm LMM}<0.115$), while the sample of quiescent haloes with $T\le -4$ in the same catalogue is characterised by a last major merger at $a_{\rm LMM}<0.464$ ($z_{\rm LMM}>1.155$). In order to study the redshift dependence, we perform a similar selection for the catalogues at $z=0.2,0.4$ and $0.6$ respectively. In Table~\ref{tab:samples} we quote the characteristics of the different samples selected in the various catalogues.
\begin{figure*}
\centering
\includegraphics[width=.8\linewidth]{figures/concentration_sparsities_lines.pdf}
\caption{Distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value for $\delta_{200,500}=1-s^{\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\delta_{200,2500}=1-s^{\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines) in the case of the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel), respectively.}
\label{fig:relative_spars_conc}
\end{figure*}
\section{Halo Sparsity \& Major Mergers}\label{sparsmah}
\subsection{Halo Sparsity Profile}\label{sparsprof}
Here, we seek to investigate the mass profile of haloes undergoing a major merger as traced by halo sparsity, and to evaluate to what extent the NFW profile can account for the estimated sparsities at different overdensities. To this purpose, we compute for each halo in the selected samples the halo sparsities $s_{200,500}$ and $s_{200,2500}$ from their SOD estimated masses, as well as the values obtained assuming the NFW profile with the best-fit concentration parameter $c_{200\text{c}}$, which we denote as $s^{\rm NFW}_{200,500}$ and $s^{\rm NFW}_{200,2500}$ respectively. These can be inferred from the sparsity-concentration relation \citep{Balmes2014}:
\begin{equation}
x^3_{\Delta}\frac{\Delta}{200}=\frac{\ln{(1+c_{200\text{c}}x_{\Delta})}-\frac{c_{200\text{c}}x_{\Delta}}{1+c_{200\text{c}}x_{\Delta}}}{\ln{(1+c_{200\text{c}})}-\frac{c_{200\text{c}}}{1+c_{200\text{c}}}},\label{sparconc}
\end{equation}
where $x_{\Delta}=r_{\Delta}/r_{200\text{c}}$, with $r_{\Delta}$ being the radius enclosing $\Delta$ times the critical density. Hence, for any value of $\Delta$ and given the concentration $c_{200\text{c}}$ of the NFW profile that best fits the halo of interest, we can solve Eq.~(\ref{sparconc}) numerically to obtain $x_{\Delta}$ and then derive the value of the NFW halo sparsity given by:
\begin{equation}
s^{\rm NFW}_{200,\Delta}=\frac{200}{\Delta}x_{\Delta}^{-3}.
\end{equation}
It is worth emphasising that such a relation holds true only for haloes whose density profile is well described by the NFW formula. In such a case, the higher the concentration the smaller the value of the sparsity, and conversely the lower the concentration the higher the sparsity. Because of this, the mass ratio defined by Eq.~(\ref{sparsdef}) provides information on the level of sparseness of the mass distribution within haloes, which justifies its being dubbed halo sparsity. Notice that from Eq.~(\ref{sparconc}) we can compute $s_{200,\Delta}$ for any $\Delta>200$, and this is sufficient to estimate the sparsity at any other pair of overdensities $\Delta_2>\Delta_1>200$ as given by $s_{\Delta_1,\Delta_2}=s_{200,\Delta_2}/s_{200,\Delta_1}$. Haloes whose mass profile deviates from the NFW prediction will have sparsity values that differ from those given by Eq.~(\ref{sparconc}).
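For illustration, the following sketch solves Eq.~(\ref{sparconc}) numerically for $x_\Delta$ and returns the corresponding NFW sparsity; it is a minimal implementation of the relation above, not the code used in the analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu(y):
    """NFW mass profile shape: mu(y) = ln(1+y) - y/(1+y)."""
    return np.log1p(y) - y / (1.0 + y)

def x_delta(c200c, delta):
    """Solve Eq. (sparconc) for x_Delta = r_Delta/r_200c, with Delta > 200."""
    f = lambda x: x**3 * delta / 200.0 - mu(c200c * x) / mu(c200c)
    return brentq(f, 1e-4, 1.0)   # r_Delta < r_200c, so the root lies in (0,1)

def sparsity_nfw(c200c, delta):
    """NFW-predicted sparsity s_200,Delta for a given concentration."""
    return (200.0 / delta) * x_delta(c200c, delta)**-3

s_200_500  = sparsity_nfw(4.0, 500.0)    # example: c_200c = 4
s_200_2500 = sparsity_nfw(4.0, 2500.0)
s_500_2500 = s_200_2500 / s_200_500      # sparsity at any other pair
print(s_200_500, s_200_2500, s_500_2500)
\end{verbatim}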
This is emphasised in Fig.~\ref{fig:relative_spars_conc}, where we plot the distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value, $\delta_{200,500}=1-s^{\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\delta_{200,2500}=1-s^{\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines), in the case of the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0,0.2,0.4$ and $0.6$ respectively. We can see that for quiescent haloes the distributions are nearly Gaussian. More specifically, in the case of $\delta_{200,500}$ the distribution has a narrow scatter with a peak that is centred at the origin at $z=0.6$ and slightly shifts toward positive values at smaller redshifts, with a maximal displacement at $z=0$. This corresponds to an average bias of the NFW-estimated sparsity $s^{\rm NFW}_{200,500}$ of order $\sim 4\%$ at $z=0$. A similar trend occurs for the distribution of $\delta_{200,2500}$, though with a larger scatter and a larger shift of the peak of the distribution at $z=0$, which corresponds to an average bias of $s^{\rm NFW}_{200,2500}$ of order $\sim 14\%$ at $z=0$. Such systematic differences are indicative of the limits of the NFW profile in reproducing the halo mass distribution both in the outskirts and in the inner regions. Moreover, the redshift trend is consistent with the results of the analysis of the mass profile of stacked haloes presented in \citet{2018ApJ...859...55C}, which show that the NFW profile better reproduces the halo mass distribution at $z=3$ than at $z=0$ (see top panels of their Fig.~8). The case of the merging halo sample is very different: we find the distributions of $\delta_{200,500}$ and $\delta_{200,2500}$ to be highly non-Gaussian and irregular. In particular, the distribution of $\delta_{200,500}$ is characterised by a main peak located near the origin with a very heavy tail extending up to relative differences of order $20\%$. The effect is even more dramatic for $\delta_{200,2500}$, in which case the distribution loses the main peak and becomes nearly bimodal, while being shifted over a positive range of values that extends up to relative variations of $\sim 40\%$. Overall, this suggests that the sparsity provides a more reliable proxy of the halo mass profile than the NFW concentration.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/sparsity_concentration_histories.pdf}
\caption{Evolution with scale factor $a$ (redshift $z$) of the median sparsity $s_{200,500}$ (top panels), $s_{500,2500}$ (middle panels) and $s_{200,2500}$ (bottom panels) for a sample of $10^4$ randomly selected haloes from the MDPL2 halo catalogue at $z=0$ (left panels) and for the sample of all haloes with a last major merger event at $a_{\rm LMM} = 0.67$ (right panels). The solid lines correspond to the median sparsity computed from the mass accretion histories of the individual haloes, while the shaded area corresponds to the $68\%$ region around the median.}
\label{fig:sparsity_histories_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.8\linewidth]{figures/sparsity_vs_T.pdf}
\caption{Median sparsity histories as a function of the backward time interval since the major merger event $T$ (in units of the dynamical time) for halo samples from the MDPL2 catalogue at $z=0$ with different last major merger redshifts $z_{\rm LMM}=0.2,0.4,0.6,0.8$ and $1$ (curves from bottom to top). Notice that the backward time interval used here differs by a minus sign from that given by Eq.~(\ref{backwardtime}), to be consistent with the definition by \citet{Wang2020}.}
\label{fig:sparsity_histories_2}
\end{figure}
\subsection{Halo Sparsity Evolution}
Differently from the previous analysis, we now investigate the evolution of the halo mass profile as traced by halo sparsity, which we reconstruct from the mass accretion histories of the haloes in the MDPL2 catalogue at $z=0$. In Fig.~\ref{fig:sparsity_histories_1}, we plot the median sparsity evolution of $s_{200,500}$ (top panels), $s_{500,2500}$ (middle panels) and $s_{200,2500}$ (bottom panels) as a function of the scale factor. In the left panels we show the case of a sample of $10^4$ randomly selected haloes, which behave as quiescent haloes over the redshift range considered, while in the right panels we plot the evolution of the sparsity of all haloes in the $z=0$ catalogue undergoing a major merger at $a_{\rm LMM}=0.67$. The shaded area corresponds to the $68\%$ sparsity excursion around the median, while the vertical dashed line marks the value of the scale factor of the last major merger.
It is worth remarking that the sparsity provides us with an estimate of the fraction of mass in the shell between the radii $R_{\Delta_2}$ and $R_{\Delta_1}$ relative to the mass enclosed within the inner radius $R_{\Delta_2}$, i.e. Eq.~(\ref{sparsdef}) can be rewritten as $s_{\Delta_1,\Delta_2}=\Delta{M}/M_{\Delta_2}+1$. As such, $s_{200,500}$ is a more sensitive probe of the mass distribution in the external region of the halo, while $s_{500,2500}$ and $s_{200,2500}$ are more sensitive to the inner part of the halo.
As we can see from Fig.~\ref{fig:sparsity_histories_1}, the evolution of the sparsity of merging haloes matches that of the quiescent sample before the major merger event. In particular, during the quiescent phase of evolution, we notice that $s_{200,500}$ remains nearly constant, while $s_{500,2500}$ and $s_{200,2500}$ are decreasing functions of the scale factor. This is consistent with the picture that haloes grow from the inside out, with the mass in the inner region (in our case $M_{2500\text{c}}$) increasing relative to that in the external shell ($\Delta{M}=M_{\Delta_1}-M_{2500\text{c}}$, with $\Delta_1=200$ and $500$ in units of the critical density), thus effectively reducing the value of the sparsity. This effect is compensated in $s_{200,500}$, resulting in a nearly constant evolution.
We can see that the onset of the major merger event induces a pulse-like response in the evolution of the halo sparsities at the different overdensities with respect to the quiescent evolution. These trends are consistent with the evolution of the median concentration during major mergers found by \citet{Wang2020}, in which the concentration rapidly drops to a minimum before bouncing back. Here, the evolution of the sparsity allows us to follow how the merger alters the mass profile of the halo throughout the merging process. In fact, we may notice that the sparsities rapidly increase to a maximum, suggesting the arrival of the merger in the external region of the parent halo, which increases the mass $\Delta{M}$ in the outer shell relative to the inner mass. Then, the sparsities decrease to a minimum, indicating that the merged mass has reached the inner region, after which the sparsities increase to a second maximum that indicates that the merged mass has been redistributed outside the $R_{2500\text{c}}$ radius. However, notice that in the case of $s_{200,2500}$ and $s_{500,2500}$ the second peak is more pronounced than the first one, while the opposite occurs for $s_{200,500}$, which suggests that the accreted mass remains confined within $R_{500\text{c}}$. Afterwards, a quiescent state of evolution is recovered.
In Fig.~\ref{fig:sparsity_histories_2} we plot the median sparsities of haloes in the MDPL2 catalogue at $z=0$ that are characterised by different major merger redshifts $z_{\rm LMM}$ as a function of the backward interval of time $T$ (in units of the dynamical time) since the last major merger. Notice that the $T$ used in this plot differs by a minus sign from that given by Eq.~(\ref{backwardtime}), to conform to the definition by \citet{Wang2020}. We can see that after the onset of the major merger (at $T\ge 0$) the different curves superimpose on one another, indicating that the imprint of the major merger on the profile of haloes is universal, producing the same pulse-like feature in the evolution of the halo sparsity. Furthermore, all haloes recover a quiescent evolution within two dynamical times, i.e. for $T\ge 2$. Conversely, on smaller time scales ($T<2$), haloes are still perturbed by the major merger event. These results are consistent with the findings of \citet{Wang2020}, who have shown that the impact of mergers on the median concentration of haloes leads to a time pattern that is universal and also dissipates within two dynamical times. Notice that this distinct pattern due to the major merger is the result of gravitational interactions only. Hence, it is possible that such a feature may be sensitive to the underlying theory of gravity or the physics of dark matter particles.
As we will see next, the universality of the pulse-like imprint of the merger event on the evolution of the halo sparsity, as well as its limited duration in time, have quite important consequences, since these leave a distinct feature on the statistical distribution of sparsity values, which can be exploited to use sparsity measurements as a time proxy of major mergers in clusters.
\begin{figure*}
\centering
\includegraphics[width = 0.8\linewidth]{figures/T_aLMM_vs_s200500.pdf}
\caption{\label{fig:s_almm} Iso-probability contours of the joint probability distribution in the $s_{200,500}-T$ plane for the haloes from the MDPL2 catalogues at $z=0.0,0.2,0.4$ and $0.6$ respectively. The solid horizontal line marks the value $T=-2$. The inset plots show the marginal probability distributions for haloes with $T>-2$ (blue histograms) and $T<-2$ (beige histograms) respectively.}
\label{fig:sva}
\end{figure*}
\subsection{Halo Sparsity Distribution}
We have seen that the sparsity of different haloes evolves following the same pattern after the onset of the major merger, such that the universal imprint of the merger event is best highlighted in terms of the backward time interval $T$. Hence, we aim to investigate the joint statistical distribution of halo sparsity values for haloes characterised by different times $T$ since their last major merger in the MDPL2 catalogues at different redshifts. Here, we revert to the definition of $T$ given by Eq.~(\ref{backwardtime}), where the time interval is measured relative to the time the haloes are investigated, that is the redshift $z$ of the halo catalogue. Hence, $T=0$ for haloes undergoing a major merger at $z_{\rm LMM}=z$ and $T<0$ for those with $z_{\rm LMM}>z$.
For conciseness, here we only describe the features of the joint distribution $p(s_{200,500},T)$ shown in Fig.~\ref{fig:s_almm} in the form of iso-probability contours in the $s_{200,500}-T$ plane at $z=0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel). We find a similar structure of the distributions at other redshift snapshots and for the halo sparsities $s_{200,2500}$ and $s_{500,2500}$. In each panel the horizontal solid line marks the characteristic time interval $|T|=2$. As shown by the analysis of the evolution of the halo sparsity, haloes with $|T|>2$ have recovered a quiescent state, while those with $|T|<2$ are still undergoing the merging process. The marginal conditional probability distributions $p(s_{200,500}|T<-2)$ and $p(s_{200,500}|T>-2)$ are shown in the inset plots.
Firstly, we may notice that the joint probability distribution has a universal structure that is the same at different redshift snapshots. Moreover, it is characterised by two distinct regions: the region with $T\le -2$, which corresponds to haloes that are several dynamical times past their last major merger event ($|T|\ge 2$) and are thus in a quiescent state of evolution of the sparsity; and the region with $-2<T<0$, corresponding to haloes that are still in the merging process ($|T|<2$). In the former case, the pdf has a rather regular structure that is independent of $T$, while in the latter case the pdf has an altered structure with a pulse-like feature shifted toward higher sparsity values. The presence of such a feature is consistent with the evolution of the median sparsity inferred from the halo mass accretion histories previously discussed. This is because, among the haloes observed at a given redshift snapshot, those which are within two dynamical times of the major merger event are perturbed, thus exhibiting sparsity values that are distributed around the median shown in Fig.~\ref{fig:sparsity_histories_2}. In contrast, those which are more than two dynamical times past their last major merger had time to redistribute the accreted mass and are in a quiescent state, resulting in a regular structure of the pdf. From the inset plots, we can see that these two regions identify two distinct populations of haloes: quiescent haloes with $T\le -2$ and merging (or perturbed) ones with $-2<T<0$, the latter characterised by a heavy tail toward large sparsity values that largely contributes to the overall scatter of the halo sparsity of the entire halo ensemble. It is worth stressing that the choice of $|T|=2$ as the threshold differentiating between quiescent haloes and perturbed ones at a given redshift snapshot is not arbitrary, since it is the most conservative value of the dynamical time above which haloes that have undergone a major merger recover a quiescent evolution of their halo mass profile, as shown in Fig.~\ref{fig:sparsity_histories_2}.
Now, the fact that two populations of haloes have different probability distribution functions suggests that measurements of cluster sparsity can be used to identify perturbed systems that have undergone a major merger.
\section{Identifying Galaxy Cluster Major Mergers}\label{calistat}
Given the universal structure of the probability distributions characterising merging haloes and quiescent ones, we can use the numerical halo catalogues to calibrate their statistics at different redshifts and test whether a cluster with a single or multiple sparsity measurements has had a major merger in its recent mass assembly history.
In the light of these observations, we first design a method to assess whether or not a cluster has been recently perturbed by a major merger. To do this we construct a binary test, as defined in detection theory \citep[see e.g.][]{kay1998fundamentals}, to differentiate between the two cases. Formally, this translates into defining two hypotheses, denoted as $\mathcal{H}_0$, the null hypothesis, and $\mathcal{H}_1$, the alternate hypothesis. In our case these are, $\mathcal{H}_0$: \textit{The halo has not been recently perturbed} and $\mathcal{H}_1$: \textit{The halo has undergone a recent major merger}. Formally the distinction between the two is given in terms of the backward time interval $T$,
\begin{equation}
\begin{cases}
\mathcal{H}_0:\; T(a_\text{LMM}|a(z)) < -2\\
\mathcal{H}_1:\; T(a_\text{LMM}|a(z)) \geq -2
\end{cases}
\label{eq:hypothesis}
\end{equation}
if we consider the halo to no longer be perturbed after $2\tau_\text{dyn}$. In Fig.~\ref{fig:s_almm} we have delimited these two regions using black horizontal lines.
In the context of detection theory \citep[see e.g.][]{kay1998fundamentals}, one defines some test statistic,
\begin{equation}
\Gamma \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \Gamma_\text{th},
\end{equation}
computed from the observed data such that, when compared to a threshold $\Gamma_\text{th}$, it allows us to distinguish between the two hypotheses.
In the following we will explore multiple ways of defining the test statistic and the associated thresholds. This may appear cumbersome; however, it is necessary to unambiguously define thresholds according to probabilistic criteria rather than arbitrary ones, while the variety of approaches we adopt allows us to check their robustness.
\subsection{Frequentist Approach}
\label{sec:frequentist}
We start with the simplest possible choice, that is, using $s_{200,500}$ as our test statistic. Separating our data set into the realisations of the two hypotheses, we estimate their respective likelihood functions, which we model using a generalised $\beta'$ probability density function (pdf),
\begin{equation}
\rho(x,\alpha,\beta,p,q) = \frac{p\left(\frac{x}{q}\right)^{\alpha p - 1}\left(1+\left(\frac{x}{q}\right)^p\right)^{-\alpha-\beta}}{q\,B(\alpha,\beta)},
\label{eq:gen_beta_prime}
\end{equation}
where $B(\alpha,\beta)$ is the Beta function and $x = s_{200,500} - 1$. From our two samples we then fit this model using a standard least squares method to obtain the set of best-fitting parameters under both hypotheses; these are reported in Tab.~\ref{tab:fit_params}, while the corresponding fits for the halo catalogues at $z=0$ are shown in Fig.~\ref{fig:pdf_fit}. In both cases we additionally report the 95 percent confidence intervals estimated using 1000 bootstrap iterations.
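The sketch below gives a minimal implementation of Eq.~(\ref{eq:gen_beta_prime}) and of the least-squares fitting step; the stand-in data are generated from the model itself at the $z=0$ $\mathcal{H}_0$ best-fit values of Tab.~\ref{tab:fit_params}, purely to make the example runnable.
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn
from scipy.optimize import curve_fit

def gen_beta_prime_pdf(x, alpha, beta, p, q):
    """Generalised beta-prime pdf of Eq. (gen_beta_prime), x = s_200,500 - 1."""
    u = (x / q)**p
    return p * (x / q)**(alpha * p - 1.0) * (1.0 + u)**(-alpha - beta) \
        / (q * beta_fn(alpha, beta))

# stand-in "measured" density: the model at the z=0 H0 best-fit parameters
# of Tab. (fit_params), with a small amount of noise added
x = np.linspace(0.05, 1.5, 200)
rng = np.random.default_rng(0)
y = gen_beta_prime_pdf(x, 1.4, 0.61, 7.7, 0.304)
y *= 1.0 + 0.02 * rng.standard_normal(x.size)

popt, pcov = curve_fit(gen_beta_prime_pdf, x, y,
                       p0=[1.0, 0.5, 5.0, 0.3], maxfev=10000)
print(popt)   # recovered (alpha, beta, p, q)
\end{verbatim}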
\begin{table}
\centering
\caption{Best fitting parameters for the distribution of sparsities at $z=0$ under both hypotheses. Here, we quote each parameter with its 95 percent confidence interval estimated over 1000 bootstrap iterations.}
\begin{tabular}{r|cc}
\hline Parameter & $\mathcal{H}_0$& $\mathcal{H}_1$\\
\hline $\alpha$ & $1.4^{+0.1}_{-0.1}$ & $1.5^{+0.2}_{-0.2}$ \\
$\beta$ & $0.61^{+0.03}_{-0.03}$ & $0.71^{+0.10}_{-0.08}$ \\
$p$ & $7.7^{+0.3}_{-0.3}$ & $4.1^{+0.4}_{-0.3}$ \\
$q$ & $0.304^{+0.002}_{-0.003}$ & $0.370^{+0.008}_{-0.008}$ \\
\hline
\end{tabular}
\label{tab:fit_params}
\end{table}
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/binary_fit.pdf}
\caption{Estimated probability distribution functions for $\mathcal{H}_0$ (purple solid line) and $\mathcal{H}_1$ (orange solid line) hypotheses at $z=0$ along with best fitting generalised beta prime distribution functions (dotted black lines). The shaded area corresponds to the 95 percent confidence interval estimated over 1000 bootstrap iterations.}
\label{fig:pdf_fit}
\end{figure}
The quality of the fits degrades towards the tails of the distributions, most notably under $\mathcal{H}_1$, due to the fact that we do not account for the pulse feature. Nonetheless, they still allow us to obtain an estimate, $\tilde\Sigma(x)$, of the corresponding likelihood ratio (LR) test statistic $\Sigma(x) = {\rho(x|\mathcal{H}_1)}/{\rho(x|\mathcal{H}_0)}$. Under the Neyman-Pearson lemma \citep[see e.g.][]{kay1998fundamentals}, the true LR test statistic constitutes the most powerful estimator for a given binary test. We can express this statistic in terms of the fitted distributions; for $z=0$ this reads as:
\begin{align}
\tilde\Sigma(x) &\propto x^{\alpha_1 p_1 - \alpha_0 p_0}\frac{(1 + (x/q_1)^{p_1})^{-\alpha_1 - \beta_1}}{(1 + (x/q_0)^{p_0})^{-\alpha_0 - \beta_0}}\\
&= x^{-4.6}\frac{(1 + (x/0.370)^{4.1})^{-2.2}}{(1 + (x/0.304)^{7.7})^{-2.0}}
\end{align}
from which we can obtain an approximate expression, $\tilde\Sigma(x) \propto x^{1.8}$, valid for large values, $x \gg 0.3$. One can observe that for large values of the sparsity the LR test statistic is a monotonically increasing function of $x = s_{200,500} - 1$, indicating that in this regime the sparsity itself has a differentiating power comparable to that of the LR test. A similar dependence holds at $z>0$. This indicates that we can use $\Gamma = s_{200,500}$ to efficiently differentiate haloes that have undergone a recent major merger from a population at rest. In addition to this result, one can estimate a simple ${\rm p}$-value,
\begin{equation}
{\rm p} = \text{P}_\text{r}(\Gamma > s_{200,500}|\mathcal{H}_0) = 1 - \int_0^{s_{200,500}-1}\rho(x|\mathcal{H}_0)dx
\end{equation}
i.e. the probability of finding a higher value of $s_{200,500}$ in a halo at equilibrium. Conversely, one can also estimate the value of the threshold corresponding to a given ${\rm p}$-value by inverting this relation.
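A minimal sketch of this p-value computation and of its inversion is given below; the $\mathcal{H}_0$ parameters are the $z=0$ best-fit values of Tab.~\ref{tab:fit_params}, and the integration and root-bracketing choices are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn
from scipy.integrate import quad
from scipy.optimize import brentq

H0_PARAMS = (1.4, 0.61, 7.7, 0.304)   # z=0 best fit under H0 (Tab. fit_params)

def gen_beta_prime_pdf(x, alpha, beta, p, q):
    u = (x / q)**p
    return p * (x / q)**(alpha * p - 1.0) * (1.0 + u)**(-alpha - beta) \
        / (q * beta_fn(alpha, beta))

def p_value(s200500):
    """P(Gamma > s_200,500 | H0) for a single sparsity measurement."""
    cdf, _ = quad(lambda x: gen_beta_prime_pdf(x, *H0_PARAMS),
                  0.0, s200500 - 1.0)
    return 1.0 - cdf

def sparsity_threshold(p):
    """Sparsity above which the p-value drops below p (inverted relation)."""
    return brentq(lambda s: p_value(s) - p, 1.01, 5.0)

print(p_value(1.6))               # how unusual s_200,500 = 1.6 is under H0
print(sparsity_threshold(0.01))   # threshold corresponding to p = 0.01
\end{verbatim}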
In Fig.~\ref{fig:xi_of_z} we show the thresholds corresponding to three reference ${\rm p}$-values as a function of redshift.
Here, each point is estimated using the sparsity distributions from the numerical halo catalogues. This figure allows one to quickly estimate the values of sparsity above which a halo at some redshift $z$ should be considered as recently perturbed.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/xi_200_500_z.pdf}
\caption{Sparsity thresholds $s^{\rm th}_{200,500}$ as function of redshift for ${\rm p}-$values$=0.05$ (purple solid line), $0.01$ (orange solid line) and $0.005$ (green solid line) computed using the Frequentist Likelihood-Ratio approach.}
\label{fig:xi_of_z}
\end{figure}
It is worth noticing that these thresholds are derived from sparsity estimates based on N-body halo masses. In contrast, sparsities of observed galaxy clusters are obtained from mass measurements that may be affected by systematic uncertainties, which may differ depending on the type of observations. The impact of mass biases is reduced in the mass ratio, but it could still be present. As an example, using results from hydro/N-body simulations for an extreme model of AGN feedback, \citet{Corasaniti2018} have shown that baryonic processes can on average bias the estimates of the sparsity $s_{200,500}$ by up to $\lesssim 4\%$ and $s_{200,2500}$ by up to $\lesssim 15\%$ at the low mass end. This being said, as long as the mass estimator is unbiased we expect our analysis to hold, albeit with a modification of the fitting parameters. In Section~\ref{testcase} we present a preliminary analysis of the impact of mass biases on our approach; however, we leave more in-depth investigations of this topic, as well as of modifications that could arise from non-gravitational physics, to upcoming work.
\subsection{Bayesian approach}
\label{sec:Bayesisan}
An alternate way of tackling this problem is through the Bayesian flavour of detection theory. In this case, instead of looking directly at how well the data $\boldsymbol{x}$ are described by a model characterised by the parameters $\boldsymbol{\theta}$, in terms of the likelihood function $p(\boldsymbol{x}|\boldsymbol{\theta})$, one is interested in how likely the model is given the observed data, that is, the posterior function $p(\boldsymbol{\theta}|\boldsymbol{x})$.
Bayes' theorem allows us to relate these two quantities:
\begin{equation}
p(\bmath{\theta}|x) = \frac{p(x|\bmath{\theta})\pi(\bmath{\theta})}{\pi(x)},
\label{eq:posterior}
\end{equation}
where $\pi(\bmath{\theta})$ is the prior distribution for the parameter vector $\bmath{\theta}$ and
\begin{equation}
\pi(x) = \int p(x|\bmath{\theta})\pi(\bmath{\theta}) d\bmath{\theta},
\end{equation}
is a normalisation factor, known as evidence.
While this opens up the possibility of estimating the parameter vector, which we will discuss in sub-section~\ref{statmergerepoch}, this approach also allows one to systematically define a test statistic known as the Bayes Factor,
\begin{equation}
B_\text{f} = \frac{\int_{V_1} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}}{\int_{V_0} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}},
\end{equation}
associated to the binary test. Here, we have denoted $V_1$ and $V_0$ the volumes of the parameter space respectively attributed to hypothesis $\mathcal{H}_1$ and $\mathcal{H}_0$.
In practice, to evaluate this statistic we first need to model the likelihood. Again we use the numerical halo catalogues as calibrators. We find that the distribution of $s_{200,500}$ for a given value of the scale factor at the epoch of the last major merger, $a_\text{LMM}$, is well described by a generalised $\beta'$ pdf. In particular, we fit the set of parameters $\bmath{\theta} = [\alpha, \beta, p, q]^\top$ that depend solely on $a_\text{LMM}$ by sampling the posterior distribution using Markov Chain Monte Carlo (MCMC) with a uniform prior $a_\text{LMM}\sim \mathcal{U}(0; a(z))$\footnote{The upper bound is the scale factor at the epoch at which the halo is observed.}. This is done using the \textsc{emcee}\footnote{\href{https://emcee.readthedocs.io/en/stable/}{https://emcee.readthedocs.io/en/stable/}} library \citep{Emcee2013}. The resulting values of $B_\text{f}$ can then be treated in exactly the same fashion as the Frequentist statistic. It is however important to note that the Bayes factor is often associated with a standard ``rule of thumb'' interpretation \citep[see e.g.][]{Trotta2007}, making this statistic particularly convenient to interpret.
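To fix ideas, the sketch below samples the posterior of $a_\text{LMM}$ for a single measured sparsity with \textsc{emcee}; the calibration function \texttt{params\_of\_almm} is a crude toy interpolation between the two $z=0$ best-fit solutions of Tab.~\ref{tab:fit_params}, introduced only to make the example self-contained, whereas in the actual pipeline these parameters are measured from the simulated halo catalogues.
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn
import emcee

def gen_beta_prime_pdf(x, alpha, beta, p, q):
    u = (x / q)**p
    return p * (x / q)**(alpha * p - 1.0) * (1.0 + u)**(-alpha - beta) \
        / (q * beta_fn(alpha, beta))

def params_of_almm(a_lmm):
    """Toy stand-in for the catalogue calibration of theta(a_LMM)."""
    w = np.clip(a_lmm, 0.0, 1.0)
    quiescent = np.array([1.4, 0.61, 7.7, 0.304])
    merging   = np.array([1.5, 0.71, 4.1, 0.370])
    return (1.0 - w) * quiescent + w * merging

def log_prob(theta, s_obs, a_max):
    a_lmm = theta[0]
    if not 0.0 < a_lmm < a_max:          # flat prior a_LMM ~ U(0, a(z))
        return -np.inf
    pdf = gen_beta_prime_pdf(s_obs - 1.0, *params_of_almm(a_lmm))
    return np.log(pdf) if pdf > 0 else -np.inf

nwalkers, ndim = 16, 1
start = np.random.uniform(0.05, 0.95, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(1.6, 1.0))
sampler.run_mcmc(start, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(np.median(samples))     # median-likelihood estimate of a_LMM
\end{verbatim}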
One way of comparing the efficiency of different tests is to draw their respective Receiver Operating Characteristic (ROC) curves \citep{Fawcett2006}, which show the probability of a true detection, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_1)$, plotted against the probability of a false one, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_0)$, for the same threshold. In other words, we are simply plotting the probability of finding a value of $\Gamma$ larger than the threshold under the alternate hypothesis against that of finding a value of $\Gamma$ larger than the same threshold under the null hypothesis. The simplest graphical interpretation of this type of figure is that the closer a curve gets to the top-left corner, the more powerful the test is at differentiating between the two cases.
In Fig.~\ref{fig:roc_curves} we plot the ROC curves corresponding to all the tests we have studied in the context of this work. These curves have been evaluated using a sub-sample of $10^4$ randomly selected haloes from the MDPL2 catalogues at $z=0$ with masses $M_{200\text{c}} > 10^{13}\,h^{-1}\text{M}_{\odot}$. Let us focus on the comparison between the Frequentist direct sparsity approach (S 1D) and the Bayes Factor obtained using a single sparsity measurement (BF 1D). We can see that both tests have very similar ROC curves at low false alarm rates. This indicates that we do not gain any substantial power from the additional computational work done to estimate the Bayes factor using a single value of the sparsity.
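As an aside, a minimal sketch of how such ROC curves can be traced from the test statistic values is given below; the arrays \texttt{gamma\_h0} and \texttt{gamma\_h1} are random stand-ins for the statistic evaluated on the haloes realising the two hypotheses.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
gamma_h0 = rng.lognormal(mean=0.0, sigma=0.3, size=5000)   # stand-in, H0
gamma_h1 = rng.lognormal(mean=0.4, sigma=0.4, size=5000)   # stand-in, H1

labels = np.concatenate([np.zeros(gamma_h0.size), np.ones(gamma_h1.size)])
scores = np.concatenate([gamma_h0, gamma_h1])

# tpr vs fpr traces P(Gamma > th | H1) against P(Gamma > th | H0)
fpr, tpr, thresholds = roc_curve(labels, scores)
\end{verbatim}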
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/roc_curves.pdf}
\caption{ROC curves associated with the binary tests studied in this work: the Frequentist sparsity test (S 1D, solid orange line), the Bayes Factor based on a single sparsity value (BF 1D, dashed green line) and using three values (BF 3D, dash-dotted magenta line), the Support Vector Machines with one sparsity value (SVM 1D, dotted purple line) and three sparsities (SVM 3D, dotted yellow line). What can be observed is that all 1D tests are equivalent at small false alarm rates and the only way to significantly increase the power of the test is to increase the amount of input data, i.e. adding a third mass measurement as in the BF 3D and SVM 3D cases.}
\label{fig:roc_curves}
\end{figure}
While this may seem to be the end of the line for the method based on the Bayes factor, the latter does present the significant advantage of being easily expanded to include additional data. In our case this comes in the form of additional sparsity measurements at different overdensities. Simply including a third mass measurement, here $M_{2500\text{c}}$, gives us access to two additional sparsities from the three possible pairs, $s_{200,500},\,s_{200,2500}$ and $s_{500,2500}$. This leads us to define each halo as a point in a 3-dimensional space with coordinates
\begin{equation}
\begin{cases}
x = s_{200,500} - 1 \\
y = s_{200,2500} -1 \\
z = s_{500,2500} -1
\end{cases}
\end{equation}
After estimating the likelihood in this coordinate system, one quickly observes that switching to a spherical-like coordinate system, $\mathbfit{r} = [r, \vartheta, \varphi]^\top$, allows for a much simpler description. The resulting likelihood model,
\begin{equation}
L(\mathbfit{r};\bmath{\theta},\bmath{\mu},\mathbfss{C}) = \frac{f(r;\bmath{\theta})}{2\pi\sqrt{|\mathbfss{C}|}}\exp\left[-\frac{1}{2}(\bmath{\alpha} - \bmath{\mu})^\top\mathbfss{C}^{-1}(\bmath{\alpha} - \bmath{\mu})\right],
\label{eq:like3D}
\end{equation}
treats $r$ as independent from the two angular coordinates that are placed within the 2-vector $\bmath{\alpha} = [\vartheta, \varphi]^\top$. Making the radial coordinate independent allows us to constrain $f(r,\bmath{\theta})$ simply from the marginalised distribution. Doing so we found that the latter is best described by a Burr type XII \citep{10.1214/aoms/1177731607} distribution,
\begin{equation}
f(x,c,k,\lambda,\sigma) = \frac{ck}{\sigma}\left(\frac{x-\lambda}{\sigma}\right)^{c-1}\left[1+\left(\frac{x-\lambda}{\sigma}\right)^c\right]^{-k-1},
\end{equation}
with additional displacement, $\lambda$, and scale, $\sigma$, parameters. In total the likelihood function is described by 9 parameters: the 4 radial ones, 3 of which are constrained by fitting the marginalised distribution of the $r$ realisations while assuming $\lambda = 0$, and the 5 angular ones, 2 in $\bmath{\mu}$ and 3 in $\mathbfss{C}$, which are measured through unbiased sample means and covariances.
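A minimal sketch of how such a likelihood can be evaluated is given below, using the \textsc{scipy} implementation of the Burr type XII distribution for the radial part and a bivariate Gaussian for the angular part; all parameter values shown are illustrative placeholders, not the calibrated ones.
\begin{verbatim}
import numpy as np
from scipy.stats import burr12, multivariate_normal

def cartesian_to_spherical(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)
    phi = np.arctan2(y, x)
    return r, theta, phi

def log_like_3d(s200500, s2002500, s5002500, c, k, lam, sig, mu, cov):
    """log of Eq. (like3D): Burr XII radial term plus Gaussian angular term."""
    r, theta, phi = cartesian_to_spherical(s200500 - 1.0,
                                           s2002500 - 1.0,
                                           s5002500 - 1.0)
    log_radial = burr12.logpdf(r, c, k, loc=lam, scale=sig)
    log_angular = multivariate_normal.logpdf([theta, phi], mean=mu, cov=cov)
    return log_radial + log_angular

# example call with placeholder parameter values
print(log_like_3d(1.6, 3.0, 1.9,
                  c=3.0, k=2.0, lam=0.0, sig=1.0,
                  mu=[1.0, 0.4], cov=[[0.02, 0.0], [0.0, 0.02]]))
\end{verbatim}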
In a similar fashion as in the single sparsity case, we evaluate these parameters as functions of $a_\text{LMM}$ and thus recover a posterior likelihood for the epoch of the last major merger using MCMC, again applying a flat prior on $a_\text{LMM}$. This posterior in turn allows us to measure the corresponding Bayes Factor. We calculate these Bayes factors for the same test sample used previously and evaluate the corresponding ROC curve (BF 3D in Fig.~\ref{fig:roc_curves}). As intended, the additional mass measurement has the effect of increasing the detection power of the test, raising the ROC curve with respect to the 1D tests and increasing the true detection rate from 40 to 50 percent at a false positive rate of 10 percent. We have checked that the same trends hold at $z>0$.
\subsection{Support Vector Machines}
An alternative to the Frequentist -- Bayesian duo is to use machine learning techniques designed for classification. Convolutional Neural Networks \citep[see eg.][for a review]{2015Natur.521..436L} are very efficient and have been profusely used to classify large datasets, both in terms of dimensionality and size; recent examples in extra-galactic astronomy include galaxy morphology classification \citep[e.g.][]{Hocking2018,Martin2020,Abul_Hayat2020,Cheng2021,Spindler2021}, detection of strong gravitational lenses \citep[e.g.][]{Jacobs2017,Jacobs2019, Lanusse2018,Canameras2020,Huang2020,Huang2021,He2020,Gentile2021,Stein2021}, galaxy merger detection \citep{Ciprijanovic2021} and galaxy cluster merger time estimation \citep{Koppula2021}. However, they may not be the tool of choice when dealing with datasets of small dimensionality, as is the case at hand. A simpler option for this problem is to use Support Vector Machines (SVM) \citep[see e.g.][]{Cristianini2000} as classifiers for the hypotheses defined in Eq.~(\ref{eq:hypothesis}), using as training data the sparsities measured from the halo catalogues.
An SVM works on the simple principle of finding the boundary that best separates the two hypotheses. In contrast to Random Forests \citep[see e.g.][]{Breiman2001}, which can only define a set of horizontal and vertical boundaries, albeit to arbitrary complexity, the SVM maps the data points to a new Euclidean space and solves for the plane that best separates the two sub-classes. This definition of a new Euclidean space allows for a non-linear boundary between the classes. For large datasets, however, the optimisation of the non-linear transformation can be slow to converge, and thus we restrict ourselves to linear transformations. To do so we make use of the \textsc{scikit-learn}\footnote{\href{https://scikit-learn.org/}{https://scikit-learn.org/stable/}} \citep{scikit-learn} python package. The ``user friendly'' design of this package allows for fast implementations with little required knowledge of python and little input from the user, giving this method an advantage over its Frequentist and Bayesian counterparts.
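A minimal sketch of such a classifier is shown below; the training arrays are random stand-ins for the catalogue sparsities and merger labels, and the hyper-parameters are left at their \textsc{scikit-learn} defaults.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = 1.0 + rng.gamma(2.0, 0.15, size=(5000, 3))   # stand-in sparsities
y_train = (rng.random(5000) < 0.1).astype(int)         # stand-in labels, 1 = H1

clf = SVC(kernel="linear")     # linear boundary, as adopted in the text
clf.fit(X_train, y_train)

# the decision function provides a continuous score usable as the test
# statistic Gamma when building ROC curves
scores = clf.decision_function(X_train[:10])
print(clf.predict(X_train[:5]), scores[:5])
\end{verbatim}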
In order to compare the effectiveness of the SVM tests, with 1 and 3 sparsities, against those previously presented, we again plot the corresponding ROC curves\footnote{Note that the test data used for the ROC curves was excluded from the training set.} in Fig.~\ref{fig:roc_curves}. We can see that the SVM tests reach a differentiating power comparable to both the Bayesian and Frequentist tests for 1 sparsity and are only slightly outperformed by the Bayesian test using 3 sparsities. This shows that a statistical test based on the sparsity can be designed in a simple fashion without significant loss of differentiation power, making sparsity an all the more viable proxy to identify recent major mergers.
\subsection{Estimating cluster major merger epoch}\label{statmergerepoch}
In the previous section we have investigated the possibility of using halo sparsity as a statistic to identify clusters that have had a recent major merger. We will now expand the Bayesian formulation of the binary test to \emph{estimate} when this last major merger took place. This can be achieved by using the posterior distributions which we have previously computed to calculate the Bayes Factor statistics. These distributions allow us to define the most likely epoch for the last major merger as well as the credible interval around this epoch.
\begin{figure}
\centering
\includegraphics[width = 0.95\linewidth]{figures/sparsity1d_posteriors.pdf}
\caption{Posterior distributions for different values of the sparsity $s_{200,500}=1.2$ (dash-dotted green line), $1.7$ (dashed orange line), $2$ (dotted purple line) and $3$ (solid magenta line). We can see that for large sparsity values the distributions are bimodal at recent epochs, while low values produce both a continuous distribution at low scale factor values and a single peak at recent epochs corresponding to a confusion region. This induces a degeneracy that needs to be broken if we are to accurately estimate $a_\text{LMM}$.}
\label{fig:post_1sparsity}
\end{figure}
Beginning with the single sparsity estimate, in Fig.~\ref{fig:post_1sparsity} we plot the resulting posterior distributions $p(a_{\rm LMM}|s_{200,500})$ obtained assuming four different values of $s_{200,500}=1.2,1.7,2$ and $3$ at $z=0$. As we can see, in the case of large sparsity values ($s_{200,500}\ge 1.7$) we find a bimodal posterior distribution, caused by the pulse-like feature in the structure of the joint distribution shown in Fig.~\ref{fig:sva}, which is a consequence of the universal imprint of the major merger on the halo sparsity evolution shown in Fig.~\ref{fig:sparsity_histories_2}. In particular, we notice that the higher the measured sparsity, the lower the likelihood that the last major merger occurred in the distant past. A consequence of this pulse-like feature is that a considerable population of haloes with a recent major merger, characterised by $-1/2 <T(a_\text{LMM}; a(z))<-1/4$, have sparsities in the same range as those in the quiescent regime. This confusion region results in a peak of the posterior distribution for the $s_{200,500}=1.2$ case that is located at the minimum of the bimodal distributions associated with the values $s_{200,500}\ge 1.7$. This suggests the presence of a degeneracy, such that quiescent haloes may be erroneously identified as haloes having undergone a recent merger or, on the contrary, haloes having undergone a recent merger may be misidentified as quiescent haloes.
\begin{figure*}
\centering
\includegraphics[width = 0.95\linewidth]{figures/posteriors_sparsity_1D_3D.pdf}
\caption{Posterior distributions of the last major merger epoch for three selected haloes with different sparsity values from the $z=0$ halo catalogue. The shaded areas corresponds to the 68\% credible interval around the median (coloured vertical line) assuming a single (orange) and three sparsity (purple) measurements. The black vertical dashed lines mark the true fiducial value of $a_\text{LMM}$ for each of the selected haloes.}
\label{fig:post_3sparsity}
\end{figure*}
The presence of this peak in the $p(a_{\rm LMM}|s_{200,500})$ posterior at low sparsity values biases the Bayesian estimation towards more recent major mergers when using a single sparsity measurement. As a result, the previously mentioned Bayes factors, which depend on such a posterior, will also be biased towards recent mergers, resulting in higher measured values. Moreover, this impacts the choice of estimator; indeed, a maximum likelihood estimation would be very sensitive to this peak. Therefore, we prefer to use a median likelihood estimation, which is significantly more robust. The credible interval is then estimated iteratively around the median so as to encompass 68 percent of the total probability. The end result of this procedure is shown in Fig.~\ref{fig:post_3sparsity}, where we plot the inferred posteriors along with the corresponding credible intervals (shaded areas) and median likelihood estimates (vertical lines) obtained assuming one (orange curves) and three (purple curves) sparsity values for three haloes selected from the numerical halo catalogue at $z=0$. The black vertical dashed lines indicate the true $a_{\rm LMM}$ value of the haloes.
We can clearly see that the inclusion of an additional mass measurement (or equivalently of two additional sparsity estimates) allows us to break the $s_{200,500}$ degeneracy between quiescent and merging haloes with low sparsity values. In such a case the true $a_{\rm LMM}$ value is found to be within the $1\sigma$ credible regions. Hence, this also enables us to identify merging haloes that are located in the confusion region.
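For completeness, a minimal, symmetric-in-$a_\text{LMM}$ sketch of the median-likelihood estimate and of a credible interval grown around the median is shown below; it operates on a flat array of posterior samples (e.g. from an MCMC chain), and the toy bimodal sample is used only for illustration.
\begin{verbatim}
import numpy as np

def median_and_credible_interval(samples, level=0.68):
    """Median of the posterior samples and the interval around it that
    encloses `level' of the samples (grown symmetrically around the median)."""
    med = np.median(samples)
    half_width = np.quantile(np.abs(samples - med), level)
    return med, (med - half_width, med + half_width)

# toy bimodal posterior, mimicking the degenerate single-sparsity case
rng = np.random.default_rng(3)
toy = np.concatenate([rng.normal(0.55, 0.03, 7000),
                      rng.normal(0.75, 0.02, 3000)])
print(median_and_credible_interval(toy))
\end{verbatim}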
\section{Cosmological Implications}\label{cosmo_imp}
Before discussing practical applications on the use of large halo sparsities as tracers of major merger events in clusters, it is worth highlighting the impact that such systems can have on average sparsity measurements that are used for cosmological parameter inference.
Halo sparsity depends on the underlying cosmological model \citep{Balmes2014,Ragagnin2021}, and it has been shown \citep[][]{Corasaniti2018,Corasaniti2021} that the determination of the average sparsity of an ensemble of galaxy clusters estimated at different redshifts can provide cosmological constraints complementary to those from standard probes. This is possible thanks to an integral relation between the average halo sparsity at a given redshift and the halo mass function at the overdensities of interest, which allows one to predict the average sparsity for a given cosmological model \citep{Balmes2014}. Hence, the average is computed over the entire ensemble of haloes as accounted for by the mass functions. In principle, this implies that at a given redshift the mean sparsity should be computed over the available cluster sample without regard to the dynamical state of the clusters, since any selection might bias the evaluation of the mean. This can be seen in the left panels of Fig.~\ref{fig:cosmo_mean}, where we plot the average sparsity $\langle s_{200,500}\rangle$ (top panel), $\langle s_{500,2500}\rangle$ (central panel) and $\langle s_{200,2500} \rangle$ (bottom panel) as a function of redshift in the case of haloes which are within two dynamical times of the last major merger (blue curves), of those which are more than two dynamical times past the last major merger (orange curves) and of the full sample (green curves). As we can see, removing the merging haloes induces a $\sim 10\%$ bias on $\langle s_{200,500}\rangle$ at $z=0$, which decreases to $\sim 4\%$ at $z=1$, while in the same redshift range the bias is at the $\sim 20\%$ level for $\langle s_{500,2500}\rangle$ and $\sim 30\%$ for $\langle s_{200,2500}\rangle$.
However, the dynamical time is not observable and, in a realistic situation, one might have to face the reverse problem, which is that of having a number of outliers characterised by large sparsity values in a small cluster sample, potentially biasing the estimation of the mean compared to that of a representative cluster ensemble. Which clusters should be considered as outliers and removed from the cluster sample such that the estimation of the mean sparsity remains representative of the halo ensemble average, say at the sub-percent level? To address this question, we can make use of the sparsity thresholds defined in Section~\ref{sec:frequentist} based on the p-value statistics. As an example, in the right panels of Fig.~\ref{fig:cosmo_mean} we plot the mean sparsities $\langle s_{200,500}\rangle$, $\langle s_{500,2500}\rangle$ and $\langle s_{200,2500}\rangle$ as a function of redshift computed using the full halo sample (blue curves), and for selected halo samples from which we have removed haloes with sparsities above the thresholds, such as those shown in Fig.~\ref{fig:xi_of_z}, associated with p-values of $p\le 0.01$ (green curves) and $p\le 0.005$ (orange curves) respectively. We can see that removing outliers alters the estimated mean sparsity $\langle s_{200,500}\rangle$ at the sub-percent level over the range $0<z<2$, and, in the case of $\langle s_{500,2500}\rangle$ and $\langle s_{200,2500}\rangle$, up to a few percent only in the high-redshift range $1\lesssim z <2$.
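In practice, this outlier clipping amounts to discarding clusters whose measured sparsity exceeds the redshift-dependent threshold before averaging, as in the minimal sketch below (stand-in values only).
\begin{verbatim}
import numpy as np

def mean_sparsity_clipped(s, threshold):
    """Mean sparsity after removing outliers above the threshold
    (e.g. the p <= 0.01 curve of Fig. xi_of_z at the sample redshift)."""
    kept = s[s <= threshold]
    return kept.mean(), kept.size / s.size   # mean and kept fraction

rng = np.random.default_rng(4)
s_sample = 1.0 + rng.gamma(2.0, 0.15, size=500)    # stand-in cluster sample
print(mean_sparsity_clipped(s_sample, threshold=1.8))
\end{verbatim}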
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/mean_spars_z.pdf}
\caption{Redshift evolution of the average halo sparsity $\langle s_{200,500}\rangle$ (top panels), $\langle s_{500,2500}\rangle$ (middle panels) and $\langle s_{200,2500}\rangle$ (bottom panels). In the left panels we show the average sparsity estimated for the full halo samples (green curves), for haloes which are within two dynamical times from the last major merger event (blue curves) and for haloes which are at more than two dynamical times from it (orange curves). In the right panels we show the average sparsity estimate from the full halo samples (blue curves) and for selected samples from which we removed outliers whose sparsity lies above thresholds corresponding to p-values of $p\le 0.01$ (green curves) and $p\le 0.005$ (orange curves). In the inset plots we show the relative differences with respect to the mean sparsity estimated from the full catalogues.}
\label{fig:cosmo_mean}
\end{figure}
\section{Towards practical applications}\label{testcase}
We will now work towards applying the statistical analysis presented in Section~\ref{calistat} to observational data. To this purpose we have specifically developed the numerical code \textsc{lammas}\footnote{\href{https://gitlab.obspm.fr/trichardson/lammas}{https://gitlab.obspm.fr/trichardson/lammas}}. Given the mass measurements $M_{200\text{c}}$, $M_{500\text{c}}$ and $M_{2500\text{c}}$ of a galaxy cluster, the code computes the sparsity data vector $\bmath{D}=\{s_{200,500},s_{200,2500},s_{500,2500}\}$ (the last two values only if the estimate of $M_{2500\text{c}}$ is available) and performs the computation of the frequentist statistics discussed in Section~\ref{sec:frequentist} and the Bayesian computation presented in Section~\ref{sec:Bayesisan}. The code computes the frequentist p-value only for $s_{200,500}$, together with its associated uncertainty. Bayesian statistics are measured for both 1 and 3 sparsities; these include the posterior distributions $p(a_{\rm LMM}|\bmath{D})$ and their associated marginal statistics, along with the Bayes factor, $B_\text{f}$, using the available data. We implement the statistical distributions of the merging and quiescent halo populations calibrated on the halo catalogues from the Uchuu simulations \citep{Ishiyama2021} rather than MDPL2, thus benefiting from the higher mass resolution and redshift coverage of the Uchuu halo catalogues. A description of the code \textsc{lammas} is given in Appendix~\ref{LAMMAS}. In the following we will first validate this code by presenting it with haloes from N-body catalogues that were not used for calibration. We will then quantify the robustness of our analysis to observational mass biases using empirical models. In particular, we will focus on weak lensing, hydrostatic equilibrium and NFW-concentration derived galaxy cluster masses. Finally, we present a preliminary analysis of two galaxy clusters, Abell 383 and Abell 2345.
\subsection{Validation on simulated haloes}
As we have calibrated \textsc{lammas} using the Uchuu simulation suite \citep{Ishiyama2021}, we use a randomly selected sample of $10^4$ haloes from the previously described MDPL2 catalogues as validation dataset. This choice has two main advantages: firstly, it naturally guarantees that the same haloes are not used in both the calibration and the validation; secondly, it allows us to test the robustness of the method to small changes in cosmology, as the Uchuu suite is run with the cosmology of \citet{2016A&A...594A..13P} while MDPL2 is run with that of \citet{2014A&A...571A..16P}. Furthermore, we choose to do this validation at $z=0.248$ to ensure that our pipeline also performs well at redshifts $z\neq 0$.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/roc_biass_samples.pdf}
\caption{ROC curves estimated from the validation dataset for sparsities estimated from the N-body halo masses (dashed lines), from the concentration parameter of the best-fitting NFW-profile (solid lines) and in the case of a conservative model for the mass bias induced by lensing observations (dash-dotted lines), for the single sparsity Bayesian (BF 1D, orange curves) and frequentist (S 1D, blue curves) estimators and the three sparsity Bayesian estimator (BF 3D, green curves). We can see that in all cases the S 1D and BF 1D tests offer a similar detection power. Comparing the BF 3D curves to the S 1D ones, it is clear that while adding an independent sparsity measurement increases the detection power, this is not the case when the sparsities are deduced from the concentration parameter, with the latter having the opposite effect. Finally, we can also see that large mass biases have a strong negative impact on the efficiency of the detection of mergers.}
\label{fig:validation_roc_curves}
\end{figure}
We evaluate the efficiency of the detection procedure in terms of ROC curves shown in Fig.~\ref{fig:validation_roc_curves} and constructed using the same method as those shown in Fig.~{\ref{fig:roc_curves}}. We plot the case of the single sparsity frequentist (S 1D) and Bayesian (BF 1D) estimators, as well as the three sparsity Bayesian (BF 3D) estimator for sparsity measurements inferred from N-body halo masses (dashed lines), lensing masses (dash-dotted lines) and NFW-concentration derived masses (solid lines). Comparing the dashed curves of Fig.~\ref{fig:validation_roc_curves} and those in Fig.~{\ref{fig:roc_curves}} we can see that for the validation dataset considered here the efficiency of merger detection of the different test statistics is comparable to that we have inferred for the MDPL2 halo sample.
We quantify the accuracy of the estimation procedure by introducing three metrics defined as:
\begin{itemize}
\item the accuracy as given by the frequency at which the true value $a_\text{LMM}$ of a halo is recovered within the $1\,\sigma$ credible interval, $\alpha_\text{cc}$;
\item the estimated epoch of the last major merger, $\hat{a}_{\rm LMM}$;
\item the relative width of the $1\,\sigma$ credible interval, $\sigma/\hat{a}_{\rm LMM}$.
\end{itemize}
In Fig.~\ref{fig:test_metrics}, we plot these metrics as a function of the true scale factor (redshift) of the last major merger of the haloes in the validation sample for the case of a single sparsity (orange curves) and three sparsity (blue curves) measurements, to which we will simply refer as 1S and 3S respectively. At first glance, it may appear from the top panel as if the 1S estimator is more accurate at recovering the merger epoch than its 3S counterpart over a large interval, $0.2<a_{\rm LMM}<0.68$. However, this is simply due to the fact that for haloes which are more than two dynamical times from their last major merger the posterior distribution is nearly flat and the estimator returns the same estimated time, as can be seen from the plot in the central panel. Consequently, the increased accuracy is simply due to wider credible intervals, as can be seen in the bottom panel. Hence, in this particular regime it is more prudent to extract an upper bound on $\hat{a}_{\rm LMM}$ from the resulting posterior, rather than a credible interval.
We can see that the trend is reversed for recent mergers occurring at $0.68<a_{\rm LMM}<0.8$, with the 3S estimator being much more accurate at recovering the scale factor of the last major merger and with tighter error margins (see blue curves in the top and bottom panels respectively). Nevertheless, from the middle panel we may notice that both the 1S and 3S estimators have an area of confusion around the dip of the pulse feature in the $\hat{a}_{\rm LMM}$ plot. In both cases, we see that the estimator disfavours very recent mergers (at $a_{\rm LMM}\approx 0.8$) in favour of placing them in the second bump of the pulse, thus causing the median value and the $68\%$ region of $\hat{a}_{\rm LMM}$ to be lower than the true value of the last major merger epoch. This effect should be kept in mind when using the pipeline.
\begin{figure}
\centering
\includegraphics[width = .9\linewidth]{figures/estimator_tests.pdf}
\caption{\textit{Top:} Accuracy of the estimation of the epoch of the last major merger, $\alpha_{\rm cc}$, as a function of the true value $a_{\rm LMM}$ of the haloes in the validation sample for both the 1S (orange solid line) and 3S (blue solid line) estimators respectively. \textit{Middle:} Median value of the estimated epoch of the last major merger, $\hat{a}_{\rm LMM}$, as function of the true value for the 1S (orange curves) and 3S (blue curves) estimators respectively. The shaded areas correspond to the $68\%$ interval around the median, while the dashed diagonal line gives the ideal value of the estimator $\hat{a}_{\rm LMM}=a_{\rm LMM}$. \textit{Bottom:} relative width of the $68\%$ interval around the median value of $\hat{a}_{\rm LMM}$ as a function of the true value $a_{\rm LMM}$ for the 1S (orange curves) and 3S (blue curves) estimators respectively. We refer the reader to the text for a detailed discussion of the various trends.}
\label{fig:test_metrics}
\end{figure}
\subsection{Systematic Bias}
The statistical methodology we have developed here relies on sparsity estimates from N-body halo masses. However, these masses are not directly comparable to those inferred from galaxy cluster mass measurements, since the latter involve systematic uncertainties that may bias the cluster mass estimates compared to that from dark matter only simulations. Hence, before applying the sparsity test to real observations, we check the robustness of our approach against observational mass biases. More specifically, we will review conservative estimates of these biases for various mass estimation techniques and attempt to quantify the effect that these have on the sparsity.
\subsubsection{Weak Lensing Mass Bias}
A well known source of systematic error in weak lensing mass estimates comes from fitting the observed tangential shear profile of a cluster with a spherically symmetric NFW inferred shear profile. In such a case, deviations from sphericity of the mass distribution within the cluster, as well as projection effects, induce a systematic error on the estimated cluster mass that may vary at different radii, consequently biasing the evaluation of the sparsity.
\citet{Becker2011} have investigated the impact of this effect on weak lensing estimated masses. They modelled the observed mass at overdensity $\Delta$ as:
\begin{equation}
M_{\Delta}^{\text{WL}} = M_\Delta \exp(\beta_\Delta)\exp(\sigma_\Delta X),
\end{equation}
where $M_{\Delta}$ is the unbiased mass, $\beta_{\Delta}$ is a deterministic bias term, while the third factor is a stochastic term with $\sigma_{\Delta}$ quantifying the spread of a log-normal distribution and $X\sim\mathcal{N}(0,1)$. Under the pessimistic assumption of independent scatter on both mass measurements, the resulting bias on the sparsity then reads as:
\begin{equation}\label{spars_wl_bias}
s_{\Delta_1,\Delta_2}^{\rm WL} = s_{\Delta_1,\Delta_2} \left(b^{\rm WL}_{\Delta_1,\Delta_2} +1\right) \exp\left(\sigma^{\rm WL}_{\Delta_1,\Delta_2} X\right),
\end{equation}
where $b^{\rm WL}_{\Delta_1,\Delta_2} = \exp(\beta_{\Delta_1} - \beta_{\Delta_2}) - 1$ and $\sigma^{\rm WL}_{\Delta_1,\Delta_2} = \sqrt{\sigma_{\Delta_1}^2 + \sigma_{\Delta_2}^2}$, with the errors being propagated from the errors quoted on the mass biases. \citet{Becker2011} have estimated the mass bias model parameters at $\Delta_1=200$ and $\Delta_2=500$; using the values quoted in their Tabs.~3 and 4, we compute the sparsity bias $b^{\rm WL}_{200,500}$ and the scatter $\sigma^{\rm WL}_{200,500}$, which we quote in Tab.~\ref{tab:WL_bias} for different redshifts and galaxy number densities, $n_\text{gal}$, in units of galaxies per arcmin$^{2}$. Notice that the original mass bias estimates have been obtained assuming an intrinsic shape noise $\sigma_e = 0.3$.
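As a simple illustration of this propagation, the following minimal Python sketch computes the sparsity bias and scatter from the mass bias parameters at two overdensities; the input values shown are hypothetical placeholders, not those of \citet{Becker2011}:
\begin{verbatim}
import numpy as np

def sparsity_wl_bias(beta_1, beta_2, sigma_1, sigma_2):
    # Propagate the WL mass bias parameters at two overdensities to
    # the sparsity bias and scatter of Eq. (spars_wl_bias).
    b_wl = np.exp(beta_1 - beta_2) - 1.0
    sigma_wl = np.sqrt(sigma_1**2 + sigma_2**2)
    return b_wl, sigma_wl

# Illustrative placeholder values (not taken from the literature):
b_wl, sigma_wl = sparsity_wl_bias(-0.04, -0.06, 0.3, 0.4)
\end{verbatim}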
\begin{table}
\centering
\caption{Sparsity bias and scatter obtained from the weak lensing mass bias estimates by \citet{Becker2011}.}
\begin{tabular}{cccc}
\hline
& $n_\text{gal}$ & $b^\text{WL}_\text{200,500}$ & $\sigma^\text{WL}_\text{200,500}$\\
\hline
& $10$ & $0.04\pm0.02$ & $ 0.51\pm0.03 $\\
$z=0.25$ & $20$ & $ 0.01\pm0.01 $ & $ 0.40\pm0.02 $\\
& $40$ & $ 0.03\pm0.01 $ & $ 0.35\pm0.02 $\\
& & &\\
& $10$ & $0.07\pm0.07$ & $ 0.76\pm0.03 $\\
$z=0.5$ & $20$ & $ 0.02\pm0.02 $ & $ 0.58\pm0.04 $\\
& $40$ & $ 0.03\pm0.01 $ & $ 0.49\pm0.03 $\\
\hline
\end{tabular}
\label{tab:WL_bias}
\end{table}
We may notice that, although the deterministic sparsity bias is smaller than that on individual mass estimates, the scatter can be large. In order to evaluate the impact of such biases on the identification of merging clusters using sparsity estimates, we use the values of the bias parameters quoted in Tab.~\ref{tab:WL_bias} to generate a population of biased sparsities using Eq.~(\ref{spars_wl_bias}), with the constraint that $s_{200,500}^\text{WL} > 1$, for our validation sample at $z=0.25$. We then performed the frequentist test for a single sparsity measurement (the Bayesian estimator has a detection power similar to that of the frequentist one) and evaluated the Area Under the ROC Curve (AUC) as a function of the scatter $\sigma^{\rm WL}_{200,500}$ to quantify the efficiency of the estimator at detecting recent major merger events. This is shown in Fig.~\ref{fig:AUC-scatter}. Notice that a useful classifier should have AUC$>0.5$ \citep{Fawcett2006}. Hence, we can see that the scatter can greatly reduce the detection power of the sparsity estimator and render the method ineffective at detecting recent mergers for $\sigma^{\rm WL}_{200,500}>0.2$. In contrast, the estimator is a valuable classifier for smaller values of the scatter.
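A minimal sketch of this Monte Carlo procedure is given below; it assumes a toy validation sample and merger labels in place of the actual halo catalogue, and exploits the fact that the frequentist test is monotonic in the sparsity, so the AUC of the biased sparsity itself tracks the detection power:
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholders for the validation sample: true sparsities and a flag
# marking haloes with a recent major merger (|T| < 2 dynamical times).
s_true = rng.lognormal(mean=np.log(1.6), sigma=0.15, size=10000)
recent_merger = s_true > np.quantile(s_true, 0.8)

def biased_sparsity(s, b_wl, sigma_wl):
    # Apply Eq. (spars_wl_bias) and keep only physical values s > 1.
    s_obs = s * (1.0 + b_wl) * np.exp(sigma_wl * rng.standard_normal(s.size))
    return np.clip(s_obs, 1.0 + 1e-6, None)

for sigma_wl in (0.05, 0.2, 0.5):
    s_obs = biased_sparsity(s_true, b_wl=0.01, sigma_wl=sigma_wl)
    print(sigma_wl, roc_auc_score(recent_merger, s_obs))
\end{verbatim}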
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/AUC_scatter.pdf}
\caption{Area Under the ROC Curve (AUC) as a function of the scatter on the measured sparsity for WL mass estimates. A random classifier has AUC$=0.5$. The vertical and horizontal lines denote AUC$=0.6$ and the corresponding scatter $\sigma^{\rm WL}_{200,500}=0.2$, beyond which ($\sigma^\text{WL}_{200,500} > 0.2$) the detector can be considered ineffective at detecting recent mergers.}
\label{fig:AUC-scatter}
\end{figure}
\subsubsection{Hydrostatic Mass Bias}
Measurements of galaxy cluster masses from X-ray observations rely on the hypothesis that the intra-cluster gas is in hydrostatic equilibrium. Deviations from this condition can induce a radially dependent bias on the cluster masses \citep[see e.g.][]{2016ApJ...827..112B,Eckert2019,Ettori2022}, thus affecting the estimation of the cluster's sparsity. The hydrostatic mass bias has been studied by \citet{2016ApJ...827..112B}, who have performed cosmological zoom N-body/hydro simulations of 29 clusters to evaluate the bias of masses at overdensities $\Delta=200, 500$ and $2500$ (in units of the critical density) for Cool Core (CC) and No Cool Core (NCC) clusters, defined with respect to the entropy in the core of their sample, as well as for Regular and Disturbed clusters, defined by the offset of the centre of mass and the fraction of substructures.
\begin{table}
\centering
\caption{Sparsity bias from the hydrostatic mass bias estimates of \citet{2016ApJ...827..112B} for different categories of simulated clusters.}
\begin{tabular}{lccc}
\hline
& $b_{200,500}^\text{HE}$ & $b_{500,2500}^\text{HE}$ & $b_{200,2500}^\text{HE}$ \\
\hline
All & $0.003\pm0.032$ & $-0.037\pm0.025$ & $-0.033\pm0.034$ \\
CC & $-0.009\pm0.031$ & $-0.151\pm0.038$ & $-0.162\pm0.041$ \\
NCC & $0.019\pm0.046$ & $0.005\pm0.027$ & $0.023\pm0.041$ \\
Regular & $0.032\pm0.089$ & $0.025\pm0.037$ & $0.057\pm0.082$ \\
Disturbed & $-0.017\pm0.077$ & $-0.080\pm0.086$ & $-0.098\pm0.052$\\
\hline
\end{tabular}
\label{tab:hydro_biasses}
\end{table}
Following the evaluation presented in \citet{Corasaniti2018}, we use the hydrostatic mass bias estimates given in Tab.~1 of \citet{2016ApJ...827..112B} to estimate the bias on cluster sparsities, which we quote in Tab.~\ref{tab:hydro_biasses}. Overall, we can see that the hydrostatic mass bias does not significantly affect the estimated sparsity, with a bias of the order of a few percent and, in most cases, compatible with a vanishing bias, with only a few exceptions. This is consistent with the results of the recent analysis of observed X-ray clusters presented in \citet{Ettori2022}, which yields sparsity biases at the percent level, consistent with no bias at all. However, we have seen in the case of the WL mass bias that, even though the effect on the measured sparsity remains small, the scatter around the true sparsity can severely affect the efficiency of the detector at identifying recent mergers. Unfortunately, the limited sample of \citet{2016ApJ...827..112B} does not allow us to compute the hydrostatic mass bias scatter of the sparsity. If the latter behaves in the same manner as in the WL case, then we can expect the estimator to respond to the increasing scatter as in Fig.~\ref{fig:AUC-scatter}. Consequently, as long as the scatter remains small, $\sigma^{\rm HE}_{\Delta_1,\Delta_2} < 0.1$, the efficiency of the estimator will remain unaffected.
\subsubsection{Concentration Mass Bias}
We have seen in Section~\ref{sparsprof} that sparsities deduced from the concentration parameter of a NFW profile fitted to the halo density profile are biased compared to those measured using N-body masses. In particular, as seen in Fig.~\ref{fig:relative_spars_conc}, concentration deduced sparsities tend to underestimate their N-body counterparts. Hence, they are more likely to be associated with relaxed clusters than with systems in a perturbed state characterised by higher values. A notable exception is the case of haloes undergoing recent mergers, which are associated with lower concentration values, or equivalently higher sparsity, even though the N-body estimated sparsity is low. This effect is most likely due to poor fit agreement \citep{Balmes2014}, and it systematically increases the population of perturbed haloes above the detection threshold. The concurrence of these two effects leads to an apparent increase in detection power for the 1S estimator when using NFW-concentration estimated masses, as can be seen from the solid lines in Fig.~\ref{fig:validation_roc_curves}.
In contrast, when looking at the 3S case in Fig.~\ref{fig:validation_roc_curves}, there is a clear decrease in the detection power for the concentration based sparsity estimates. This is due to the differences in the pulse patterns deduced from the concentration compared to the direct measurement of the sparsity, which result in a shape of the pulse at inner radii that is significantly different from that obtained using the N-body masses. Similarly to the 1S estimator, the sparsities measured using the NFW concentration are on average shifted towards smaller values. As such, using concentration based estimates results in an overestimation of the likelihood that a halo has not undergone a recent merger.
Keeping the above discussions in mind, we now present example applications to two well-studied galaxy clusters.
\subsection{Abell 383}
Abell 383 is a cluster at $z=0.187$ that has been observed in the X-ray \citep{2004A&A...425..367B,2006ApJ...640..691V} and optical bands \citep{2002PASJ...54..833M,2012ApJS..199...25P}, with numerous studies devoted to measurements of the cluster mass from gravitational lensing analyses \citep[e.g.][]{2016MNRAS.461.3794O,2016ApJ...821..116U,2019MNRAS.488.1704K}. The cluster appears to be a relaxed system with HE masses $M_{500\text{c}}=(3.10\pm 0.32)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(1.68\pm 0.15)\cdot 10^{14}\,\text{M}_{\odot}$ from Chandra X-ray observations \citep{2006ApJ...640..691V}, corresponding to the halo sparsity $s_{500,2500}=1.84\pm 0.25$, which is close to the median of the halo sparsity distribution. We compute the merger test statistics of Abell 383 using the lensing mass estimates from the latest version of the Literature catalogues of Lensing Clusters \citep[LC$^2$,][]{2015MNRAS.450.3665S}. In particular, we use the mass estimates obtained from the analysis of the latest profile data of \citet{2019MNRAS.488.1704K}: $M_{2500\text{c}}=(2.221\pm 0.439)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(5.82\pm 1.15)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{200\text{c}}=(8.55\pm 1.7)\cdot 10^{14}\,\text{M}_{\odot}$. These give the following set of sparsity values: $s_{200,500}=1.47\pm 0.41$, $s_{200,2500}=3.85\pm 1.08$ and $s_{500,2500}=2.62\pm 0.73$. We obtain a p-value ${\rm p}=0.21$ and a Bayes factor $B_\text{f}=0.84$; incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=0.40$, which can be interpreted as an effective sparsity $s^\text{eff}_{200,500} = 1.40$. These results disfavour the hypothesis that the cluster has gone through a major merger in its recent history.
\subsection{Abell 2345}
Abell 2345 is a cluster at $z=0.179$ that has been identified as a perturbed system by a variety of studies that have investigated the distribution of the galaxy members in optical bands \citep{2002ApJS..139..313D,2010A&A...521A..78B} as well as the properties of the gas through radio and X-ray observations \citep[e.g.][]{1999NewA....4..141G,2009A&A...494..429B,2017ApJ...846...51L,2019ApJ...882...69G,2021MNRAS.502.2518S}. The detection of radio relics and the disturbed morphology of the gas emission indicate that the cluster is dynamically disturbed. Furthermore, the analysis by \citet{2010A&A...521A..78B} suggests that the system is composed of three sub-clusters. \citet{2002ApJS..139..313D} have conducted a weak lensing study on a small field of view centred on the main sub-cluster and found that the density distribution is roughly peaked on the bright central galaxy. This is also confirmed by the study of \citet{2004ApJ...613...95C}; however, the analysis by \citet{2010PASJ...62..811O} on a larger field-of-view has shown that Abell 2345 has a complex structure. The shear data have been re-analysed to infer lensing masses that are reported in the latest version of the LC$^2$ catalogue \citep{2015MNRAS.450.3665S}: $M_{200\text{c}}=(28.44\pm 10.76)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(6.52\pm 2.47)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(0.32\pm 0.12)\cdot 10^{14}\,\text{M}_{\odot}$. These mass estimates give the following set of sparsity values: $s_{200,500}= 4.36\pm 2.33$, $s_{200,2500}=87.51\pm 46.83$ and $s_{500,2500}=20.06\pm 10.74$. Using only the $s_{200,500}$ estimate results in a very small p-value, ${\rm p}=4.6\cdot 10^{-5}$. Incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=7.5\cdot10^{-4}$, which can be interpreted as an effective sparsity $s^\text{eff}_{200,500} = 2.76$, significantly lower than the measured value; however, both strongly favour the signature of a major merger event, which is confirmed by the combined analysis of the three sparsity measurements, for which we find a divergent Bayes factor. In Fig.~\ref{fig:post_A2345} we plot the marginal posterior for the single sparsity $s_{200,500}$ (orange solid line) and for the ensemble of sparsity estimates (purple solid line). In the former case we obtain a median redshift $z_{\rm LMM}=0.30^{+0.03}_{-0.06}$, while in the latter case we find $z_\text{LMM} = 0.39\pm 0.02$, which suggests that a major merger event occurred $t_\text{LMM} = 2.1\pm 0.2$ Gyr ago. One should however note that, in light of the discussions presented above, this result could be associated with a more recent merger event which, as can be seen in Fig.~\ref{fig:test_metrics}, is artificially disfavoured by our method.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/fig_A2345.pdf}
\caption{Posterior distributions for Abell 2345 obtained using three sparsity measurements from the lensing cluster masses in the LC$^2$ catalogue \citep{2015MNRAS.450.3665S}, using the shear data from \citet{2010PASJ...62..811O}. The vertical lines indicate the median values of $z_{\rm LMM}$, while the shaded areas correspond to the $68\%$ credible regions around the medians.}
\label{fig:post_A2345}
\end{figure}
\section{Conclusions}\label{conclu}
In this work we have investigated the properties of the mass profile of massive dark matter haloes hosting galaxy clusters. We have focused on haloes undergoing major merger events with the intent of finding observational proxies of the halo mass distribution that can provide hints of recent mergers in galaxy clusters. To this purpose we have performed a thorough analysis of N-body halo catalogues from the MultiDark-Planck2 simulation.
We have shown that halo sparsity provides a good proxy of the halo mass profile, especially in the case of merging haloes whose density profile significantly deviates from the NFW formula. We have found that major mergers leave a characteristic universal imprint on the evolution of the halo sparsity. This manifests as a rapid pulse response to the major merger event with a shape that is independent of the time at which the major merger occurs. The onset of the merger systematically increases the value of the sparsity, suggesting that mass in the inner part of the halo is displaced relative to the mass in the external region. Following the pulse, a quiescent evolution of the halo mass distribution is recovered within only $\sim 2$ dynamical times, which is consistent with the findings of the concentration analysis by \citet{Wang2020}.
The universal imprint of major mergers on the evolution of halo sparsity implies the universality of the distributions of halo sparsities of merging and quiescent haloes respectively. That is to say, at any given redshift it is possible to distinctly characterise the distributions of merging and quiescent haloes. This is because the distribution of sparsity values of haloes that have undergone their last major merger within $|T|\lesssim 2$ dynamical times differs from that of quiescent haloes that had their last major merger at earlier epochs, $|T|\gtrsim 2$. The former constitute a sub-sample of the whole halo population that largely contributes to the scatter of the halo sparsity distribution with their large sparsity values.
The characterisation of these distributions enables us to devise statistical tests to evaluate whether a cluster at a given redshift, with given sparsity estimates, has gone through a major merger in its recent history and, eventually, at which epoch. To this purpose we have developed different metrics based on a standard binary frequentist test, Bayes factors and Support Vector Machines. We have shown that having access to cluster mass estimates at three different overdensities, allowing us to obtain three sparsity estimates, provides more robust conclusions. In the light of these results we have developed a numerical code that can be used to investigate the presence of major mergers in observed clusters. As example cases, we have considered Abell 2345, a known perturbed cluster, as well as Abell 383, a known quiescent cluster.
In the future we plan to expand this work in several new directions. On the one hand, it will be interesting to assess the impact of baryons on halo sparsity estimates, especially for merging haloes. This should be possible through the analysis of N-body/hydro simulations of clusters. On the other hand, it may also be useful to investigate whether the universality of the imprint of major mergers on the evolution of halo sparsity depends on the underlying cosmological model. The analysis of N-body halo catalogues from simulations of non-standard cosmological scenarios, such as the RayGalGroupSims suite \citep{Corasaniti2018,2021arXiv211108745R}, may allow us to address this point.
It is important to stress that the study presented here focuses on the statistical relation between halo sparsity and the epoch of the last major merger, defined as the time when the parent halo merges with a smaller mass halo that has at least one third of its mass. This is different from the collision time, or the central passage time of two massive haloes, which occur on a much shorter time scale. Hence, the methodology presented here cannot be applied to Bullet-like clusters that have just gone through a collision, since the distribution of the collisionless dark matter component in the colliding clusters has not been disrupted and their merger has yet to be completed. Overall, our results open the way to timing major mergers in perturbed galaxy clusters through measurements of dark matter halo sparsity.
\section*{Acknowledgements}
We are grateful to Stefano Ettori, Mauro Sereno and the anonymous referee for carefully reading the manuscript and their valuable comments.
The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP).
The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064.
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de).
We thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1 in the Skies \& Universes site for cosmological simulations. The Uchuu simulations were carried out on Aterui II supercomputer at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu DR1 effort has made use of the skun@IAA\_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).
\section*{Data Availability}
During this work we have used publicly available data from the MDPL2 simulation suite \citep{Klypin2016}, provided by the CosmoSim database \href{https://www.cosmosim.org/}{https://www.cosmosim.org/}, in conjunction with publicly available data from the Uchuu simulation suite \citep{Ishiyama2021}, provided by the Skies and Universes database \href{http://skiesanduniverses.org/}{http://skiesanduniverses.org/}.
The numerical code \textsc{lammas} used for this analysis is available at: \href{https://gitlab.obspm.fr/trichardson/lammas}{https://gitlab.obspm.fr/trichardson/lammas}. The package also contains the detailed fitting parameters of the 1S and 3S likelihood distributions for all Uchuu snapshots up to $z = 2$.
\bibliographystyle{mnras}
\section{Introduction}
The performance of a sea-going ship is important not only to keep the fuel and operational costs in check but also to reduce global emissions from the shipping industry. Analyzing the performance of a ship is also of great interest for charter parties to estimate the potential of a ship and the profit that can be made out of it. Therefore, driven by both economic and social incentives, the practice of ship performance analysis and monitoring has grown substantially in recent times. The importance of in-service data in this context is very well understood by most of the stakeholders, clearly reflected by the amount of investment made by them in onboard sensors, data acquisition systems, and onshore operational performance monitoring and control centers.
The traditional way to evaluate the performance of a ship is using the noon report data provided by the ship's crew. A more exact approach, but not very feasible for commercial vessels, was suggested by \citet{Walker2007}: conducting in-service sea trials in calm-water conditions on a regular basis. With the advent of sensor-based continuous monitoring systems, the current trend is to directly or indirectly observe the evolution of the calm-water speed-power curve over time. ISO 19030 \cite{ISO19030}, along with several researchers (\citet{Koboevic2019}; \citet{Coraddu2019DigTwin}), recommends observing the horizontal shift (along the speed axis) of the calm-water speed-power curve, termed the speed-loss, over time to monitor the performance of a sea-going ship using the in-service data. Alternatively, it is suggested to observe the vertical shift of the calm-water speed-power curve, often termed the change in power demand (adopted by \citet{Gupta2021PrefMon} and \citet{CARCHEN2020}). Some researchers have also formulated and used indirect performance indicators like fuel consumption (\citet{Koboevic2019}), resistance (or fouling) coefficient (\citet{Munk2006}; \citet{Foteinos2017}; \citet{CARCHEN2020}), (generalized) admiralty coefficient (\citet{Ejdfors2019}; \citet{Gupta2021}), wake fraction (\citet{CARCHEN2020}), fuel efficiency (\citet{Kim2021}), etc. In each of these cases, it is clearly seen (and most of the time acknowledged) that the results are quite sensitive to the quality of the data used to estimate the ship's performance.
The ship's performance-related data obtained from various sources usually inherits some irregularities due to several factors like sensor inaccuracies, vibration of the sensor mountings, electrical noise, variation of the environment, etc., as pointed out in the Guide for Smart Functions for Marine Vessels and Offshore Units (Smart Guide) published recently by \citet{ABS2020guide}. The quality of the data used to carry out ship performance analysis, and therefore of the results obtained, can be significantly improved by adopting some rational data processing techniques, as shown by \citet{Liu2020} and \citet{Kim2020}. Another important factor is the source of data, as it may also be possible to obtain such datasets using the publicly available AIS data (\citet{You2017}). \citet{Dalheim2020DataPrep} presented a data preparation toolkit based on the in-service data recorded onboard two ships. The presented toolkit was developed for a specific type of dataset, where the variables were recorded asynchronously and had to be synchronized before carrying out ship performance analysis. The current work rather focuses on challenges faced while processing an already synchronized dataset.
The current paper presents a review of different data sources used for ship performance analysis and monitoring, namely, onboard recorded in-service data, AIS data, and noon reports, along with the characteristics of each of these data sources. Finally, a data processing framework is outlined which can be used to prepare these datasets for ship performance analysis and monitoring. Although the data processing framework is developed for the performance monitoring of ships, it may easily be adapted for several other purposes. With the easy availability of data from ships, the concept of creating digital twins for sea-going ships is becoming quite popular. \citet{Major2021} presented the concept of a digital twin for a ship and the cranes onboard it. The digital twin established by \citet{Major2021} can be used to perform three main offshore operations from an onshore control center: remote monitoring of the ship, maneuvering in harsh weather, and crane operations. Moreover, as pointed out by \citet{Major2021}, the digital twin technology can also be adopted for several other purposes, like predictive maintenance, ship autonomy, etc. Nevertheless, the data processing framework presented here can also be used to efficiently process the real-time data used to create digital twins for ships.
The following section discusses the art of ship performance analysis and the bare minimum characteristics of a dataset required to do such an analysis. Section \ref{sec:dataSources} presents the above mentioned sources of data used for ship performance analysis, their characteristics, and the tools required to process these datasets. Section \ref{sec:results} presents the data processing framework which can be used to process and prepare these datasets for ship performance monitoring. Finally, section \ref{sec:conclusion} finishes the paper with concluding remarks.
\section{Ship Performance Analysis}
The performance of a ship-in-service can be assessed by observing its current performance and, then, comparing it to a benchmarking standard. There are several ways to establish (or obtain) a benchmarking standard, like model test experiments, full-scale sea trials, CFD analysis, etc. It may even be possible to establish a benchmarking standard using the in-service data recorded onboard a newly built ship, as suggested by \citet{Coraddu2019DigTwin} and \citet{Gupta2021}. On the other hand, evaluating the current performance of a ship requires a good amount of data processing as the raw data collected during various voyages of a ship is susceptible to noise and errors. Moreover, the benchmarking standard is, generally, established for only a given environmental condition, most likely the calm-water condition. In order to draw a comparison between the current performance and the benchmarking standard, the current performance must be translated to the same environmental condition, therefore, increasing the complexity of the problem.
\subsection{Bare Minimum Variables}
For translating the current performance data to the benchmarking standard's environmental condition and carrying out a reliable ship performance analysis, a list of bare minimum variables must be recorded (or observed) at a good enough sampling rate. The bare minimum list of variables must provide the following information about each sampling instant for the ship: (a) Operational control, (b) Loading condition, (c) Operational environment, and (d) Operating point. The variables containing the above information must either be directly recorded (or observed) onboard the ship, collected from regulatory data sources such as AIS, or derived using additional data sources; for example, the operational environment can easily be derived using the ship's location and timestamp with the help of an appropriate weather hindcast (or metocean) data repository.
The operational control information should contain the values of the propulsion-related control parameters set by the ship's captain on the bridge, like shaft rpm, rudder angle, propeller pitch, etc. The shaft rpm (or propeller pitch, in the case of ships equipped with controllable pitch propellers running at constant rpm) is by far the most important variable here as it directly correlates with the ship's speed-through-water. It should be noted that even in the case of constant power or speed mode, the shaft rpm (or propeller pitch) continues to be the primary control parameter, as the set power or speed is actually achieved by using a real-time optimizer (incorporated in the governor) which adjusts the shaft rpm (or propeller pitch) to reach the set power or speed. Nevertheless, in case the shaft rpm (or propeller pitch) is not available, it may be appropriate to use the ship's speed-through-water as an operational control parameter, as done by several researchers (\citet{FARAG2020}; \citet{Laurie2021}; \citet{Minoura2020}; \citet{Liang2019}), but in this case, it should be kept in mind that, unlike the shaft rpm (or propeller pitch), the speed-through-water is a dependent variable strongly influenced by the loading condition and the operational environment.
The loading condition should contain the information regarding the ship's fore and aft draft, which can be easily recorded onboard the ship. Although the wetted surface area and under-water hull-form are more appropriate for a hydrodynamic analysis, these can be derived easily using the ship's hull form, if the fore and aft draft is known. The operational environment should at least contain variables indicating the intensity of wind and wave loads acting on the ship, like wind speed and direction, significant wave height, mean wave direction, mean wave period, etc. Finally, the operating point should contain the information regarding the speed-power operating point for the sampling instant. Table \ref{tab:bareMinVars} presents the list of bare minimum variables required for ship performance analysis. The list given in the table may have to be modified according to ship specifications, for example, the propeller pitch is only relevant for a ship equipped with a controllable pitch propeller.
\begin{table}[ht]
\caption{The list of bare minimum data variables required for ship performance analysis.} \label{tab:bareMinVars}
\centering
\begin{tabular}{l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Variables}} \\
\hline
Operational Control & Shaft rpm, Rudder angle, \& Propeller pitch \\
\hline
Loading Condition & Fore and aft draft \\
\hline
Operational Environment & \begin{tabular}[l]{@{}l@{}}Longitudinal and transverse wind speed, Significant wave height,\\ Relative mean wave direction, \& Mean wave period\end{tabular} \\
\hline
Operating Point & Shaft power \& Speed-through-water \\
\hline
\end{tabular}
\end{table}
\subsection{Best Practices} \label{sec:bestPractices}
It is well-known that the accuracy of various measurements is not the same. It also depends on the source of the measurements. The measurements recorded using onboard sensors are generally more reliable as compared to the manually recorded noon report measurements, due to the possibility of human error in the latter. Even in the case of onboard recorded sensor measurements, the accuracy varies from sensor-to-sensor and case-to-case. Some sensors can be inherently faulty, whereas others can give incorrect measurements due to unfavorable installation and operational conditions, and even the best ones are known to have some measurement noise. Thus, it is recommended to establish and follow some best practices for a reliable and robust ship performance analysis.
The onboard measurements for shaft rpm ($n$) and shaft torque ($\tau$) are generally obtained using a torsion meter installed on the propeller shaft, which is considered to be quite reliable. The shaft power ($P_s$) measurements are also derived from the same, as the shaft power is related to the shaft rpm and torque through the following identity: $P_s = 2\pi n\tau$. It should be noted that no approximation is involved in this formulation and, therefore, it should be validated against the data if all three variables ($n, \tau, P_s$) are available. On the other hand, the measurements for speed-through-water are known to have several problems, as presented by \citet{DALHEIM2021}. Thus, it is recommended to use shaft rpm (and not speed-through-water) as the independent variable while creating data-driven regression models to predict the shaft power. For the same reason, it may also be a good idea to quantify the change in the ship's performance in terms of the change in power demand rather than the speed-loss (or speed-gain) recommended by ISO 19030 \cite{ISO19030}.
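As a simple illustration, this identity can be checked directly on the recorded time-series; the following minimal sketch assumes the shaft rpm is recorded in revolutions per minute and uses a hypothetical tolerance that would need tuning for the sensors at hand:
\begin{verbatim}
import numpy as np

def flag_power_mismatch(rpm, torque_nm, power_w, rtol=0.02):
    # Shaft power identity: P_s = 2*pi*n*tau, with n in revolutions
    # per second; samples violating it beyond rtol are suspect.
    n_rps = np.asarray(rpm) / 60.0
    power_derived = 2.0 * np.pi * n_rps * np.asarray(torque_nm)
    return ~np.isclose(np.asarray(power_w), power_derived, rtol=rtol)
\end{verbatim}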
Further, it is also quite common to use fuel oil consumption as a key performance indicator for ship performance analysis (\citet{Karagiannidis2021}). The fuel oil consumption can be easily calculated from the engine delivered torque and engine rpm, if the specific fuel consumption (SFC) curve for the engine is known. Even though the SFC curve is established and supplied by the engine manufacturer, it is only valid for a specific operating environment, and it is known to evolve over time due to engine degradation and maintenance. Thus, including the fuel oil consumption in ship performance analysis increases the complexity of the problem, which requires taking engine health into account. If the objective of ship performance analysis is also to take into account the engine performance, then it may be beneficial to divide the problem into two parts: (a) Evaluate the change in power demand (for hydrodynamic performance analysis), and (b) Evaluate the change in engine SFC (for engine performance analysis). Now, the latter can be formulated as an independent problem with a completely new set of variables-of-interest, like engine delivered torque, engine rpm, ambient air temperature, calorific value of fuel, turbocharger health, etc. This would not only improve the accuracy of ship's hydrodynamic performance analysis but would also allow the user to develop a more comprehensive and, probably, accurate analysis model. The current work is focused on the hydrodynamic performance analysis.
\subsection{Sampling Frequency}
Almost all electronics-based sensors are known to have some noise in their measurements. The simplest way to subdue this noise is by taking an average over a number of measurements (known as a `sample' in statistics) recorded over a very short period of time (milliseconds). It is also known that the statistical mean of a `sample' converges to the true mean (i.e., the mean of the entire population), thereby eliminating the noise, as the number of measurements in the `sample' is increased, provided the observations follow a symmetrical distribution. Nevertheless, it is observed that high frequency data still retains some noise, probably due to the fact that the number of measurements in each `sample' is small, i.e., the measurements are obtained by averaging a small number of samples recorded over a very short period of time. On the other hand, as seen in the case of noon reports and most of the in-service datasets, time-averaging the measurements over a longer period of time obscures the effect of moderately varying influential factors, for example, instantaneous incident wind and waves, response motions, etc. Thus, data sampled at a very high frequency may retain high noise, while data sampled at a very low frequency, with time-averaged values, may obscure important effects in the time-series. Furthermore, in a third scenario, it may be possible that the data acquisition (DAQ) system onboard the ship is simply using a low sampling frequency, recording instantaneous values instead of time-averaged ones, saving a good amount of storage and bandwidth while transmitting the data to the shore-based control centers. These low frequency instantaneous values may result in an even more degraded data quality as they would contain noise as well as obscure the moderately varying effects.
The ideal sampling frequency would also depend on the objective of the analysis and the recorded variables. For example, if the objective of the analysis is to predict the motion response of a ship or analyse its seakeeping characteristics, the data should be recorded at a high enough sampling frequency such that it is able to capture such effects. \citet{hansen2011performance} analyzed the ship's rudder movement and the resulting resistance, and demonstrated that if the sampling interval were large, the overall dynamics of the rudder movement would not be captured, resulting in a difference in resistance. One criterion for selecting the data sampling rate is the Nyquist frequency (\citet{jerri1977shannon}), which is widely used in signal processing. According to this criterion, the sampling frequency shall be more than twice the frequency of the observed phenomenon to sufficiently capture the information regarding the phenomenon. Therefore, if the aim is not to record any information regarding the above mentioned moderately varying effects (instantaneous incident wind and waves, response motions, etc.), it may be acceptable to just obtain low frequency time-averaged values so that such effects are subdued. But it may still be useful to obtain high frequency data in this case, as it can be advantageous from a data cleaning point of view. For example, the legs of the time-series showing very high variance, due to noise or moderately varying effects, can be removed from the analysis to increase the reliability of the results.
\section{Data Sources, Characteristics \& Processing Tools} \label{sec:dataSources}
\subsection{In-service Data}
The in-service data, referred to here, is recorded onboard a ship during its voyages. This is achieved by installing various sensors onboard the ship, collecting the measurements from these sensors on a regular basis (at a predefined sampling rate) using a data acquisition (DAQ) system, and fetching the collected data to onshore control centers. The two most important features of in-service data are the sampling rate (or, alternatively, sampling frequency) and the list of recorded variables. Unfortunately, there is no proper guide or standard which is followed while defining both these features for a ship. Thus, the in-service data processing has to be adapted to each case individually.
The in-service datasets used here are recorded over a uniform (across all recorded variables) and evenly-spaced sampling interval, which makes it easier to adopt and apply data processing techniques. Otherwise, where the data is sampled with a non-uniform and uneven sampling interval, some more pre-processing has to be done in order to prepare it for further analysis, as demonstrated by \citet{Dalheim2020DataPrep}, who presented a detailed algorithm to deal with time vector jumps and synchronizing non-uniformly recorded data variables. The problem of synchronization can, alternatively, be looked at using the well-known dynamic time warping (DTW) technique, which is generally used for aligning the measurements taken by two sensors measuring the same or highly correlated features. In a different approach, \citet{virtanen2020scipy} demonstrated that the collected data can be down-sampled or up-sampled (resampled) to obtain a uniform and evenly sampled dataset.
\subsubsection{Inherently Faulty \& Incorrect Measurements} \label{sec:incorrMeasureInServData}
Some of the sensors onboard a ship can be inherently faulty and provide incorrect measurements due to unfavorable installation or operational conditions. Many of these can actually be fixed quite easily. For instance, \citet{Wahl2019} presented the case of faulty installation of the wind anemometer onboard a ship, resulting in missing measurements for head-wind condition probably due to the presence of an obstacle right in front of the sensor. Such a fault is fairly simple to deal with, say, by fixing the installation of the sensor, and it is even possible to fix the already recorded dataset using the wind measurements from one of the publicly available weather hindcast datasets. Such an instance also reflects the importance of data exploration and validation for ship performance analysis. Unlike above, the case of draft and speed-through-water measurement sensors is not as fortunate and easy to resolve.
The ship's draft is generally recorded using a pressure transducer installed onboard the ship. The pressure transducer measures the hydrostatic pressure acting on the bottom plate of the ship, which is further converted into the corresponding water level height, or the draft measurement. When the ship starts to move and the layer of water in contact with the ship develops a relative velocity with respect to the ship, the total pressure at the ship's bottom reduces due to the non-zero negative hydrodynamic pressure and, therefore, further measurements taken by the draft sensor are incorrect. This is known as the Venturi effect. It may seem like a simple case, and one may argue that the measurements can be fixed by just adding the water level height equivalent to the hydrodynamic pressure, which may be calculated using the ship's speed-through-water. Here, it should be noted that, firstly, to accurately calculate the hydrodynamic pressure, one would need the localized relative velocity of the flow (and not the ship's speed-through-water), which is impractical to measure, and secondly, the speed-through-water measurements are also known to have several sources of inaccuracy. Alternatively, it may be possible to obtain the correct draft measurements from the ship's loading computer. The loading computer can calculate the draft and trim in real-time based on information such as the ship's lightweight, cargo weight and distribution, and ballast water loading configuration.
The state-of-the-art speed-through-water measurement device uses the Doppler acoustic speed log principle. Here, the relative speed of the water around the hull (i.e., the speed-through-water) is measured by observing the shift in frequency (popularly known as the Doppler shift) of the ultrasound pulses emitted from the ship's hull, due to its motion. The ultrasonic pulses are reflected by the ocean bottom, impurities in the surrounding water, marine life, and even the liquid-liquid interface between the density difference layers in the deep ocean. The speed of the water surrounding the ship is influenced by the boundary layer around the hull, so it is required that only the ultrasonic pulses reflected by particles outside the boundary layer are used to estimate the speed-through-water. Therefore, a minimum pulse travelling distance has to be prescribed for the sensor. If the prescribed distance is too large or if the ship is sailing in shallow waters, the Doppler shift is calculated using the reflection from the ocean bottom, i.e., the sensor is in ground-tracking mode, and therefore, it would clearly record the ship's speed-over-ground instead of the speed-through-water. \citet{DALHEIM2021} presented a detailed account of the uncertainty in the speed-through-water measurements for a ship, commenting that the speed log sensors are considered to be among the most inaccurate ones onboard the ship.
It may also be possible to estimate the speed-through-water of a ship using the ship's speed-over-ground and the incident longitudinal water current speed. The speed-over-ground of a ship is measured using a GPS sensor, which is considered to be quite accurate, but unfortunately, the water current speed is seldom recorded onboard the ship. It is certainly possible to obtain the water current speed from a weather hindcast data source, but the hindcast measurements are not accurate enough to obtain a good estimate for the speed-through-water, as indicated by \citet{Antola2017}. It should also be noted that the temporal and spatial resolution of weather hindcast data is relatively larger than the sampling interval of the data recorded onboard the ship. Moreover, the water current speed varies along the depth of the sea; therefore, the incident longitudinal water current speed must be calculated as an integral of the water current speed profile over the depth of the ship. Thus, in order to obtain accurate estimates of the speed-through-water, the water current speed has to be measured or estimated up to a certain depth of the sea with good enough accuracy, which is not possible with the current state-of-the-art.
\subsubsection{Outliers} \label{sec:outliers}
Another big challenge with data processing is the problem of detecting and handling outliers. As suggested by \citet{Olofsson2020}, it may be possible to categorize outliers into the following two broad categories: (a) Contextual outliers, and (b) Correlation-defying outliers\footnote{Called collective outliers by \citet{Olofsson2020}.}. \citet{Dalheim2020DataPrep} presented methods to detect and remove contextual outliers, further categorized as (i) obvious (or invalid) outliers, (ii) repeated values, (iii) drop-outs, and (iv) spikes. Contextual outliers are easily identifiable as they either violate the known validity limits of one or more recorded variables (as seen in the case of obvious outliers and spikes) or present an easily identifiable but anomalous pattern (as seen in the case of repeated values and drop-outs).
The case of correlation-defying outliers is much more difficult to handle, as they can easily blend into the cleaned data pool. The two most popular methods which can be used to identify correlation-defying outliers are Principal Component Analysis (PCA) and autoencoders. Both these methods try to reconstruct the data samples after learning the correlation between the variables. It is quite obvious that a correlation-defying outlier would result in an abnormally high reconstruction error and, therefore, can be detected using such techniques. In a recent attempt, \citet{Thomas2021} demonstrated an ensemble method combining PCA and autoencoders coupled with isolation forest to detect such outliers.
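A minimal sketch of the PCA-based variant is given below; the column names, the number of retained components and the outlier threshold are hypothetical choices that would need tuning for a given dataset:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_reconstruction_error(X, n_components=3):
    # Score each sample by how poorly a low-rank PCA model reconstructs
    # it; unusually large scores suggest correlation-defying outliers.
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Xs)
    X_rec = pca.inverse_transform(pca.transform(Xs))
    return np.mean((Xs - X_rec) ** 2, axis=1)

# errors = pca_reconstruction_error(df[["rpm", "power", "stw", "draft"]].values)
# outliers = errors > np.quantile(errors, 0.99)  # analyst-chosen threshold
\end{verbatim}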
\subsubsection{Time-Averaging Problem} \label{sec:timeAvgProb}
As aforementioned, the onboard recorded in-service data can be supplied as time-averaged values over a short period of time (generally up to around 15 minutes). Although the time-averaging method eliminates white noise and reduces the variability in the data samples, it introduces a new problem in the case of angular measurements. The angular measurements are, generally, recorded in the range of 0 to 360 degrees. When the measurement is around 0 or 360 degrees, it is obvious that the instantaneous measurements, reported by the sensor, will fluctuate in the vicinity of 0 and 360 degrees. For instance, assuming that the sensor reports a value of about 0 degrees for half of the averaging time and about 360 degrees for the remaining time, the time-averaged value recorded by the data acquisition (DAQ) system will be around 180 degrees, which is significantly incorrect. Most of the angular measurements recorded onboard a ship, like relative wind direction, ship heading, etc., are known to inherit this problem, and it should be noted that, unlike the example given here, the incorrect time-averaged angle can take any value between 0 and 360 degrees, depending on the instantaneous values over which the average is calculated.
Although it may be possible to fix these incorrect values using a carefully designed algorithm, there is no established method available at the moment. Thus, it is suggested to fix these measurements using an alternate source for the data variables. For example, the wind direction can be gathered easily from a weather hindcast data source. Thus, it can be used to correct or just replace the relative wind direction measurements, recorded onboard the ship. The ship's heading, on the other hand, can be estimated using the latitude and longitude measurements from the GPS sensor.
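If the instantaneous angular values are still available (e.g. at the DAQ stage, before averaging), the wrap-around problem can be avoided altogether by averaging unit vectors instead of the raw angles; a minimal sketch is given below, noting that this does not recover records that have already been incorrectly time-averaged:
\begin{verbatim}
import numpy as np

def circular_mean_deg(angles_deg):
    # Average angles by summing unit vectors, avoiding the 0/360 degree
    # wrap-around problem of a plain arithmetic mean.
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    mean = np.arctan2(np.sin(a).mean(), np.cos(a).mean())
    return np.rad2deg(mean) % 360.0

# circular_mean_deg([358, 2, 4]) -> ~1.3 deg; the plain mean gives ~121 deg
\end{verbatim}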
\subsection{AIS Data}
AIS is an automatic tracking system that uses transceivers to help ships and maritime authorities identify and monitor ship movements. It is generally used as a tool for ship transportation services to prevent collisions during navigation. Ships over 300 tons must be equipped with transponders capable of transmitting and receiving all AIS message types under the SOLAS Convention. AIS data is divided into dynamic (position, course, speed, etc.), static (ship name, dimensions, etc.), and voyage-related data (draft, destination, ETA, etc.). Dynamic data is automatically transmitted every 2-10 seconds depending on the speed and course of the ship, and if anchored, such information is automatically transmitted every 6 minutes. On the other hand, static and voyage-related data is provided by the ship's crew, and it is transmitted every 6 minutes regardless of the ship's movement state.
Since dynamic information is automatically updated based on sensor data, it is susceptible to faults and errors similar to those described in section \ref{sec:incorrMeasureInServData}. In addition, problems may occur even in the process of collecting and transmitting data between AIS stations, as noted by \citet{weng2020exploring}. The AIS signal can also be influenced by external factors, such as weather conditions and the Earth's magnetic field, due to their interference with the very high frequency (VHF) equipment. Therefore, some of the AIS messages are lost or get mixed up. Moreover, the receiving station has a short time slot during which the data must be received, and due to heavy traffic in the region, it may fail to receive the data from all the ships in that time. In some cases, small ships deliver inaccurate information due to incorrectly calibrated transmitters, as shown by \citet{weng2020exploring}. In a case study, \citet{harati2007automatic} observed that 2\% of the MMSI (Maritime Mobile Service Identity) information was incorrect and 30\% of the ships were not properly marked with the correct navigation status. In the case of ship dimensions, about 18\% of the information was found to be inaccurate. Therefore, before using raw AIS data for ship performance analysis, it is necessary to check key parameters such as GPS position, speed, and course, and the data identified as incorrect must be fixed.
\subsubsection{Irrational Speed Data}
The GPS speed (or speed-over-ground) measurements from AIS data may contain samples that show a sudden jump compared to adjacent samples or an excessively higher or lower value than the normal operating range. This type of inaccurate data can be identified through comparison with the location and speed data of adjacent samples. The distance covered by the ship at the reported speed during the time between two adjacent AIS messages is calculated, and the distance between the two actual coordinates is calculated using the Haversine formula (given by equation \ref{eq:havsineDistance}) to compare the two values. If the difference between the two values is negligible, the GPS speed can be said to be normal; if not, it is recommended to replace it with the GPS speed value of an adjacent sample. It should be noted that if the time difference between the samples is too short, the deviation of the distance calculated through this method may be large. In such a case, it is necessary to consider the average trend over several samples. If there are no valid samples nearby or the GPS coordinate data is problematic, one can refer to the normal service speed according to the ship type, as shown in table \ref{tab:vParams}, or, if available, a more specific method such as a normalcy box (\citet{rhodes2005maritime,tu2017exploiting}), which defines the speed range of ships according to their geographic location, may be applied.
\begin{equation}\label{eq:havsineDistance}
{D = 2r\sin^{-1} \left(\sqrt{\sin^{2}\left(\frac{y_{i+1}-y_{i}}{2}\right)+\cos{\left(y_i\right)}\cos{\left(y_{i+1}\right)}\sin^{2}\left(\frac{x_{i+1}-x_{i}}{2}\right)}\right)}
\end{equation}
where $D$ is the distance between the two coordinates ($x_i$, $y_i$) and ($x_{i+1}$, $y_{i+1}$), $r$ is the radius of the Earth, and ($x_i$, $y_i$) are the longitude and latitude at timestamp $i$.
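A minimal sketch of this consistency check is given below; the tolerance is an illustrative placeholder that would need tuning per ship type and AIS reporting interval:
\begin{verbatim}
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two GPS fixes (degrees in, km out).
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def speed_is_plausible(lon1, lat1, t1, lon2, lat2, t2, sog_knots,
                       tol_knots=2.0):
    # Compare the reported speed-over-ground with the speed implied by
    # two consecutive AIS positions; timestamps t1, t2 are in seconds.
    dist_nm = haversine_km(lon1, lat1, lon2, lat2) / 1.852
    implied_knots = dist_nm / ((t2 - t1) / 3600.0)
    return abs(implied_knots - sog_knots) <= tol_knots
\end{verbatim}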
\begin{table}[ht]
\caption{Typical service speed range of different ship types, given by \citet{solutions2018basic}.} \label{tab:vParams}
\centering
\begin{tabular}{l|l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Type}} & \multicolumn{1}{c}{\textbf{Service speed (knot)}}\\
\hline
Tanker & Crude oil carrier & 13-17\\
& Gas tanker/LNG carrier & 16-20\\
& Product & 13-16\\
& Chemical & 15-18\\
\hline
Bulk carrier & Ore carrier & 14-15\\
& Regular & 12-15\\
\hline
Container & Line carrier & 20-23\\
& Feeder & 18-21\\
\hline
General cargo & General cargo & 14-20\\
& Coaster & 13-16\\
\hline
Roll-on/roll-off cargo & Ro-Ro/Ro-Pax & 18-23\\
\hline
Passenger ship & Cruise ship & 20-23\\
& Ferry & 16-23\\
\hline
\end{tabular}
\end{table}
\subsubsection{Uncertainty due to Human Error}
AIS data, excluding dynamic information, is not automatically updated by the sensors, but it is logged by the ship's crew manually, so there is a possibility of human error. This includes information such as the draft, navigation status, destination, and estimated time of arrival (ETA) of the ship. Although it is difficult to clearly distinguish the incorrectly entered information, it is possible to indirectly determine whether the manual input values have been updated using the automatically logged dynamic information. Each number in navigation status represents ship activity such as `under way using engine (0)', `at anchorage (1)', and `moored (5)'. If this field is being updated normally, it should be `0' if the ship is in-trip and `5' if it is at berth. If the navigation status of the collected AIS data is `1' or `5' above a certain GPS speed (or speed-over-ground), or if the state is set to `0' even when the speed is 0 and the location is within the port, the AIS data have not been updated on time and other manually entered information should also be questioned.
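A minimal sketch of such a plausibility check, with illustrative speed thresholds that would need adjustment in practice, could read as follows:
\begin{verbatim}
def nav_status_suspect(nav_status, sog_knots, in_port):
    # Flag AIS messages whose manually set navigation status contradicts
    # the automatically updated speed-over-ground.
    if nav_status in (1, 5) and sog_knots > 1.0:
        return True   # reported at anchor/moored but clearly moving
    if nav_status == 0 and sog_knots < 0.5 and in_port:
        return True   # reported under way but stationary inside the port
    return False
\end{verbatim}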
\subsection{Noon Report Data}
Ships of more than 500 gross tons engaged in international navigation are required to send a noon report to the company, which briefly records what happened on the ship from the previous noon to the present noon. The noon report must basically contain sufficient information regarding the location, course, speed, and internal and external conditions affecting the vessel's voyage. Additionally, the shipping company collects information related to fuel consumption and remaining fuel onboard, propeller slip, average RPM, etc. as needed. Such information is often used as a ship management tool and as reference data, for example for monitoring and evaluating the ship's performance, calculating energy efficiency operating indicators, and obtaining fuel and freshwater order information. Despite its customary use, the standardized information in the noon reports may not be sufficient to accurately assess the performance of the ship, due to several problems discussed below. This information is based on the average values from noon to noon; for an accurate ship performance analysis, higher frequency samples and additional data may be recommended.
\subsubsection{Uncertainties due to Averaging Measurements \& Human Error} \label{sec:noonReportsAvgProb}
Basically, the information reported through the noon reports is created based on the measurement values of the onboard sensors. Therefore, it may also suffer from the problems of inherently faulty sensors and incorrect measurements, as discussed in section \ref{sec:incorrMeasureInServData}. Apart from the problems caused by sensors, the noon report data may have problems caused by the use of 24-hour averaged values and human errors. The data collection interval is once a day and the average of the values recorded over 24 hours is reported; thus, significant inaccuracies may be included in the data. \citet{aldous2015uncertainty} performed a sensitivity analysis to assess the uncertainty due to the input data for ship performance analysis using continuously recorded in-service data and noon reports. It was observed that the uncertainty of the outcome was significantly sensitive to the number of samples in the dataset. In other words, such uncertainty can be mitigated through the use of data representing longer time-series, data collection with higher frequency, and data processing. These results were also confirmed by \citet{park2017comparative} and \citet{themelis2018comparative}. \citet{park2017comparative} demonstrated in a case study that the power consumption between the noon reports and the recorded sensor data differed by 6.2\% and 17.8\% in ballast and laden voyages, respectively.
With values averaged over a long time period, as in the case of noon reports, the variations due to acceleration/deceleration and maneuvering cannot be captured. In particular, in the case of ships that sail relatively short voyages, such as feeder ships and ferries, inappropriate data for performance analysis may be provided due to frequent changes in the operational state. In the case of information regarding the weather and sea states, the reported values generally correspond to the conditions right before the noon report is sent from the ship; therefore, it is not easy to account for the changes in the performance of the ship due to the variation of weather conditions during the last 24 hours. In general, the information to be logged in the noon report is read from the onboard sensors and noted down by a person. Thus, it is possible that the time at which the values are read from the sensors every day may differ, and different sensors may be used for the values to be logged in the same field. In addition, there may be cases where an observed value is incorrectly entered into the noon report. Thus, if the process of preparing the noon reports is not automated, there would always be a possibility of human errors in the data.
\section{Results: Data Processing Framework} \label{sec:results}
The results are presented here in the form of the developed data processing framework, which can be used to process raw data obtained from one of the above-mentioned data sources (section \ref{sec:dataSources}) for ship performance analysis. The data processing framework is designed to resolve most of the problems cited in the previous section. Figure \ref{fig:flowDiag} shows the flow diagram of the data processing framework. The following sections briefly explain the consecutive processing steps of the flow diagram. The user may not be able to carry out every step due to the unavailability of some information or features in the dataset; for example, without GPS data (latitude, longitude and timestamp variables), it may not be possible to interpolate weather hindcast data. In such a case, it is recommended to skip the corresponding step and continue with the next one.
The data processing framework is outlined in such a manner that, once implemented, it can be executed semi-automatically, i.e., requiring only limited intervention from the user. The semi-automatic nature of the framework also results in fast data processing, which can be important for very large datasets. The implementation of the framework as executable code is equally important to obtain a semi-automatic and fast workflow; it is therefore recommended to adopt best practices and optimized algorithms for each individual processing step according to the programming language in use. The reliability of the data processing activity is also critical to obtaining good results. It is therefore important to validate the work done in each processing step by creating visualizations (or plots) and inspecting them for any undesired errors. The usual practice adopted here, while processing the data using the framework, is to create several such visualizations, like time-series plots of data variables in a trip-wise manner (explained later in section \ref{sec:divideIntoTrips}), at the end of each processing step and then to inspect them to validate the outcome.
\begin{figure}
\centering
\begin{tikzpicture}[font=\small,thick, node distance = 0.35cm]
\node[draw,
rounded rectangle,
minimum width = 2.5cm,
minimum height = 1cm
] (block1) {Raw Data};
\node[draw,
below=of block1,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block2) {Ensure Uniform \\ Time Steps};
\node[draw,
below=of block2,
minimum width=3.5cm,
minimum height=1cm
] (block3) {Divide into Trips};
\node[draw,
below=of block3,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block4) {Interpolate Hindcast \\ (Using GPS Data)};
\node[draw,
trapezium,
trapezium left angle = 65,
trapezium right angle = 115,
trapezium stretches,
left=of block4,
minimum width=3.5cm,
minimum height=1cm
] (block5) {Weather Hindcast};
\node[draw,
below=of block4,
minimum width=3.5cm,
minimum height=1cm
] (block6) {Derive New Features};
\node[draw,
diamond,
right=of block6,
minimum width=2.5cm,
inner sep=1,
align=center
] (block17) {Interpolation \\ Error?};
\node[draw,
below=of block6,
minimum width=3.5cm,
minimum height=1cm
] (block7) {Validation Checks};
\node[draw,
diamond,
below=of block7,
minimum width=2.5cm,
inner sep=1,
align=center
] (block8) {Data Processing \\ Errors Detected?};
\node[coordinate,right=1.8cm of block8] (block9) {};
\node[coordinate,right=1.6cm of block4] (block10) {};
\node[draw,
below=of block8,
minimum width=3.5cm,
minimum height=1cm
] (block11) {Fix Draft \& Trim};
\node[draw,
below=of block11,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block12) {Calculate Hydrostatics \\ (Displacement, WSA, etc.)};
\node[draw,
trapezium,
trapezium left angle = 65,
trapezium right angle = 115,
trapezium stretches,
left=of block12,
minimum width=3.5cm,
minimum height=1cm
] (block15) {Ship Particulars};
\node[draw,
below=of block12,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block13) {Calculate Resistance \\ Components};
\node[draw,
below=of block13,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block16) {Data Cleaning \& \\ Outlier Detection};
\node[draw,
rounded rectangle,
below=of block16,
minimum width = 2.5cm,
minimum height = 1cm,
inner sep=0.25cm
] (block14) {Processed Data};
\draw[-latex] (block1) edge (block2)
(block2) edge (block3)
(block3) edge (block4)
(block4) edge (block6)
(block6) edge (block7)
(block7) edge (block8)
(block8) edge node[anchor=east,pos=0.25,inner sep=2.5]{No} (block11)
(block11) edge (block12)
(block12) edge (block13)
(block13) edge (block16)
(block16) edge (block14);
\draw[-latex] (block5) edge (block4);
\draw[-latex] (block15) edge (block12);
\draw[-latex] (block8) -| (block9) node[anchor=south,pos=0.1,inner sep=2.5]{Yes}
(block9) -| (block17);
\draw[-latex] (block17) |- (block10)
(block10) |- (block4) node[anchor=south,pos=0.1,inner sep=2.5]{Yes};
\draw[-latex] (block17) -- (block6) node[anchor=south,pos=0.4,inner sep=2.5]{No};
\end{tikzpicture}
\caption{Data processing framework flow diagram.} \label{fig:flowDiag}
\end{figure}
\subsection{Ensure Uniform Time Steps}
Ensuring uniform and evenly-spaced samples not only makes it easier to apply time-gradient-based data processing or analysis steps but also helps avoid any misunderstanding while visualizing the data, by clearly showing a gap in the time-series plots (when plotted against sample numbers) and removing any abrupt jumps in the data values. Depending on the data acquisition (DAQ) system, the in-service data recorded onboard a ship generally has a uniform and evenly-spaced sampling interval. Nevertheless, it is observed that a sub-dataset extracted from the main database may contain several missing time steps (or timestamps). In such a case, it is recommended to check for missing timestamps by simply calculating the gradient of the timestamps and, for each missing timestamp, to add an empty row containing only the missing timestamp value. Finally, the dataset should be sorted according to the timestamps, resulting in a uniform and evenly-spaced list of samples.
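A minimal sketch of this gap-filling step is shown below, assuming the data is held in a pandas DataFrame with a \texttt{timestamp} column and that the recorded timestamps already lie on a known nominal grid (the 15-minute interval is purely illustrative).
\begin{verbatim}
import pandas as pd

def make_uniform(df: pd.DataFrame, interval: str = "15min") -> pd.DataFrame:
    """Insert empty rows for missing timestamps and sort chronologically.

    Assumes a 'timestamp' column and that the recorded samples already lie
    on the nominal grid (the interval is an illustrative assumption).
    """
    df = df.set_index("timestamp").sort_index()
    # Full, evenly spaced time index between the first and last sample.
    full_index = pd.date_range(df.index.min(), df.index.max(), freq=interval)
    # Reindexing adds NaN-filled rows wherever a timestamp was missing.
    return df.reindex(full_index).rename_axis("timestamp").reset_index()
\end{verbatim}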
A similar procedure can be adopted for a noon report dataset. The noon reports are generally recorded every 24 hours, but the interval may sometimes be more or less than 24 hours if the vessel's local time zone is adjusted, especially on the day of arrival or departure. The same procedure may not be feasible for AIS data, as the samples here are, in general, sporadically distributed. The samples are collected at different frequencies depending on the ship's moving state, surrounding environment, traffic, and the type of AIS receiving station (land-based or satellite). It is observed that the data is collected in short and continuous sections of the time-series, leaving some large gaps between samples, as shown in figure \ref{fig:resampleSOG}. Here, it is recommended to first resample the short and continuous sections of AIS data to a uniform sampling interval through data resampling techniques, i.e., up-sampling or down-sampling (as demonstrated by \citet{virtanen2020scipy}), and then fill the remaining large gaps with empty rows.
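For the AIS case, one possible sketch (again assuming a pandas DataFrame, with the 15-minute target interval and the 2-hour gap threshold as illustrative assumptions) is to split the time-series wherever the gap between consecutive messages exceeds the threshold, resample each continuous segment, and leave the large gaps unfilled:
\begin{verbatim}
import pandas as pd

def resample_ais(df: pd.DataFrame, interval: str = "15min",
                 max_gap: str = "2h") -> pd.DataFrame:
    """Down-sample continuous AIS segments; large gaps are left unfilled."""
    df = df.set_index("timestamp").sort_index()
    # Start a new segment whenever the gap to the previous message is too large.
    segment_id = (df.index.to_series().diff() > pd.Timedelta(max_gap)).cumsum()
    segments = [seg.resample(interval).mean(numeric_only=True)
                for _, seg in df.groupby(segment_id)]
    return pd.concat(segments).sort_index().reset_index()
\end{verbatim}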
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/resample.png}
\caption{Down-sampling the collected AIS data to a 15-minute interval.} \label{fig:resampleSOG}
\end{figure}
\subsection{Divide Into Trips} \label{sec:divideIntoTrips}
Using conventional tools, data visualization becomes a challenge if the number of samples in the dataset is very large; it may simply not be practical to plot the whole time-series in a single plot. Dividing the time-series into individual trips discretizes it into sensible sections, which can be treated individually for further data processing and analysis. Plotting an individual trip also gives a complete overview of a port-to-port journey of the ship. Dividing the data into trips and at-berth legs further makes the subsequent processing computationally less expensive, as a large number of samples where the ship is not in a trip can be ignored in the following steps; for such samples, it may not be necessary to interpolate hindcast data, calculate hydrostatics, calculate resistance components, etc. Lastly, identifying individual trips also makes the draft and trim correction step easier.
Dividing data into trips is substantially easier for noon reports and AIS data, as they are generally supplied with a source and/or destination port name. In the case of in-service data, such information may not be available. If the GPS data (latitudes and longitudes) is available, it may then be possible to simply plot the samples on the world map and obtain individual trips by looking at the port calls. Alternatively, if the in-service data is supplied with a `State' variable\footnote{Generally available for ships equipped with Marorka systems (www.marorka.com).} (mentioned by \citet{Gupta2019}), indicating the propulsive state of the ship, like `Sea Passage', `At Berth', `Maneuvering', etc., it is recommended to find the continuous legs of the `At Berth' state and enumerate the gaps between these legs, containing the rest of the states, as trip numbers, as shown in figure \ref{fig:splitTSviaState}. Otherwise, it is recommended to use the shaft rpm and GPS speed (or speed-over-ground) time-series to identify the start and end of each port-to-port trip, as sketched in the code example below. Here, a threshold value can be adopted for the shaft rpm and the GPS speed; all samples above these threshold values (either or both) are considered in-trip samples, as shown in figure \ref{fig:splitTS}. Continuous legs of such in-trip samples can then simply be identified and enumerated. It may also be advisable to append a few samples before and after each identified trip to obtain a proper trip, starting from zero speed and ending at zero speed; this accounts for the noise in the shaft rpm and GPS speed variables when the ship is actually static. Finally, if the GPS data is available, further adjustments can be made by looking at the port calls on the world map plotted with the GPS data.
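The following is a rough sketch of this threshold-based trip identification, assuming numpy arrays for the shaft rpm and GPS speed and the illustrative thresholds of figure \ref{fig:splitTS}; the number of padded samples is equally an assumption.
\begin{verbatim}
import numpy as np

def label_trips(shaft_rpm, gps_speed, rpm_threshold=10.0,
                speed_threshold=3.0, pad=5):
    """Return a trip number for every sample (0 = not in a trip)."""
    in_trip = (shaft_rpm > rpm_threshold) | (gps_speed > speed_threshold)
    # Append a few samples before/after each trip so that trips start and
    # end at (near) zero speed.
    idx = np.flatnonzero(in_trip)
    for shift in range(1, pad + 1):
        in_trip[np.clip(idx - shift, 0, len(in_trip) - 1)] = True
        in_trip[np.clip(idx + shift, 0, len(in_trip) - 1)] = True
    # Enumerate the continuous legs of in-trip samples.
    changes = np.diff(in_trip.astype(int), prepend=0)
    starts = np.flatnonzero(changes == 1)
    ends = np.flatnonzero(changes == -1)
    if len(ends) < len(starts):          # trip running until the last sample
        ends = np.append(ends, len(in_trip))
    trip_no = np.zeros(len(in_trip), dtype=int)
    for k, (s, e) in enumerate(zip(starts, ends), start=1):
        trip_no[s:e] = k
    return trip_no
\end{verbatim}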
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Split_TS_J3.png}
\caption{Splitting time-series into trips using the `State' variable.} \label{fig:splitTSviaState}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Static_Indices_J3.png}
\caption{Splitting time-series into trips using threshold values (indicated by dashed red lines) for shaft rpm (10 rpm) and GPS speed (3 knots) variables.} \label{fig:splitTS}
\end{subfigure}
\caption{Splitting time-series into trips.}
\end{figure}
\subsection{Interpolate Hindcast \& GPS Position Correction} \label{sec:interpolateHindcast}
Even if the raw data contains information regarding the state of the weather for each data sample, it may be a good idea to interpolate weather hindcast (or metocean) data available from one of the well-established sources. The interpolated hindcast data not only provides a quantitative measure of the weather conditions (and, consequently, the environmental loads) experienced by the ship, but also helps carry out some important validation checks (discussed later in section \ref{sec:resultsValChecks}). In order to interpolate hindcast data, the information regarding the location (latitude and longitude) and the recording timestamp must be available in the ship's dataset. For ship performance analysis, one should aim to gather, at least, information regarding the three main environmental load factors, i.e., wind, waves and sea currents, from the weather hindcast sources. For a more detailed analysis, it may also be a good idea to obtain additional variables, like sea water temperature (both at the surface and its gradient along the depth of the ship), salinity, etc.
Before interpolating the weather hindcast data to the ship's locations and timestamps, it is recommended to ensure that the available GPS (or navigation) data is validated and corrected (if possible) for any errors. If the GPS data is inaccurate, weather information is obtained at the wrong location, resulting in incorrect values for further analysis. For instance, the ship's original trajectory obtained from the GPS data, presented in figure \ref{fig:gps_outlier}, shows the ship proceeding in a certain direction while occasionally jumping to an off-route location. The ship, of course, may have gone off-route as shown here, but referring to the GPS speed and heading of the ship at the corresponding times, shown in figure \ref{fig:gps_condition}, it is obvious that the navigation data is incorrect. Such an irrational position change can be detected through the two-stage steady-state (or stationarity) filter suggested by \citet{Gupta2021}, based on the method developed by \citet{Dalheim2020}. The first stage of the filter uses a sliding window to remove unsteady samples by performing a t-test on the slope of the data values, while the second stage performs an additional gradient check on the samples failing the first stage to retain misidentified samples. The `irrational position' markers in figure \ref{fig:gps_outlier} show the coordinates identified as unsteady when the above two-stage filter is applied to the longitude and latitude time-series. The filtered trajectory is then obtained after removing the samples with `irrational position' from the original data.
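The full two-stage filter is beyond the scope of a short example, but a much simplified stand-in for its gradient check, flagging samples whose implied speed from the previous position is physically impossible, could look as follows (numpy arrays assumed; the speed limit is an illustrative assumption):
\begin{verbatim}
import numpy as np

def flag_irrational_positions(lat, lon, t_seconds, max_speed_mps=15.0):
    """Flag samples whose implied speed from the previous sample is impossible.

    A simplified stand-in for the gradient check of the two-stage filter;
    the 15 m/s limit is an illustrative assumption, not a recommended value.
    """
    R = 6371000.0  # mean Earth radius [m]
    lat_r, lon_r = np.radians(lat), np.radians(lon)
    dlat, dlon = np.diff(lat_r), np.diff(lon_r)
    # Haversine distance between consecutive samples.
    a = (np.sin(dlat / 2) ** 2
         + np.cos(lat_r[:-1]) * np.cos(lat_r[1:]) * np.sin(dlon / 2) ** 2)
    dist = 2.0 * R * np.arcsin(np.sqrt(a))
    speed = dist / np.diff(t_seconds)
    flags = np.zeros(len(lat), dtype=bool)
    flags[1:] = speed > max_speed_mps
    return flags
\end{verbatim}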
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/GPS_outlier.png}
\caption{Original trajectory and filtered trajectory with irrational GPS position.} \label{fig:gps_outlier}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/GPS_condition.png}
\caption{Trends of GPS speed, heading, and position of the ship over the corresponding period.} \label{fig:gps_condition}
\end{subfigure}
\caption{GPS position cleaning using the steady-state detection algorithm.}
\end{figure}
The hindcast data sources generally allow downloading a subset of the variables, timestamps, and a sub-grid of latitudes and longitudes, i.e., the geographical region of interest. Depending on the hindcast source, the datasets can be downloaded manually (by filling a form), using an automated API script, or even by directly accessing the provider's ftp servers. It may also be possible to select the temporal and spatial resolution of the variables being downloaded. In some cases, the hindcast web servers allow the user to send a single query, in terms of location, timestamp, and list of variables, to extract the required data for an individual sample. However, every query received by these servers is generally queued for processing, causing substantially long waiting times, as they face a considerable amount of traffic from all over the world. Thus, it is recommended to simply download the required subset of data to a local machine for faster interpolation.
Once the hindcast data files are available offline, the main task at hand is to understand the cryptic (but highly efficient) data packaging format. Nowadays, the two most popular formats for such data files are GRIdded Binary data (GRIB) and NetCDF. GRIB (available as GRIB1 or GRIB2) is the international standard accepted by the World Meteorological Organization (WMO), but due to some compatibility issues with Windows operating systems, it may be preferable to use the NetCDF format.
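Assuming the files are downloaded in NetCDF format, they can be inspected and sub-set conveniently with the xarray library; the file name, variable name (\texttt{swh}) and coordinate names below depend on the hindcast provider and are shown only as an illustration.
\begin{verbatim}
import xarray as xr

# Open a downloaded hindcast file lazily (file name is illustrative).
ds = xr.open_dataset("hindcast_2020_01.nc")

# Inspect the available variables, coordinates, and dimensions.
print(ds)

# Example: significant wave height on a sub-grid around the ship's route
# (coordinate names and slice bounds are provider-dependent assumptions).
swh = ds["swh"].sel(latitude=slice(62, 58), longitude=slice(3, 8))
\end{verbatim}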
Finally, a step-by-step interpolation has to be carried out for each data sample from the ship's dataset. Algorithm \ref{algo:hindcastInterp} shows a simple procedure for an n-th order (in time) interpolation scheme. Here, the spatial and temporal interpolations are performed in steps \ref{algoStep:spatialInterp} and \ref{algoStep:temporalInterp}, respectively. For a simple and reliable procedure, it is recommended to perform the spatial interpolation using a grid of latitudes and longitudes around the ship's location, after fitting a linear or non-linear 2D surface over the hindcast grid. It may be best to use a linear surface here because, firstly, the hindcast data may not be accurate enough for a higher-order interpolation to provide any better estimates, and secondly, in some cases, a higher-order interpolation may result in highly inaccurate estimates due to the waviness of the over-fitted non-linear surface. Similar arguments apply to the temporal interpolation, and therefore a linear interpolation in time can also be considered acceptable. The advantage of using the given algorithm is that the interpolation steps can easily be validated by plotting contours (for the spatial interpolation) and time-series (for the temporal interpolation).
\begin{algorithm}
\caption{A simple algorithm for n-th order interpolation of weather hindcast data variables.}\label{algo:hindcastInterp}
\begin{algorithmic}[1]
\State $wData \gets $ weather hindcast data
\State $x \gets $ data variables to interpolate from hindcast
\State $wT \gets $ timestamps in $wData$
\ForAll{timestamps in ship's dataset}
\State $t \gets $ current ship time stamp
\State $loc \gets $ current ship location (latitude \& longitude)
\State $i \gets n+1$ indices of $wT$ around $t$
\ForAll{$x$}
\ForAll{$i$}
\State $x[i] \gets $ 2D spatial interpolation at $loc$ using $wData[x][i, :, :]$ \label{algoStep:spatialInterp}
\EndFor
\State $X \gets $ n-th order temporal interpolation at $t$ using $x[i]$ \label{algoStep:temporalInterp}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
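A possible Python sketch of Algorithm \ref{algo:hindcastInterp} for the linear case, assuming the hindcast has been loaded with xarray and that the coordinates are named \texttt{time}, \texttt{latitude} and \texttt{longitude} (provider-dependent assumptions):
\begin{verbatim}
import numpy as np
import xarray as xr

def interpolate_hindcast(ds, variables, ship_times, ship_lats, ship_lons):
    """Bilinear-in-space, linear-in-time interpolation of hindcast variables.

    Coordinate names and the use of xarray's built-in linear interpolation
    are assumptions; masked (land) nodes are filled with zero beforehand,
    as discussed in the text.
    """
    fields = ds[list(variables)].fillna(0.0)
    out = {v: np.full(len(ship_times), np.nan) for v in variables}
    for k, (t, lat, lon) in enumerate(zip(ship_times, ship_lats, ship_lons)):
        # Interpolate all requested variables at the ship's time and position.
        point = fields.interp(time=t, latitude=lat, longitude=lon,
                              method="linear")
        for v in variables:
            out[v][k] = float(point[v])
    return out
\end{verbatim}
The per-sample loop mirrors the algorithm for clarity only; xarray also accepts whole arrays of coordinates at once, which is considerably faster for large datasets.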
An important feature of hindcast datasets is the masking of invalid values. For instance, the significant wave height should only be predicted by the hindcast model for grid nodes which fall in the sea; requesting the value of such a variable on land should result in an invalid value. Such invalid values (or nodes) are by default masked in the downloaded hindcast data files, presumably for efficient storage of the data. These masked nodes may be filled with zeros before carrying out the spatial interpolation in step \ref{algoStep:spatialInterp}, as one or more of these nodes may contribute to the interpolation. Alternatively, if a particular masked node contributes to the interpolation, it can be set to the mean of the other nodes surrounding the point of interpolation, as suggested by \citet{Ejdfors2019}. It is argued by \citet{Ejdfors2019} that this helps avoid artificially low (zero) values during the interpolation, but if the grid resolution is fine enough, the calculated mean (of the unmasked surrounding nodes) is also not expected to be much higher than zero.
\subsection{Derive New Features}
Interpolating the weather hindcast variables to the ship's location at a given time provides the hindcast variables in the global (or the hindcast model's) reference frame. For further analysis, it may be appropriate to translate these variables to the ship's frame of reference, and it may furthermore be desirable to calculate some new variables which are more relevant for the analysis or which help validate the assimilated (ship and hindcast) dataset. The wind and sea current variables, obtained from the hindcast source and the ship's dataset, can be resolved into longitudinal and transverse speed components for validation and further analysis. The wave load variables cannot be resolved in a similar manner, but the mean wave direction should be translated into the relative mean wave direction (relative to the ship's heading or course).
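For instance, the true wind from the hindcast, typically given as east and north components, can be resolved into the ship's frame given the heading; the sign convention below (positive longitudinal towards the bow, positive transverse towards starboard) is an assumption that must be kept consistent with the rest of the analysis.
\begin{verbatim}
import numpy as np

def wind_in_ship_frame(u_east, v_north, heading_deg):
    """Resolve the true wind (east/north components) into the ship's frame.

    Positive longitudinal points towards the bow and positive transverse
    towards starboard (assumed convention); heading in degrees from north.
    """
    heading = np.radians(heading_deg)
    longitudinal = u_east * np.sin(heading) + v_north * np.cos(heading)
    transverse = u_east * np.cos(heading) - v_north * np.sin(heading)
    return longitudinal, transverse
\end{verbatim}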
\subsection{Validation Checks} \label{sec:resultsValChecks}
Although it is recommended to validate each processing step by visualizing (or plotting) the task being done, it may be a good idea to take an intermediate pause and perform all possible validation checks at this point. These validation checks not only help assess the reliability of the dataset but can also be used to understand the correlation between various features. The validation checks can be done top-down, starting from the most critical feature and moving to the least critical one. As explained in section \ref{sec:bestPractices}, the shaft power measurements can be validated against the shaft rpm and shaft torque measurements, if available; otherwise, simply plotting the shaft rpm against the shaft power can also provide a good insight into the quality of the data. For a better assessment, it is suggested to visualize the shaft rpm vs shaft power overlaid with the engine operational envelope and propeller curves, as presented by \citet{Liu2020} (in figure 11). Any sample falling outside the shaft power overload envelope (especially at high shaft rpm) should be removed from the analysis, as it likely contains measurement errors. It may also be possible to make corrections if the shaft power data appears to be shifted (up or down) with respect to the propeller curves due to a sensor bias.
The quality of the speed-through-water measurements can be assessed by validating them against an estimate obtained as the difference between the speed-over-ground and the longitudinal current speed. It should be kept in mind that the two values may not match very well due to several problems cited in section \ref{sec:incorrMeasureInServData}. Visualizing the speed-through-water vs shaft power along with all the available estimates of the speed-power calm-water curve is also an important validation step (shown in figure \ref{fig:speedVsPowerWSPCurves}). Here, the majority of the measurement data should accumulate around these curves. In case of disparity between the curves, the curve obtained through the sea trial of the actual ship may take precedence.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{Figures/Log_speed_vs_Power_0_J3.png}
\caption{Speed-through-water (log speed) vs shaft power with various estimates of speed-power calm-water curves.} \label{fig:speedVsPowerWSPCurves}
\end{figure}
The interpolated weather hindcast data variables must also be validated against the measurements taken onboard the ship. This is quite critical as the sign and direction conventions assumed by the hindcast models and by the ship's sensors (or data acquisition system) are probably not the same, which may cause mistakes during the interpolation step. Moreover, most ships are equipped with anemometers that can measure the actual (true) and relative wind speed and direction, and these two modes can be switched through a simple manipulation by the crew onboard. It is possible that such a mode change occurred during the data recording period, resulting in errors in the recorded data. In addition, there may be a difference between the reference height of the wind hindcast data and the vertical position of the installed anemometer, which may lead to somewhat different results even at the same location at sea. The wind speed at the reference height (${V_{WT}}_{ref}$) can be estimated from the anemometer-recorded wind speed ($V_{WT}$), assuming a wind speed profile, as follows (recommended by \citet{ITTC2017}):
\begin{equation}\label{eq:referenceHeight}
{V_{WT}}_{ref} = V_{WT}\left(\frac{Z_{ref}}{Z_{a}}\right)^{\frac{1}{9}}
\end{equation}
Where $Z_{ref}$ is the reference height above the sea level and $Z_a$ is the height of the anemometer.
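As a small illustration, the correction is a one-liner in code:
\begin{verbatim}
def wind_speed_at_reference_height(v_wt, z_ref, z_a):
    """Correct the anemometer wind speed to the reference height using the
    1/9 power-law wind profile of the equation above."""
    return v_wt * (z_ref / z_a) ** (1.0 / 9.0)
\end{verbatim}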
Finally, these wind measurements can be translated into the longitudinal and transverse relative components. The obtained transverse relative wind speed can be validated against the transverse wind speed obtained from the hindcast source, as they are essentially the same quantity. Similarly, the difference between the longitudinal relative wind speed and the speed-over-ground of the ship can be validated against the longitudinal wind speed from the hindcast, as shown in figure \ref{fig:longWindSpeedValidation}. In the case of time-averaged in-service data, the problem of faulty averaging of angular measurements when the values are near 0 or 360 degrees (i.e., the angular limits), explained in section \ref{sec:timeAvgProb}, must also be checked for, and appropriate corrective measures should be taken. From figure \ref{fig:longWindSpeedValidation}, it can clearly be seen that the time-averaging problem (in the relative wind direction) causes the longitudinal wind speed (estimated using the ship data) to jump from positive to negative, resulting in a mismatch with the corresponding hindcast values. In such a case, it is recommended to either fix these faulty measurements, which may be difficult as there is no proven way to do so, or to simply use the hindcast values for further analysis.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/LongWindSpeed_J3.png}
\caption{Validating longitudinal wind speed obtained using the ship data against the values obtained from the hindcast. The time-averaging problem with angular measurements around 0 or 360 degrees (explained in section \ref{sec:timeAvgProb}) is clearly visible here.} \label{fig:longWindSpeedValidation}
\end{figure}
As discussed for the noon reports in section \ref{sec:noonReportsAvgProb}, the weather information there generally refers to the state of the weather at the time when the report is logged, which is probably not the average state from noon to noon. Furthermore, the wind loads are observed on the Beaufort scale; therefore, the deviation may be somewhat large when converted to a velocity scale. In this case, it is recommended to use the daily average values obtained from the weather hindcast data over the travel region, rather than the noon report values.
\subsection{Data Processing Errors}
The validation step is critical for finding any processing mistakes or inherent problems with the dataset, as demonstrated in the previous section. Such problems or mistakes, if detected, must be corrected before moving forward with the processing and analysis. The main mistakes found at this step are generally either interpolation mistakes or an incorrect formulation of a newly derived feature. These mistakes should be rectified accordingly, as shown in the flow diagram (figure \ref{fig:flowDiag}).
\subsection{Fix Draft \& Trim} \label{sec:fixDraft}
The draft measurements recorded onboard the ship are often found to be incorrect due to the Venturi effect, explained briefly in section \ref{sec:incorrMeasureInServData}. The Venturi effect causes the draft measurements to drop to a lower value due to a non-zero negative dynamic pressure as soon as the ship develops a relative velocity with respect to the water around the hull. Thus, the simplest way to fix these incorrect measurements is to interpolate the draft during a voyage using the draft measured just before and after the voyage. Such a simple solution provides good results for the simple case where the draft of the ship remains essentially unchanged during the voyage, except for the reduction of draft due to the consumed fuel, as shown in figure \ref{fig:simpleDraftCorr}.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_014.png}
\caption{Simple draft correction.} \label{fig:simpleDraftCorr}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_TS_033_Corr_J3.png}
\caption{Complex draft correction.} \label{fig:complexDraftCorr}
\end{subfigure}
\caption{Correcting in-service measured draft.}
\end{figure}
In a more complex case, where the draft of the ship is changed in the middle of the voyage while the ship is still moving, i.e., ballasting operations or trim adjustments are conducted during transit, the simple draft interpolation would result in corrections which can be far from the actual draft of the vessel. As shown in figure \ref{fig:complexDraftCorr}, the fore draft drops and the aft draft increases in the middle of the voyage without much change in the vessel speed, indicating trim adjustments during transit. In this case, a more complex correction is applied, taking into account the change in draft during the transit. Here, first of all, a draft change operation is identified (marked by the green and red vertical lines in figure \ref{fig:complexDraftCorr}); then the difference between the measurements before and after the operation is calculated by averaging over a number of samples. Finally, a ramp is created between the start of the draft change operation (green line) and the end of the operation (red line). The slope of the ramp is calculated using the difference between the draft measurements before and after the draft change operation. The draft change operation can either be identified manually, by looking at the time-series plots, or using the steady-state (or stationarity) filter developed by \citet{Dalheim2020}.
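A sketch of both corrections is given below, assuming numpy arrays holding the draft time-series of a single trip; the sample indices and the 30-sample averaging window are illustrative assumptions.
\begin{verbatim}
import numpy as np

def interpolate_trip_draft(draft, trip_start, trip_end):
    """Simple case: replace the in-trip draft by a linear interpolation
    between the values measured just before and just after the voyage."""
    corrected = draft.copy()
    corrected[trip_start:trip_end] = np.linspace(
        draft[trip_start - 1], draft[trip_end], trip_end - trip_start)
    return corrected

def ramp_trip_draft(draft, op_start, op_end, window=30):
    """Complex case: linear ramp across an identified draft-change operation.

    op_start/op_end mark the operation (the green/red lines in the figure);
    the 30-sample averaging window is an illustrative assumption.
    """
    before = draft[max(op_start - window, 0):op_start].mean()
    after = draft[op_end:op_end + window].mean()
    corrected = draft.copy()
    corrected[op_start:op_end] = np.linspace(before, after, op_end - op_start)
    return corrected
\end{verbatim}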
In the case of AIS data, \citet{bailey2008training} reported that 31\% of the draft values in the investigated AIS messages had obvious errors. The draft information in AIS data generally corresponds to the condition of the ship while arriving at or departing from a port, and changes due to fuel consumption and ballast adjustments onboard are rarely updated. Since the draft obtained from the AIS data as well as from the noon reports has a long update cycle and is entered by humans, it is practically difficult to fix the draft values as precisely as in the case of in-service data. However, by comparing the obtained draft with a reference value, it may be possible to gauge whether the obtained draft is, in fact, plausible. If the obtained draft deviates excessively from the reference, the corresponding data samples can be removed from further analysis, or the obtained draft value can be replaced with a more appropriate value. Table \ref{tab:draftRatio} shows the average draft ratio, i.e., the ratio of the actual draft ($T_c$) to the design draft ($T_d$), investigated by \citet{olmer2017greenhouse} for various ship types from 2013 to 2015. As summarized in the table, the draft ratio varies depending on the ship type and the voyage type. By using these values as the above-mentioned reference, the draft obtained from the AIS data and noon reports can be roughly checked and corrected.
\begin{table}[ht]
\caption{Average draft ratio ($T_c/T_d$) for different ship types. $T_c$ = actual draft during a voyage; $T_d$ = design draft of the ship.} \label{tab:draftRatio}
\centering
\begin{tabular}{l|c|c}
\hline
\multicolumn{1}{c|}{\textbf{Ship types}} & \multicolumn{1}{c|}{\textbf{Ballast Voyage}} & \multicolumn{1}{c}{\textbf{Laden Voyage}}\\
\hline
Liquefied gas tanker & 0.67 & 0.89\\
Chemical tanker & 0.66 & 0.88\\
Oil tanker & 0.60 & 0.89\\
Bulk carrier & 0.58 & 0.91\\
General cargo & 0.65 & 0.89\\
\hline
\multicolumn{3}{c}{\textit{The following ship types do not generally have ballast-only voyages.}} \\
\hline
Container & \multicolumn{2}{c}{0.82}\\
Ro-Ro & \multicolumn{2}{c}{0.87}\\
Cruise & \multicolumn{2}{c}{0.98}\\
Ferry pax & \multicolumn{2}{c}{0.90}\\
Ferry ro-pax & \multicolumn{2}{c}{0.93}\\
\hline
\end{tabular}
\end{table}
\subsection{Calculate Hydrostatics}
Depending on the type of performance analysis, it may be necessary to have features like displacement, wetted surface area (WSA), etc. in the dataset, as they are more relevant from a hydrodynamic point of view. Moreover, most of the empirical or physics-based methods for resistance calculations (to be carried out in the next step) require these features. Unfortunately, these features cannot be directly recorded onboard the ship, but it is fairly straightforward to estimate them using the ship's hydrostatic table or hull form (or offset table) for the corresponding mean draft and trim of each data sample. Here, it is recommended to use the corrected draft and trim values obtained in the previous step. If the detailed hull form is not available, the wetted surface area can also be estimated using the empirical formulas shown in table \ref{tab:wsaParams}. The displacement at design draft, on the other hand, can be estimated using the ship particulars and the typical range of the block coefficient ($C_B$), presented in table \ref{tab:cbParams}.
\begin{table}[ht]
\caption{Estimation formulas for wetted surface area of different ship types.} \label{tab:wsaParams}
\centering
\begin{tabular}{l|l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Formula}} & \multicolumn{1}{c}{\textbf{Reference}}\\
\hline
Tanker/Bulk carrier & $WSA = 0.99\cdot(\frac{\nabla}{T}+1.9\cdot L_{WL}\cdot T)$ & \citet{Kristensen2017} \\
Container & $WSA = 0.995\cdot(\frac{\nabla}{T}+1.9\cdot L_{WL}\cdot T)$ & \citet{Kristensen2017} \\
Other (General) & $WSA = 1.025\cdot(\frac{\nabla}{T}+1.7\cdot L_{PP}\cdot T)$ & \citet{molland2011maritime} \\
\hline
\end{tabular}
\end{table}
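The formulas of table \ref{tab:wsaParams} translate directly into code; the ship-type strings below are illustrative labels for the three categories in the table.
\begin{verbatim}
def wetted_surface_area(volume, draft, lwl, lpp, ship_type):
    """Estimate the wetted surface area [m^2] from the table formulas.

    volume = displaced volume [m^3]; draft, lwl, lpp in [m]; ship_type is
    an illustrative label ('tanker', 'bulk carrier', 'container', or other).
    """
    if ship_type in ("tanker", "bulk carrier"):
        return 0.99 * (volume / draft + 1.9 * lwl * draft)
    if ship_type == "container":
        return 0.995 * (volume / draft + 1.9 * lwl * draft)
    return 1.025 * (volume / draft + 1.7 * lpp * draft)
\end{verbatim}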
\begin{table}[ht]
\caption{Typical block coefficient ($C_B$) range at design draft for different ship types, given by \citet{solutions2018basic}.} \label{tab:cbParams}
\centering
\begin{tabular}{l|l|c}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Type}} & \multicolumn{1}{c}{\textbf{Block coefficient ($C_B$)}}\\
\hline
Tanker & Crude oil carrier & 0.78-0.83\\
& Gas tanker/LNG carrier & 0.65-0.75\\
& Product & 0.75-0.80\\
& Chemical & 0.70-0.78\\
\hline
Bulk carrier & Ore carrier & 0.80-0.85\\
& Regular & 0.75-0.85\\
\hline
Container & Line carrier & 0.62-0.72\\
& Feeder & 0.60-0.70\\
\hline
General cargo & General cargo/Coaster & 0.70-0.85\\
\hline
Roll-on/roll-off cargo & Ro-Ro cargo & 0.55-0.70\\
& Ro-pax & 0.50-0.70\\
\hline
Passenger ship & Cruise ship & 0.60-0.70\\
& Ferry & 0.50-0.70\\
\hline
\end{tabular}
\end{table}
\subsection{Calculate Resistance Components}
The ship's total resistance consists of several components, and there are several methods to estimate each of them. The three main resistance components, which generally constitute the majority of the ship's total resistance, are the calm-water, added wind, and added wave resistance. It is possible to further divide the calm-water resistance into sub-components, namely, skin friction and residual resistance. The total calm-water resistance can be calculated using one of the many well-known empirical methods, like Guldhammer and Harvald (\citet{Guldhammer1970}), updated Guldhammer and Harvald (\citet{Kristensen2017}), Hollenbach (\citet{Hollenbach1998}), Holtrop and Mennen (\citet{Holtrop1982}), etc. These empirical methods are developed using data from numerous model test results of different types of ships, and each one is proven to fit well for several different ship types. The latter makes choosing the right method for a given ship quite complicated.
The easiest way to choose the right calm-water resistance estimation method is to calculate the calm-water resistance with each of these methods and compare it with the corresponding data obtained for the given ship. The calm-water data for a given ship can be obtained from model tests, sea trials, or even by filtering the operational data, obtained from one of the sources discussed here (in section \ref{sec:dataSources}), for near-calm-water conditions. The usual practice is to use the sea trial data, as it is obtained and corrected for near-calm-water conditions and does not suffer from the scale effects seen in model test results. However, sea trials are sometimes conducted only at the high range of speed and at ballast displacement (as shown in figure \ref{fig:speedVsPowerWSPCurves}). Thus, it is recommended to use the near-calm-water filtered (and corrected) operational data to choose the right method, so that a good fit can be ensured over the complete range of speed and displacement.
According to \citet{ITTC2017}, the increase in resistance due to wind loads can be obtained by applying one of three suggested methods, namely, wind tunnel model tests, the STA-JIP method, and Fujiwara's method. If wind tunnel model test results for the vessel are available, they may be considered the most accurate basis for estimating the added wind resistance. Otherwise, the database of wind resistance coefficients established by STA-JIP (\citet{van2013new}) or the regression formula presented by \citet{Fujiwara2005} is recommended. From the STA-JIP database, experimental values for the specific ship type can be obtained, whereas Fujiwara's method is based on a regression analysis of data obtained from several wind tunnel model tests for different ship types.
The two main sets of parameters required to estimate the added wind resistance using any of the above three methods are the incident wind parameters and information regarding the area exposed to the wind. The incident wind parameters, i.e., relative wind speed and direction, can be obtained from onboard measurements or weather hindcast data. In the case of weather hindcast data, the relative wind can be calculated from the hindcast values according to the formulation outlined by \citet{ITTC2017} in section E.1, and in the case of onboard measurements, the relative wind measurements should be corrected for the vertical position of the anemometer according to the instructions given by \citet{ITTC2017} in section E.2, also explained here in section \ref{sec:resultsValChecks}. The information regarding the area exposed to the wind can either be estimated using the general arrangement drawing of the ship or approximated using a regression formula based on data from several ships, presented by \citet{kitamura2017estimation}.
The added wave resistance ($R_{AW}$) can be obtained in a similar manner using one of the several well-established methods for estimating $R_{AW}$. \citet{ITTC2017} recommends conducting seakeeping model tests in regular waves to obtain $R_{AW}$ transfer functions, which can further be used to estimate $R_{AW}$ for the ship in irregular seas. To obtain these transfer functions or $R_{AW}$ empirically for a given ship, it is possible to use physics-based empirical methods like STAWAVE1 and STAWAVE2 (recommended by \citet{ITTC2017}). STAWAVE1 is a simplified method for directly estimating $R_{AW}$ in head wave conditions only, and it requires limited input, including the ship's waterline length, breadth, and the significant wave height. STAWAVE2 is a more advanced method to empirically estimate parametric $R_{AW}$ transfer functions for a ship. The method is developed using an extensive database of seakeeping model test results from numerous ships, but unfortunately it only provides transfer functions for approximately head wave conditions (0 to $\pm$45 degrees off the bow). A method proposed by DTU (\citet{Martinsen2016}; \citet{Taskar2019}; \citet{Taskar2021}) provides transfer functions for head to beam seas, i.e., 0 to $\pm$90 degrees off the bow. Finally, for all wave headings, it may be recommended to use the newly established method by \citet{Liu2020}. There have been several studies assessing and comparing the efficacy of these and several other methods, but no consistent guidelines are provided regarding their applicability.
\subsection{Data Cleaning \& Outlier Detection}
It may be argued that the process of data cleaning and outlier detection should be carried out much earlier in the data processing framework, as proposed by \citet{Dalheim2020DataPrep}. However, it should be noted that all the steps proposed above have to be performed only once for a given dataset, whereas data cleaning is done based on the features selected for further analysis. Since the same dataset can be used for several different analyses, which may use different sets of features, some part of the data cleaning has to be repeated before each analysis to obtain a clean dataset with as many data samples as possible. Moreover, the additional features acquired during the above processing steps may help determine to a better extent whether a suspected sample is actually an outlier or not.
Nevertheless, it may be possible to reduce the workload of the above processing steps by performing some basic data cleaning before some of them. For instance, while calculating the resistance components for in-trip data samples, it is possible to filter out samples with invalid values for one or more of the ship data variables used to calculate these components, like speed-through-water, mean draft (or displacement), etc. This reduces the number of samples for which the new feature has to be calculated. It should also be noted that even if such simple data cleaning (before each step) is not performed, these invalid samples would easily be filtered out in the present step. Thus, the reliability and efficacy of the data processing framework is not affected by performing the data cleaning and outlier detection step at the end.
Most of the methods developed for ship performance monitoring assume that the ship is in a quasi-steady state for each data sample. The quasi-steady assumption implies that the propulsive state of the ship remains more or less constant during the sample recording duration, i.e., the ship is neither accelerating nor decelerating. This is especially critical for the aforementioned time-averaged datasets, as the averaging duration can be substantially long, hiding the effects of accelerations and decelerations. Here, the two-stage steady-state filter, explained in section \ref{sec:interpolateHindcast}, can be applied to the shaft rpm time-series to remove the samples with accelerations and decelerations, resulting in quasi-steady samples. In tandem with the steady-state filter on the shaft rpm time-series, it may also be possible to use the steady-state filter, with relaxed settings, on the speed-over-ground time-series to filter out the samples where the GPS speed (or speed-over-ground) signal suddenly drops or recovers from a dead state, resulting in measurement errors.
As discussed in section \ref{sec:outliers}, the outliers can be divided into two broad categories: (a) contextual outliers, and (b) correlation-defying outliers. The contextual outliers can be identified and resolved by the methods presented and demonstrated by \citet{Dalheim2020DataPrep}, while for correlation-defying outliers, methods like Principal Component Analysis (PCA) and autoencoders can be used. Figure \ref{fig:corrDefyingOutliers} shows the in-service data samples recorded onboard a ship. The data here is already filtered for the quasi-steady assumption, explained above, and for contextual outliers, according to the methods suggested by \citet{Dalheim2020DataPrep}. Thus, the samples highlighted by red circles (around 6.4 MW shaft power in figure \ref{fig:corrDefyingOutliersSP}) can be classified as correlation-defying outliers. The time-series plot (shown in figure \ref{fig:corrDefyingOutliersTS}) clearly indicates that the detected outliers have faulty measurements for the speed-through-water (stw) and speed-over-ground (sog), defying the correlation between these variables and the rest. It is also quite surprising that the same fault occurs in both speed measurements at the same time, considering that they are probably obtained from different sensors.
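As a minimal illustration of the PCA approach (not the exact procedure used here), the reconstruction error in a low-dimensional principal subspace can be thresholded to flag correlation-defying samples; the number of components and the percentile threshold below are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_outliers(X, n_components=3, percentile=99.0):
    """Flag samples with a large reconstruction error in a low-dimensional
    PCA subspace, i.e., samples defying the dominant correlations.

    X is an (n_samples, n_features) array of the selected features; the
    number of components and the threshold are illustrative assumptions.
    """
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Xs)
    reconstruction = pca.inverse_transform(pca.transform(Xs))
    error = np.linalg.norm(Xs - reconstruction, axis=1)
    return error > np.percentile(error, percentile)
\end{verbatim}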
\begin{figure}[ht]
\centering
\begin{subfigure}[]{0.42\linewidth}
\includegraphics[width=\linewidth]{Figures/stw_vs_power_J3.png}
\caption{Log speed (or stw) vs shaft power.} \label{fig:corrDefyingOutliersSP}
\end{subfigure}
\begin{subfigure}[]{0.57\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_TS_128_J3.png}
\caption{Time-series.} \label{fig:corrDefyingOutliersTS}
\end{subfigure}
\caption{Correlation-defying outliers marked with red circles.} \label{fig:corrDefyingOutliers}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
The quality of the data is very important when estimating the performance of a ship. In this study, a streamlined semi-automatic data processing framework is developed for ship performance analysis. The data processing framework can be used to process data from several different sources, like onboard recorded in-service data, AIS data, and noon reports. These three data sources are discussed here in detail along with their inherent problems and associated examples. It is recommended here to use the onboard recorded in-service data for ship performance monitoring over the other data sources, as it is considered more reliable due to its consistent and higher sampling rate. Moreover, the AIS data and noon reports lack some of the critical variables required for ship performance analysis, and they are also susceptible to human error, as some of the data variables recorded there are manually logged by the ship's crew. Nevertheless, all three data sources are known to have several problems and should be processed carefully before any further analysis.
The data processing framework presented in the current work is designed to address and resolve most of the problems found in the above three data sources. It is first recommended to divide the data into trips so that further processing can be performed in a more systematic manner. A simple logic to divide the data into individual trips is outlined for the case where port call information is not available. The weather hindcast (metocean) data is considered important supplementary information, which can be used for data validation and for estimating the environmental loads experienced by the ship. A simple algorithm to effectively interpolate the hindcast data to the time and location of a ship is presented within the data processing framework. The problem of erroneous draft measurements, caused by the Venturi effect, is discussed in detail, and a simple interpolation is recommended to fix these measurements. A more complex case, where the draft or trim is voluntarily adjusted during the voyage without reducing the vessel speed, is also presented. Such a case cannot be resolved with simple interpolation, and therefore an alternative method is suggested for this problem.
Choosing the most suitable methods for estimating the resistance components may also be critical for ship performance analysis. It is, therefore, recommended to carry out some validation checks to find the most suitable methods before adopting them into practice. Such validation checks should be done, wherever possible, using the data obtained from the ship while in service rather than just the sea trial or model test results. Data cleaning and outlier detection is also considered an important step in processing the data. Since cleaning the data requires selecting a subset of features relevant for the analysis, it is recommended to perform this as the last step of the data processing framework, and part of it should be repeated before carrying out a new type of analysis. The presented data processing framework can be systematically and efficiently adopted to process datasets for ship performance analysis. Moreover, the various data processing methods and steps mentioned here can also be used elsewhere to process time-series data from ships or similar sources, which can further be used for a variety of tasks.
\section{Introduction}
Interplay between interactions and disorders has been one of the
central issues in modern condensed matter physics
\cite{Interaction_Disorder_Book,RMP_Disorder_Interaction}. In the
weakly disordered metal the lowest-order interaction-correction
was shown to modify the density of states at the Fermi energy in
the diffusive regime \cite{AAL}, giving rise to non-Fermi liquid
physics, particularly in low dimensions less than $d = 3$, while
further enhancement of electron correlations was predicted to
cause ferromagnetism \cite{Disorder_FM}. In an insulating phase
spin glass appears ubiquitously, where the average of the spin
moment vanishes in the long time scale, but local spin
correlations become finite, making the system away from
equilibrium \cite{SG_Review}.
An outstanding question is the role of disorder in the vicinity of
quantum phase transitions
\cite{Disorder_QCP_Review1,Disorder_QCP_Review2}, where effective
long-range interactions associated with critical fluctuations
appear to cause non-Fermi liquid physics
\cite{Disorder_QCP_Review2,QCP_Review}. Unfortunately, complexity
of this problem did not allow comprehensive understanding until
now. In the vicinity of the weakly disordered ferromagnetic
quantum critical point, an electrical transport-coefficient has
been studied, where the crossover temperature from the ballistic
to diffusive regimes is much lowered due to critical fluctuations,
compared with the disordered Fermi liquid
\cite{Paul_Disorder_FMQCP}. Generally speaking, the stability of
the quantum critical point should be addressed, as governed by the
Harris criterion \cite{Harris}. When the Harris criterion is not
satisfied, three possibilities are expected to arise
\cite{Disorder_QCP_Review2}. The first two possibilities are
emergence of new fixed points, associated with either a
finite-randomness fixed point satisfying the Harris criterion at
this new fixed point or an infinite randomness fixed point
exhibiting activated scaling behaviors. The last possibility is
that quantum criticality can be destroyed, replaced with a smooth
crossover. In addition, even away from the quantum critical point
the disordered system may show non-universal power law physics,
called the Griffiths phase \cite{Griffiths}. Effects of rare
regions are expected to be strong near the infinite randomness
fixed point and the disorder-driven crossover region
\cite{Disorder_QCP_Review2}.
This study focuses on the role of strong randomness in the heavy
fermion quantum transition. Heavy fermion quantum criticality is
believed to result from competition between Kondo and RKKY
(Ruderman-Kittel-Kasuya-Yosida) interactions, where larger Kondo
couplings give rise to a heavy fermion Fermi liquid while larger
RKKY interactions cause an antiferromagnetic metal
\cite{Disorder_QCP_Review2,QCP_Review,HF_Review}. Generally
speaking, there are two competing view points for this problem.
The first direction is to regard the heavy fermion transition as
an antiferromagnetic transition, where critical spin fluctuations
appear from heavy fermions. The second view point is that the
transition is identified with breakdown of the Kondo effect, where
Fermi surface fluctuations are critical excitations. The first
scenario is described by the Hertz-Moriya-Millis (HMM) theory in
terms of heavy electrons coupled with antiferromagnetic spin
fluctuations, the standard model for quantum criticality
\cite{HMM}. There are two ways to realize the second scenario
depending on how to describe Fermi surface fluctuations. The first
way is to express Fermi surface fluctuations in terms of a
hybridization order parameter called holon in the slave-boson
context \cite{KB_z2,KB_z3}. This is usually referred to as the Kondo
breakdown scenario. The second one is to map the lattice problem
into the single site one resorting to the dynamical mean-field
theory (DMFT) approximation \cite{DMFT_Review}, where order
parameter fluctuations are critical only in the time direction.
This description is called the locally critical scenario
\cite{EDMFT}.
Each scenario predicts its own critical physics. Both the HMM
theory and the Kondo breakdown model are based on the standard
picture that quantum criticality arises from long-wave-length
critical fluctuations while the locally quantum critical scenario
has its special structure, that is, locally (space) critical
(time). Critical fluctuations are described by $z = 2$ in the HMM
theory due to finite-wave vector ordering \cite{HMM} while by $z =
3$ in the Kondo breakdown scenario associated with uniform
"ordering" \cite{KB_z3}, where $z$ is the dynamical exponent
expressing the dispersion relation for critical excitations. Thus,
quantum critical physics characterized by scaling exponents is
completely different between these two models. In addition to
qualitative agreements with experiments depending on compounds
\cite{Disorder_QCP_Review2}, these two theories do not allow the
$\omega/T$ scaling in the dynamic susceptibility of their critical
modes because both theories live above their upper critical
dimensions. On the other hand, the locally critical scenario gives
rise to the $\omega/T$ scaling behavior for the dynamic spin
susceptibility \cite{EDMFT} while it seems to have some
difficulties associated with some predictions for transport
coefficients.
We start to discuss an Ising model with Gaussian randomness for
its exchange coupling, called the Edwards-Anderson model
\cite{SG_Review}. Using the replica trick and performing the
saddle-point analysis, one can find a spin glass phase when the
average value of the exchange interaction vanishes, characterized
by the Edwards-Anderson order parameter without magnetization.
Applying this concept to the Heisenberg model with Gaussian
randomness, quantum fluctuations should be incorporated to take
into account the Berry phase contribution carefully. It was
demonstrated that quantum corrections in the DMFT approximation
render the spin glass phase unstable at finite temperatures,
resulting in a spin liquid state when the average value of the
exchange coupling vanishes \cite{Sachdev_SG}. It should be noted
that this spin liquid state differs from the spin liquid phase in
frustrated spin systems in that the former state
originates from critical single-impurity dynamics while the latter
phase results from non-trivial spatial spin correlations described
by gauge fluctuations \cite{Spin_Liquid_Review}. The spin liquid
phase driven by strong randomness is characterized by its critical
spin spectrum, given by the $\omega/T$ scaling local spin
susceptibility \cite{Sachdev_SG}.
Introducing hole doping into the spin liquid state, Parcollet and
Georges examined the disordered t-J model within the DMFT
approximation \cite{Olivier}. Using the U(1) slave-boson
representation, they found marginal Fermi-liquid phenomenology,
where the electrical transport is described by the $T$-linear
resistivity, resulting from the marginal Fermi-liquid spectrum for
collective modes, here the $\omega/T$ scaling in the local spin
susceptibility. They tried to connect this result with physics of
high T$_{c}$ cuprates.
In this study we introduce random hybridization with conduction
electrons into the spin liquid state. Our original motivation was
to explain both the $\omega/T$ scaling in the spin spectrum
\cite{INS_Local_AF} and the typical $T$-linear resistivity
\cite{LGW_F_QPT_Nature} near the heavy fermion quantum critical
point. In particular, the presence of disorder leads us to the
DMFT approximation naturally \cite{Moore_Dis_DMFT}, expected to
result in the $\omega/T$ scaling for the spin spectrum
\cite{Sachdev_SG}.
Starting from an Anderson lattice model with disorder, we derive
an effective local field theory in the DMFT approximation, where
randomness is introduced into both hybridization and RKKY
interactions. Performing the saddle-point analysis in the U(1)
slave-boson representation, we reveal its phase diagram which
shows a quantum phase transition from a spin liquid state to a
local Fermi liquid phase. In contrast with the clean limit of the
Anderson lattice model \cite{KB_z2,KB_z3}, the effective
hybridization given by holon condensation turns out to vanish,
resulting from the zero mean value of the hybridization coupling
constant. However, we show that the holon density becomes finite
when variance of hybridization is sufficiently larger than that of
the RKKY coupling, giving rise to the Kondo effect. On the other
hand, when the variance of hybridization becomes smaller than that
of the RKKY coupling, the Kondo effect disappears, resulting in a
fully symmetric paramagnetic state, adiabatically connected with
the spin liquid state of the disordered Heisenberg model
\cite{Sachdev_SG}.
Our contribution compared with the previous works
\cite{Kondo_Disorder} is to introduce RKKY interactions between
localized spins and to observe the quantum phase transition in the
heavy fermion system with strong randomness. The previous works
focused on how the non-Fermi liquid physics can appear in the
Kondo singlet phase away from quantum criticality
\cite{Kondo_Disorder}. A huge distribution of the Kondo
temperature $T_{K}$ turns out to cause such non-Fermi liquid
physics, originating from the finite density of unscreened local
moments with almost vanishing $T_K$, where the $T_{K}$
distribution may result from either the Kondo disorder for
localized electrons or the proximity of the Anderson localization
for conduction electrons. Because RKKY interactions are not
introduced in these studies, there always exist finite $T_{K}$
contributions. On the other hand, the presence of RKKY
interactions gives rise to breakdown of the Kondo effect, making
$T_{K} = 0$ identically in the strong RKKY coupling phase.
In Ref. [\onlinecite{Kondo_RKKY_Disorder}] the role of random RKKY
interactions was examined, where the Kondo coupling is fixed while
the chemical potential for conduction electrons is introduced as a
random variable with its variance $W$.
Increasing the randomness of the electron chemical potential, the
Fermi liquid state in $W < W_{c}$ turns into the spin liquid phase
in $W > W_{c}$, which displays the marginal Fermi-liquid
phenomenology due to random RKKY interactions
\cite{Kondo_RKKY_Disorder}, where the Kondo effect is suppressed
due to the proximity of the Anderson localization for conduction
electrons \cite{Kondo_Disorder}. However, the presence of finite
Kondo couplings still gives rise to Kondo screening although the
$T_{K}$ distribution differs from that in the Fermi liquid state,
associated with the presence of random RKKY interactions. In
addition, the spin liquid state was argued to be unstable against
the spin glass phase at low temperatures, possibly resulting from the
fixed Kondo interaction. On the other hand, we do not take into
account the Anderson localization for conduction electrons, and
introduce random hybridization couplings. As a result, the Kondo
effect is completely destroyed in the spin liquid phase, so the
quantum critical physics differs from the previous study of Ref.
[\onlinecite{Kondo_RKKY_Disorder}]. In addition, the spin liquid
phase is stable at finite temperatures in the present study
\cite{Sachdev_SG}.
We investigate the quantum critical point beyond the mean-field
approximation. Introducing quantum corrections fully
self-consistently in the non-crossing approximation
\cite{Hewson_Book}, we prove that the local charge susceptibility
has exactly the same critical exponent as the local spin
susceptibility. This is quite unusual because these correlation
functions are symmetry-unrelated at the lattice scale. This
reminds us of deconfined quantum criticality \cite{Senthil_DQCP},
where the Landau-Ginzburg-Wilson forbidden continuous transition
may appear with an enhanced emergent symmetry. Actually, the
continuous quantum transition was proposed between the
antiferromagnetic phase and the valence bond solid state
\cite{Senthil_DQCP}. In the vicinity of the quantum critical point
the spin-spin correlation function of the antiferromagnetic
channel has the same scaling exponent as the valence-bond
correlation function, suggesting an emergent O(5) symmetry beyond
the symmetry O(3)$\times$Z$_{4}$ of the lattice model
\cite{Tanaka_SO5} and confirmed by the Monte Carlo simulation of
the extended Heisenberg model \cite{Sandvik}. Tanaka and Hu
proposed an effective O(5) nonlinear $\sigma$ model with the
Wess-Zumino-Witten term as an effective field theory for the
Landau-Ginzburg-Wilson forbidden quantum critical point
\cite{Tanaka_SO5}, expected to allow fractionalized spin
excitations due to the topological term. This proposal can be
considered as a generalization of an antiferromagnetic spin chain,
where an effective field theory is given by an O(4) nonlinear
$\sigma$ model with the Wess-Zumino-Witten term, which gives rise
to fractionalized spin excitations called spinons, identified with
topological solitons \cite{Tsvelik_Book}. Applying this concept to
the present quantum critical point, the enhanced emergent symmetry
between charge (holon) and spin (spinons) local modes leads us to
propose a novel duality between the Kondo singlet phase and the
critical local moment state beyond the Landau-Ginzburg-Wilson
paradigm. We suggest an O(4) nonlinear $\sigma$ model on a
nontrivial manifold as an effective field theory for this local
quantum critical point, where the local spin and charge densities
form an O(4) vector with a constraint. The symmetry enhancement
serves as the mechanism of electron fractionalization in critical
impurity dynamics, where such fractionalized excitations are
identified with topological excitations.
This paper is organized as follows. In section II we introduce an
effective disordered Anderson lattice model and perform the DMFT
approximation with the replica trick. Equation (\ref{DMFT_Action})
is the main result in this section. In section III we perform the
saddle-point analysis based on the slave-boson representation and
obtain the phase diagram showing breakdown of the Kondo effect
driven by the RKKY interaction. We show spectral functions,
self-energies, and local spin susceptibility in the Kondo phase.
Figures \ref{fig1}-\ref{fig3} with Eqs. (\ref{Sigma_C_MFT})-(\ref{Sigma_FC_MFT})
and (\ref{Lambda_MFT})-(\ref{Constraint_MFT}) are the main results in
this section. In section IV we investigate the nature of the
impurity quantum critical point based on the non-crossing
approximation beyond the previous mean-field analysis. We solve
self-consistent equations analytically and find power-law scaling
solutions. As a result, we uncover the marginal Fermi-liquid
spectrum for the local spin susceptibility. We propose an
effective field theory for the quantum critical point and discuss
the possible relationship with the deconfined quantum critical
point. In section V we summarize our results.
The present study extends our recent publication
\cite{Tien_Kim_PRL}, showing both physical and mathematical
details.
\section{An effective DMFT action from an Anderson lattice model with strong randomness}
We start from an effective Anderson lattice model \bqa H &=& -
\sum_{ij,\sigma} t_{ij} c^{\dagger}_{i\sigma} c_{j\sigma} + E_{d}
\sum_{i\sigma} d^{\dagger}_{i\sigma} d_{i\sigma} \nn &+& \sum_{ij}
J_{ij} \mathbf{S}_{i} \cdot \mathbf{S}_{j} + \sum_{i\sigma} (V_{i}
c^{\dagger}_{i\sigma} d_{i\sigma} + {\rm H.c.}) , \label{ALM} \eqa
where $t_{ij} = \frac{t}{M \sqrt{z}}$ is a hopping integral for
conduction electrons and \bqa && J_{ij} = \frac{J}{\sqrt{z M}}
\varepsilon_{i}\varepsilon_{j} , ~~~~~ V_{i} = \frac{V}{\sqrt{M}}
\varepsilon_{i} \nonumber \eqa are random RKKY and hybridization
coupling constants, respectively. Here, $M$ is the spin degeneracy
and $z$ is the coordination number. Randomness is given by the
Gaussian distribution \bqa \overline{\varepsilon_{i}} = 0 , ~~~~~
\overline{\varepsilon_{i}\varepsilon_{j}} = \delta_{ij} . \eqa
The disorder average can be performed in the replica trick
\cite{SG_Review}. Performing the disorder average in the Gaussian
distribution function, we reach the following expression for the
replicated effective action
\begin{eqnarray}
&& \overline{Z^n} = \int \mathcal{D}c_{i\sigma}^{a}
\mathcal{D}d_{i\sigma}^{a} e^{-\bar{S}_n } , \nn &&
\overline{S}_{n} = \int\limits_{0}^{\beta} d\tau \sum_{ij\sigma a}
c^{\dagger a}_{i\sigma}(\tau) ((\partial_{\tau} - \mu)\delta_{ij}
+ t_{ij}) c^{a}_{j\sigma}(\tau) \nn && + \int\limits_{0}^{\beta}
d\tau \sum_{i\sigma a}d^{\dagger a}_{i\sigma}(\tau)
(\partial_{\tau} + E_d) d^{a}_{i\sigma}(\tau) \nn && -
\frac{J^2}{2 z M} \int\limits_{0}^{\beta} d\tau
\int\limits_{0}^{\beta} d\tau' \sum_{ijab}
\mathbf{S}^{a}_{i}(\tau) \cdot \mathbf{S}^{a}_{j}(\tau) \;\;
\mathbf{S}^{b}_{i}(\tau') \cdot \mathbf{S}^{b}_{j}(\tau') \nn && -
\frac{V^{2}}{2 M} \int\limits_{0}^{\beta} d\tau
\int\limits_{0}^{\beta} d\tau' \sum_{i \sigma \sigma' ab} \big(
c^{\dagger a}_{i\sigma}(\tau) d^{a}_{i\sigma}(\tau) + d^{\dagger
a}_{i\sigma}(\tau) c^{a}_{i\sigma}(\tau)\big) \nn &&
~~~~~~~~~~~~~~~ \times \big( c^{\dagger b}_{i\sigma'}(\tau')
d^{b}_{i\sigma'}(\tau') + d^{\dagger b}_{i\sigma'}(\tau')
c^{b}_{i\sigma'}(\tau')\big) , \label{DALM}
\end{eqnarray}
where $\sigma, \sigma' = 1, ..., M$ are spin indices and $a, b =
1, ..., n$ are replica indices. In appendix A we derive this
replicated action from Eq. (\ref{ALM}).
One may ask about the role of randomness in $E_{d}$, generating \bqa &&
- \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{i\sigma\sigma' ab} d^{\dagger a}_{i\sigma}(\tau)
d^{a}_{i\sigma}(\tau) d^{\dagger b}_{i\sigma'}(\tau')
d^{b}_{i\sigma'}(\tau') , \nonumber \eqa where density
fluctuations are involved. This contribution is expected to
support the Kondo effect because such local density fluctuations
help hybridization with conduction electrons. In this paper we fix
$E_{d}$ at a constant value in the Kondo limit, which is allowed as long as
its variance is not large enough to drive the system out of the Kondo limit.
One can introduce randomness in the hopping integral of conduction
electrons. However, this contribution gives rise to the same effect as
the DMFT approximation in the $z\rightarrow \infty$ Bethe lattice
\cite{Olivier}. In this respect randomness in the hopping integral
is naturally introduced into the present DMFT study.
The last disorder contribution can arise from randomness in the
electron chemical potential, expected to cause the Anderson
localization for conduction electrons. Actually, this results in
the metal-insulator transition at the critical disorder strength,
suppressing the Kondo effect in the insulating phase. Previously,
the Griffiths phase for non-Fermi liquid physics has been
attributed to the proximity effect of the Anderson localization
\cite{Kondo_Disorder}. In this work we do not consider the
Anderson localization for conduction electrons.
We observe that the disorder average neutralizes spatial
correlations except for the hopping term of conduction electrons.
This leads us to the DMFT formulation, resulting in an effective
local action for the strong random Anderson lattice model
\begin{eqnarray}
&& \bar{S}_{n}^{\rm eff} = \int_{0}^{\beta} d\tau \Bigl\{
\sum_{\sigma a} c^{\dagger a}_{\sigma}(\tau) (\partial_{\tau} -
\mu) c^{a}_{\sigma}(\tau) \nn && + \sum_{\sigma a}d^{\dagger
a}_{\sigma}(\tau) (\partial_{\tau} + E_d) d^{a}_{\sigma}(\tau)
\Bigr\} \nn && -\frac{V^2}{2 M} \int_{0}^{\beta} d\tau
\int_{0}^{\beta} d\tau' \sum_{\sigma \sigma' a b} \big[ c^{\dagger
a}_{\sigma}(\tau) d^{a}_{\sigma}(\tau) + d^{\dagger
a}_{\sigma}(\tau) c^{a}_{\sigma}(\tau)\big] \nn &&
~~~~~~~~~~~~~~~~~~~~~~~~~ \times \big[ c^{\dagger
b}_{\sigma'}(\tau') d^{b}_{\sigma'}(\tau') + d^{\dagger
b}_{\sigma'}(\tau') c^{b}_{\sigma'}(\tau')\big] \nn && -
\frac{J^2}{2 M} \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{ab} \sum_{\alpha\beta\gamma\delta} S^{a}_{\alpha\beta}(\tau)
R^{ab}_{\beta\alpha\gamma\delta}(\tau-\tau')
S^{b}_{\delta\gamma}(\tau') \nn && + \frac{t^2}{M^2}
\int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau' \sum_{ab\sigma}
c^{\dagger a}_{\sigma}(\tau) G^{ab}_{c \;
\sigma\sigma}(\tau-\tau') c^{b}_{\sigma}(\tau' ) ,
\label{DMFT_Action}
\end{eqnarray}
where $G^{ab}_{c \; ij\sigma\sigma}(\tau-\tau')$ is the local
Green's function for conduction electrons and $R^{ab}_{\beta
\alpha \gamma \delta}(\tau-\tau')$ is the local spin
susceptibility for localized spins, given by \bqa G^{ab}_{c \;
ij\sigma\sigma}(\tau-\tau') &=& - \langle T_{\tau} [
c^{a}_{i\sigma}(\tau) c^{\dagger b}_{j\sigma}(\tau') ] \rangle ,
\nn R^{ab}_{\beta \alpha \gamma \delta}(\tau-\tau') &=& \langle
T_{\tau} [S^{a}_{\beta\alpha}(\tau) S^{b}_{\gamma\delta}(\tau')]
\rangle , \label{Local_Green_Functions} \eqa respectively. Eq.
(\ref{DMFT_Action}) with Eq. (\ref{Local_Green_Functions}) serves as
a completely self-consistent framework for this problem.
Derivation of Eq. (\ref{DMFT_Action}) from Eq. (\ref{DALM}) is
shown in appendix B.
This effective model has two well known limits, corresponding to
the disordered Heisenberg model \cite{Sachdev_SG} and the
disordered Anderson lattice model without RKKY interactions
\cite{Kondo_Disorder}, respectively. In the former case a spin
liquid state emerges due to strong quantum fluctuations while a
local Fermi liquid phase appears at low temperatures in the latter
case as long as the $T_{K}$ distribution is not too broad. In this
respect it is natural to consider a quantum phase
transition driven by the ratio between variances for the RKKY and
hybridization couplings.
\section{Phase diagram}
\subsection{Slave boson representation and mean field approximation}
We solve the effective DMFT action based on the U(1) slave boson
representation
\begin{eqnarray}
d^{a}_{\sigma} &=& \hat{b}^{\dagger a} f^{a}_{\sigma} , \label{SB_Electron} \\
S_{\sigma\sigma'}^{a} &=& f^{a\dagger}_{\sigma} f_{\sigma'}^{a} -
q_{0}^{a} \delta_{\sigma \sigma'} \label{SB_Spin}
\end{eqnarray}
with the single occupancy constraint $|b^{a}|^2 + \sum_{\sigma}
f^{a\dagger}_{\sigma}(\tau) f^{a}_{\sigma}(\tau) = 1$, where $q_{0}^{a} =
\sum_{\sigma}f^{a\dagger}_{\sigma} f_{\sigma}^{a}/M $.
In the mean field approximation we replace the holon operator
$\hat{b}^{a}$ with its expectation value $\langle \hat{b}^{a}
\rangle \equiv b^{a}$. Then, the effective action Eq.
(\ref{DMFT_Action}) becomes
\begin{widetext}
\begin{eqnarray}
&& \bar{S}_{n}^{\rm eff} = \int_{0}^{\beta} d\tau \Bigl\{
\sum_{\sigma a} c^{\dagger a}_{\sigma}(\tau) (\partial_{\tau} -
\mu) c^{a}_{\sigma}(\tau) + \sum_{\sigma a} f^{\dagger
a}_{\sigma}(\tau) (\partial_{\tau} + E_d) f^{a}_{\sigma}(\tau) +
\sum_{a} \lambda^{a} (|b^{a}|^2 + \sum_{\sigma}
f^{a\dagger}_{\sigma}(\tau) f^{a}_{\sigma}(\tau)- 1) \Bigr\} \nonumber \\
&& -\frac{V^2}{2 M} \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{\sigma \sigma' a b} \big[ c^{\dagger a}_{\sigma}(\tau)
f^{a}_{\sigma}(\tau) (b^{a})^{*} + b^{a} f^{\dagger
a}_{\sigma}(\tau) c^{a}_{\sigma}(\tau)\big] \big[ c^{\dagger
b}_{\sigma'}(\tau') f^{b}_{\sigma'}(\tau') (b^{b})^{*} + b^{b}
f^{\dagger b}_{\sigma'}(\tau') c^{b}_{\sigma'}(\tau')\big]
\nonumber \\ &&-\frac{J^2}{2 M} \int_{0}^{\beta} d\tau
\int_{0}^{\beta} d\tau' \sum_{ab} \sum_{\alpha\beta\gamma\delta}
\big[f^{\dagger a}_{\alpha}(\tau) f^{a}_{\beta}(\tau) -
q_{\alpha}^{a} \delta_{\alpha\beta} \big]
R^{ab}_{\beta\alpha\gamma\delta}(\tau-\tau') \big[f^{\dagger
b}_{\delta}(\tau') f^{b}_{\gamma}(\tau') - q_{\gamma}^{b}
\delta_{\gamma\delta} \big] \nonumber \\ && + \frac{t^2}{M^2}
\int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau' \sum_{ab\sigma}
c^{\dagger a}_{\sigma}(\tau) G^{ab}_{\sigma}(\tau-\tau')
c^{b}_{\sigma}(\tau' ) , \label{SB_MFT}
\end{eqnarray}
\end{widetext}
where $\lambda^{a}$ is a Lagrange multiplier field imposing the
constraint and $q_{\alpha}^{a} =\langle f^{\dagger a}_{\alpha}
f^{a}_{\alpha} \rangle$.
Taking the $M\rightarrow \infty$ limit, we obtain self-consistent
equations for self-energy corrections,
\begin{eqnarray}
\Sigma_{c \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{f \;
\sigma\sigma'}^{\; a b}(\tau) (b^{a})^{*} b^b + \frac{t^2}{M^2}
\delta_{\sigma\sigma'} G_{c \; \sigma}^{\; a b}(\tau) ,
\\ \Sigma_{f \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{c
\; \sigma\sigma'}^{\; a b}(\tau) (b^{b})^{*} b^a \nn &+&
\frac{J^2}{2 M} \sum_{s s'} G_{f \; s s'}^{\; a b}(\tau) [
R^{ab}_{s\sigma \sigma' s'}(\tau) + R^{ba}_{\sigma' s' s
\sigma}(-\tau) ] , \nn \\ \Sigma_{cf \; \sigma\sigma'}^{\;\;
ab}(\tau) &=& - \delta_{ab} \delta_{\sigma\sigma'}\delta(\tau)
\frac{V^2}{M} \sum_{s c} [\langle f^{\dagger c}_{s} c^{c}_{s}
\rangle b^c + {\rm c.c.} ] (b^{a})^{*} \nn &+& \frac{V^2}{M}
G_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) (b^a b^b)^{*} , \\
\Sigma_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) &=& - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c}
[\langle f^{\dagger c}_{s} c^{c}_{s} \rangle b^c + {\rm c.c.} ]
b^{a} \nn &+& \frac{V^2}{M} G_{cf \; \sigma\sigma'}^{\;\;
ab}(\tau) b^a b^b ,
\end{eqnarray} respectively, where local Green's functions are given by
\begin{eqnarray}
G_{c \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
c^{a}_{\sigma}(\tau) c^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{f \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
f^{a}_{\sigma}(\tau) f^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{cf \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
c^{a}_{\sigma}(\tau) f^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{fc \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
f^{a}_{\sigma}(\tau) c^{\dagger b}_{\sigma'} (0) \rangle .
\end{eqnarray}
In the paramagnetic and replica-symmetric phase these Green's
functions are diagonal in the spin and replica indices, i.e.,
$G^{ab}_{x \sigma\sigma'}(\tau)=\delta_{ab}\delta_{\sigma\sigma'}
G_{x}(\tau)$ with $x=c,f,cf,fc$. Then, we obtain the Dyson
equation
\begin{widetext}
\begin{eqnarray}
\left(\begin{array}{cc} G_{c}(i \omega_l) & G_{fc}(i \omega_l) \\
G_{cf}(i \omega_l) & G_{f}(i \omega_l)
\end{array} \right) = \left( \begin{array}{cc}
i\omega_l + \mu - \Sigma_{c}(i \omega_l) & - \Sigma_{cf}(i
\omega_l) \\
- \Sigma_{fc}(i \omega_l) & i\omega_l - E_d -\lambda -
\Sigma_{f}(i \omega_l)
\end{array} \right)^{-1} ,
\end{eqnarray}
\end{widetext}
where $\omega_l=(2 l+1) \pi T$ with $l$ integer. Accordingly, Eqs.
(9)-(12) are simplified as follows
\begin{eqnarray}
\Sigma_{c}(i\omega_l) &=& \frac{V^2}{M} G_{f}(i\omega_l) |b|^2 +
\frac{t^2}{M^2} G_{c}(i\omega_l) , \label{Sigma_C_MFT} \\
\Sigma_{f}(i\omega_l) &=& \frac{V^2}{M} G_{c}(i\omega_l) |b|^2 +
\frac{J^2}{2 M} T \sum_{s} \sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn
&\times& [R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s
s\sigma}(-i\nu_m) ] , \label{Sigma_F_MFT} \\
\Sigma_{cf}(i\omega_l) &=& \frac{V^2}{M} G_{fc}(i\omega_l)
(b^2)^{*} - n \frac{V^2}{M} (b^2)^{*} \sum_s \langle
f^{\dagger}_{s} c_{s}
+ c^{\dagger}_{s} f_{s} \rangle , \label{Sigma_CF_MFT} \nn \\
\Sigma_{fc}(i\omega_l) &=& \frac{V^2}{M} G_{cf}(i\omega_l) b^2 - n
\frac{V^2}{M} b^2 \sum_s \langle f^{\dagger}_{s} c_{s} +
c^{\dagger}_{s} f_{s} \rangle \label{Sigma_FC_MFT}
\end{eqnarray} in the frequency space.
Note that $n$ is the number of replicas and the last terms in
Eqs.~(\ref{Sigma_CF_MFT})-(\ref{Sigma_FC_MFT}) vanish in the limit
of $n \rightarrow 0$. $R_{s\sigma\sigma s}(i\nu_m)$ is the local
spin susceptibility, given by
\begin{eqnarray}
R_{\sigma s s \sigma}(\tau) = - G_{f \sigma}(-\tau) G_{f s}(\tau)
\label{Spin_Corr_MFT}
\end{eqnarray} after Fourier transformation to Matsubara frequency.
The self-consistent equation for boson condensation is
\begin{eqnarray}
&& b \Big[ \lambda + 2 V^2 T \sum_{\omega_l} G_{c}(i\omega_l)
G_{f}(i\omega_l) \nn && + V^2 T \sum_{\omega_l} \Bigl\{
G_{fc}(i\omega_l) G_{fc}(i\omega_l) + G_{cf}(i\omega_l)
G_{cf}(i\omega_l)\Bigr\} \Big] =0 . \label{Lambda_MFT} \nn
\end{eqnarray}
The constraint equation is given by
\begin{eqnarray}
|b|^2 + \sum_{\sigma} \langle f^{\dagger}_{\sigma} f_{\sigma}
\rangle = 1 . \label{Constraint_MFT}
\end{eqnarray}
The main difference between the clean and disordered cases is that
the off diagonal Green's function $G_{fc}(i\omega_l)$ should
vanish in the presence of randomness in $V$ with its zero mean
value while it is proportional to the condensation $b$ when the
average value of $V$ is finite. In the present situation we find
$b^{a} = \langle f^{a\dagger}_{\sigma} c_{\sigma}^{a} \rangle = 0$
while $(b^{a})^{*}b^{b} = \langle f^{a\dagger}_{\sigma}
c_{\sigma}^{a} c_{\sigma'}^{b\dagger} f_{\sigma'}^{b} \rangle
\equiv |b|^{2} \delta_{ab} \not= 0$. As a result, Eqs.
(\ref{Sigma_CF_MFT}) and (\ref{Sigma_FC_MFT}) vanish identically on
both the left- and right-hand sides. This implies that the Kondo
phase is characterized not by holon condensation but by a finite
density of holons. It is important to notice that this gauge-invariant
order parameter does not cause any kind of symmetry breaking, as it
should not, since the Kondo effect breaks no symmetry.
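To make the structure of this self-consistency explicit, the following Python sketch iterates the diagonal Dyson equation together with Eqs. (\ref{Sigma_C_MFT}) and (\ref{Sigma_F_MFT}) on a Matsubara grid. It is only a schematic illustration: the parameters, the fixed values of $\lambda$ and $|b|^{2}$ (which in the full calculation are determined self-consistently from Eqs. (\ref{Lambda_MFT}) and (\ref{Constraint_MFT})), and the omission of the RKKY term ($J=0$) are simplifying assumptions, not the procedure used for the figures below.
\begin{verbatim}
import numpy as np

# Schematic fixed-point iteration of the replica-diagonal mean-field
# equations (illustrative parameters only).
t_hop, V, M = 1.0, 0.5, 2
mu, E_d, lam, b2, T = 0.0, -0.7, 1.0, 0.1, 0.01   # lam and b2 fixed by hand here
wn = (2 * np.arange(-512, 512) + 1) * np.pi * T   # fermionic Matsubara frequencies

G_c = 1.0 / (1j * wn + mu)                        # bare starting points
G_f = 1.0 / (1j * wn - E_d - lam)

for it in range(500):
    Sigma_c = (V**2 / M) * b2 * G_f + (t_hop**2 / M**2) * G_c   # conduction self-energy
    Sigma_f = (V**2 / M) * b2 * G_c                             # f self-energy, J = 0
    G_c_new = 1.0 / (1j * wn + mu - Sigma_c)                    # diagonal Dyson equation
    G_f_new = 1.0 / (1j * wn - E_d - lam - Sigma_f)
    if max(np.max(np.abs(G_c_new - G_c)),
           np.max(np.abs(G_f_new - G_f))) < 1e-10:
        break
    G_c = 0.5 * (G_c + G_c_new)                                 # linear mixing
    G_f = 0.5 * (G_f + G_f_new)

print(it, G_f[512])   # converged G_f at the lowest positive Matsubara frequency
\end{verbatim}
The full calculation closes this loop with the spin self-energy of Eq. (\ref{Sigma_F_MFT}) and with the real-frequency analysis described in the next subsection.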
\subsection{Numerical analysis}
We use an iteration method in order to solve the mean field
equations (\ref{Sigma_C_MFT}), (\ref{Sigma_F_MFT}),
(\ref{Sigma_CF_MFT}), (\ref{Sigma_FC_MFT}), (\ref{Lambda_MFT}),
and (\ref{Constraint_MFT}). For a given $E_d+\lambda$, we use
iterations to find all Green's functions from Eqs.
(\ref{Sigma_C_MFT})-(\ref{Sigma_FC_MFT}) with Eq.
(\ref{Spin_Corr_MFT}) and $b^2$ from Eq.~(\ref{Lambda_MFT}). Then,
we use Eq.~(\ref{Spin_Corr_MFT}) to calculate $\lambda$ and $E_d$.
We adjust the value of $E_d+\lambda$ in order to obtain the
desired value for $E_d$. Using the obtained $\lambda$ and $b^2$,
we calculate the Green's functions on the real frequency axis by
iteration. In the real-frequency calculation we introduce the
following functions \cite{Saso}
\begin{eqnarray}
\alpha_{\pm}(t)=\int_{-\infty}^{\infty} d\omega e^{-i \omega t}
\rho_{f}(\omega) f(\pm \omega/T),
\end{eqnarray}
where $\rho_{f}(\omega) = - {\rm Im} G_{f}(\omega+i0^{+})/\pi$ is
the density of states for f-electrons, and $f(x)=1/(\exp(x)+1)$ is
the Fermi-Dirac distribution function. Then, the self-energy
correction from spin correlations is expressed as follows
\begin{eqnarray}
&& \Sigma_{J}(i\omega_l) \equiv \frac{J^2}{2 M} T \sum_{s}
\sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn && ~~~~~~~~~~ \times
[R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s s\sigma}(-i\nu_m) ] \nn
&& = - i J^2 \int_{0}^{\infty} d t e^{i\omega t} \Bigl( [
\alpha_{+}(t)]^2 \alpha_{-}^{*}(t) + [ \alpha_{-}(t)]^2
\alpha_{+}^{*}(t) \Bigr) . \nn
\end{eqnarray} Performing the Fourier transformation, we
calculate $\alpha_{\pm}(t)$ and obtain $\Sigma_{J}(\omega)$.
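For illustration, the expression above for $\Sigma_{J}$ can be evaluated by direct numerical quadrature as sketched below in Python. The frequency and time grids, the finite cutoff of the $t$-integral, and the placeholder density of states $\rho_{f}(\omega)$ are assumptions made only for this example; in the actual calculation $\rho_{f}$ is the self-consistent f-electron density of states.
\begin{verbatim}
import numpy as np
from scipy.special import expit      # stable Fermi function: f(x) = expit(-x)

J, T = 0.5, 0.01
w = np.linspace(-6.0, 6.0, 1201); dw = w[1] - w[0]     # frequency grid
t = np.linspace(0.0, 200.0, 1201); dt = t[1] - t[0]    # time grid (finite cutoff)

rho_f = np.exp(-w**2) / np.sqrt(np.pi)   # placeholder DOS, NOT the self-consistent one

def alpha(sign):
    # alpha_pm(t) = int dw e^{-i w t} rho_f(w) f(pm w / T)
    phase = np.exp(-1j * np.outer(t, w))
    return phase @ (rho_f * expit(-sign * w / T)) * dw

a_p, a_m = alpha(+1), alpha(-1)
kernel = a_p**2 * np.conj(a_m) + a_m**2 * np.conj(a_p)

def Sigma_J(omega):
    # Sigma_J(omega) = -i J^2 int_0^infty dt e^{i omega t}
    #                  [alpha_+^2 alpha_-^* + alpha_-^2 alpha_+^*]
    return -1j * J**2 * np.sum(np.exp(1j * omega * t) * kernel) * dt

print(Sigma_J(0.1))
\end{verbatim}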
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{crit.eps}
\caption{The phase diagram of the strongly disordered Anderson
lattice model in the DMFT approximation ($E_d=-1$, $\mu=0$,
$T=0.01$, $t=1$, $M=2$).} \label{fig1}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{imself.eps}
\caption{The imaginary part of the self-energy of conduction
electrons and that of localized electrons for various values of
$J$ ($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M$=2).}
\label{fig2}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{dosfcbw.eps}
\caption{Density of states of conduction ($\rho_{c}(\omega)$) and
localized ($\rho_{f}(\omega)$) electrons for various values of $J$
($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M=2$). }
\label{fig3}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{imchi.eps}
\caption{Local spin susceptibility for various values of $J$
($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M=2$).}
\label{fig4}
\end{figure}
Figure \ref{fig1} shows the phase diagram of the strongly
disordered Anderson lattice model in the plane of $(V, J)$, where
$V$ and $J$ set the variances of the hybridization and RKKY couplings,
respectively. The phase boundary is characterized by $|b|^{2} =
0$, below which $|b|^{2} \not= 0$ appears, causing an effective
hybridization between conduction electrons and localized fermions
although our numerical analysis shows $\langle
f^{\dagger}_{\sigma} c_{\sigma} \rangle =0$, meaning
$\Sigma_{cf(fc)}(i\omega) = 0$ and $G_{cf(fc)}(i\omega) = 0$ in
Eqs. (\ref{Sigma_CF_MFT}) and (\ref{Sigma_FC_MFT}).
In Fig. \ref{fig2} one finds that the effective hybridization
enhances the scattering rate of conduction electrons dramatically
around the Fermi energy while the scattering rate for localized
electrons becomes reduced at the resonance energy. Enhancement of
the imaginary part of the conduction-electron self-energy results
from the Kondo effect. In the clean situation it is given by the
delta function associated with the Kondo effect
\cite{Hewson_Book}. This self-energy effect is reflected in the spectral
function, shown in Fig. \ref{fig3}, where the pseudogap feature
arises in conduction electrons while the sharply defined peak
appears in localized electrons, identified with the Kondo
resonance although the description of the Kondo effect differs
from the clean case. Increasing the RKKY coupling, the Kondo
effect is suppressed as expected. In this Kondo phase the local
spin susceptibility is given by Fig. \ref{fig4}, displaying the
typical $\omega$-linear behavior in the low frequency limit,
nothing but the Fermi liquid physics for spin correlations
\cite{Olivier}. Increasing $J$, incoherent spin correlations are
enhanced, consistent with spin liquid physics \cite{Olivier}.
One can check our calculation by considering the $J = 0$ limit,
which recovers the known result. In this limit we obtain an analytic
expression for $V_c$ at half filling ($\mu=0$)
\begin{eqnarray}
V_c(J=0) &=& \sqrt{\frac{E_d}{2 P_c }}, \\
P_c &=& \int_{-1}^{1} d\omega \rho_{0}(\omega)
\frac{f(\omega/T)-f(0)}{\omega} ,
\end{eqnarray}
where $\rho_{0}(\omega)=\frac{2}{\pi} \sqrt{1-\omega^2}$ is the
bare density of states of conduction electrons. One can check
$V_c(J=0) \rightarrow 0$ in the zero temperature limit because
$|P_{c}| \rightarrow \infty$.
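As a numerical illustration of this check (using the parameters of Fig. \ref{fig1}, $E_{d}=-1$ and $T=0.01$, and handling the removable singularity of the integrand at $\omega = 0$ by its limiting value), one may evaluate $P_{c}$ and $V_{c}(J=0)$ directly:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import expit          # f(x) = expit(-x)

T, E_d = 0.01, -1.0

def integrand(w):
    # rho_0(w) * [f(w/T) - f(0)] / w, with the w -> 0 limit -rho_0(0)/(4T)
    rho0 = (2.0 / np.pi) * np.sqrt(max(1.0 - w * w, 0.0))
    if abs(w) < 1e-12:
        return -rho0 / (4.0 * T)
    return rho0 * (expit(-w / T) - 0.5) / w

P_c, _ = quad(integrand, -1.0, 1.0, limit=400)
V_c = np.sqrt(E_d / (2.0 * P_c))         # the ratio E_d/(2 P_c) is positive
print(P_c, V_c)
\end{verbatim}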
\section{Nature of quantum criticality}
\subsection{Beyond the saddle-point analysis : Non-crossing approximation}
Resorting to the slave-boson mean-field approximation, we
discussed the phase diagram of the strongly disordered Anderson
lattice model, where a quantum phase transition appears from a
spin liquid state to a dirty ``heavy-fermion'' Fermi liquid phase,
increasing $V/J$, the ratio of variances of the hybridization and
RKKY interactions. Differentiated from the heavy-fermion quantum
transition in the clean situation, the order parameter turns out
to be the density of holons instead of the holon condensation.
Evaluating self-energies for both conduction electrons and
localized electrons, we could identify the Kondo effect from each
spectral function. In addition, we obtained the local spin
susceptibility consistent with the Fermi liquid physics.
The next task concerns the nature of quantum criticality between
the Kondo and spin liquid phases. This question should be
addressed beyond the saddle-point analysis. Introducing quantum
corrections in the non-crossing approximation, justified in the
$M\rightarrow \infty$ limit, we investigate the quantum critical
point, where density fluctuations of holons are critical.
Relaxing the slave-boson mean-field approximation to take into
account holon excitations, we reach the following self-consistent
equations for self-energy corrections,
\begin{eqnarray}
\Sigma_{c \;\sigma\sigma'}^{\;ab}(\tau) = \frac{V^2}{M} G_{f \;
\sigma\sigma'}^{\; a b}(\tau) G_{b}^{a b}(-\tau) + \frac{t^2}{M^2}
\delta_{\sigma\sigma'} G_{c \; \sigma}^{\; a b}(\tau) ,
\label{Sigma_C_NCA}
\end{eqnarray}
\begin{eqnarray}
\Sigma_{f \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{c \;
\sigma\sigma'}^{\; a b}(\tau) G_{b}^{a b}(\tau) \nn &+&
\frac{J^2}{2 M} \sum_{s s'} G_{f \; s s'}^{\; a b}(\tau) [
R^{ab}_{s\sigma \sigma' s'}(\tau) + R^{ba}_{\sigma' s' s
\sigma}(-\tau) ] , \label{Sigma_F_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{cf \; \sigma\sigma'}^{\;\; ab}(\tau) = - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c} \int
d\tau_1 \langle f^{\dagger c}_{s} c^{c}_{s} \rangle G_{b}^{c
a}(\tau_1-\tau') , \label{Sigma_CF_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) = - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c}\int
d\tau_1 \langle c^{\dagger c}_{s} f^{c}_{s} \rangle G_{b}^{a
c}(\tau-\tau_1) , \label{Sigma_FC_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{b}^{a b}(\tau) = \frac{V^2}{M} \sum_{\sigma\sigma'} G_{f
\; \sigma\sigma'}^{\; b a}(\tau) G_{c \; \sigma'\sigma}^{\; b
a}(-\tau) . \label{Sigma_B_NCA}
\end{eqnarray}
Since we considered the paramagnetic and replica symmetric phase,
it is natural to assume such symmetries at the quantum critical
point. Note that the off diagonal self-energies,
$\Sigma_{cf}(i\omega_l)$ and $\Sigma_{fc}(i\omega_l)$, are just
constants and proportional to $\langle f^{\dagger}_{\sigma}
c_{\sigma} \rangle$ and $\langle c^{\dagger}_{\sigma} f_{\sigma}
\rangle$, respectively. As a result, $\Sigma_{cf}(i\omega_l) =
\Sigma_{fc}(i\omega_l) = 0$ should be satisfied at the quantum
critical point, as in the Kondo phase, because $\langle
f^{\dagger}_{\sigma} c_{\sigma} \rangle = \langle
c^{\dagger}_{\sigma} f_{\sigma} \rangle = 0$. Then, we reach the
following self-consistent equations called the non-crossing
approximation
\begin{eqnarray}
\Sigma_{c}(\tau) &=& \frac{V^2}{M} G_{f}(\tau) G_{b}(-\tau) +
\frac{t^2}{M^2} G_{c}(\tau) ,
\label{Sigma_C_NCA_GF} \\
\Sigma_{f}(\tau) &=& \frac{V^2}{M} G_{c}(\tau) G_{b}(\tau) - J^2
[G_{f}(\tau)]^2 G_{f}(-\tau) , \label{Sigma_F_NCA_GF} \\
\Sigma_{b}(\tau) &=& V^2 G_{c}(-\tau) G_{f}(\tau) .
\label{Sigma_B_NCA_GF}
\end{eqnarray}
Local Green's functions are given by
\begin{eqnarray}
G_{c}(i\omega_l) &=& \Big[i\omega_l + \mu - \Sigma_{c}(i\omega_l)
\Big]^{-1} , \label{Dyson_Gc} \\
G_{f}(i\omega_l) &=& \Big[i\omega_l - E_d -\lambda -
\Sigma_{f}(i\omega_l) \Big]^{-1} , \label{Dyson_Gf} \\
G_{b}(i \nu_{l}) &=& \Big[ i\nu_{l} -\lambda -\Sigma_{b}(i\nu_l)
\Big]^{-1} , \label{Dyson_Gb}
\end{eqnarray}
where $\omega_l=(2 l+1) \pi T$ is for fermions and $\nu_{l} = 2 l
\pi T$ is for bosons.
\subsection{Asymptotic behavior at zero temperature }
At quantum criticality, power-law scaling solutions are
expected. Actually, if the second term is neglected in Eq.
(\ref{Sigma_F_NCA_GF}), Eqs. (\ref{Sigma_F_NCA_GF}) and
(\ref{Sigma_B_NCA_GF}) are reduced to those of the multi-channel
Kondo effect in the non-crossing approximation \cite{Hewson_Book}.
Power-law solutions are well known in the regime of $1/T_K \ll
\tau \ll \beta=1/T \rightarrow \infty$, where $T_{K} =
D[\Gamma_{c}/\pi D]^{1/M} \exp[\pi E_{d}/M \Gamma_{c}]$ is an
effective Kondo temperature \cite{Tien_Kim} with the conduction
bandwidth $D$ and effective hybridization $\Gamma_{c} = \pi
\rho_{c} \frac{V^{2}}{M}$. In the presence of the RKKY interaction
[the second term in Eq. (\ref{Sigma_F_NCA_GF})], the effective
hybridization will be reduced, where $\Gamma_{c}$ is replaced with
$\Gamma_{c}^{J} \approx \pi \rho_{c} (\frac{V^{2}}{M} - J^{2})$.
Our power-law ansatz is as follows
\begin{eqnarray}
G_{c} &=& \frac{A_c}{\tau^{\Delta_c}} , \\
G_{f} &=& \frac{A_f}{\tau^{\Delta_f}} , \\
G_{b} &=& \frac{A_b}{\tau^{\Delta_b}} ,
\end{eqnarray} where $A_{c}$, $A_{f}$, and $A_{b}$ are positive
numerical constants. In the frequency space these are
\begin{eqnarray}
G_{c}(\omega) &=& A_c C_{\Delta_{c}-1} \omega^{\Delta_c-1}, \label{Dyson_W_Gc} \\
G_{f}(\omega) &=& A_f C_{\Delta_{f}-1} \omega^{\Delta_f-1}, \label{Dyson_W_Gf} \\
G_{b}(\omega) &=& A_b C_{\Delta_{b}-1} \omega^{\Delta_b-1},
\label{Dyson_W_Gb}
\end{eqnarray}
where $C_{\Delta_{c,f,b}} = \int_{-\infty}^{\infty} d x \frac{e^{i
x}}{x^{\Delta_{c,f,b}+1}}.$
Inserting Eqs. (\ref{Dyson_W_Gc})-(\ref{Dyson_W_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we obtain scaling
exponents of $\Delta_{c}$, $\Delta_{f}$, and $\Delta_{b}$. In
appendix C-1 we show how to find such critical exponents in
detail. Two fixed points are allowed. One coincides with the
multi-channel Kondo effect, given by $\Delta_{c} = 1$, and
$\Delta_{f} = \frac{M}{M+1}$, $\Delta_{b} = \frac{1}{M+1}$ with $M
= 2$, where contributions from spin fluctuations to self-energy
corrections are irrelevant, compared with holon fluctuations. The
other is $\Delta_{c} = 1$ and $\Delta_{f} = \Delta_{b} =
\frac{1}{2}$, where spin correlations are critical as much as
holon fluctuations.
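For orientation, and only as a heuristic consistency check of these two solutions (the systematic treatment is that of appendix C-1), one may match powers of imaginary time directly in Eqs. (\ref{Sigma_F_NCA_GF}) and (\ref{Sigma_B_NCA_GF}), assuming that the bare frequency terms in Eqs. (\ref{Dyson_Gf}) and (\ref{Dyson_Gb}) are subleading at criticality, so that $\Sigma_{f}(\omega) \sim \omega^{1-\Delta_{f}}$ and $\Sigma_{b}(\omega) \sim \omega^{1-\Delta_{b}}$, while the conduction electrons remain regular, $\Delta_{c} = 1$:
\begin{eqnarray}
&& \Sigma_{b}(\tau) \propto G_{c}(-\tau) G_{f}(\tau) \propto \tau^{-(\Delta_{c}+\Delta_{f})}
\;\; \Rightarrow \;\; 1 - \Delta_{b} = \Delta_{c} + \Delta_{f} - 1 , \nonumber \\
&& \Sigma_{f}(\tau)\big|_{\rm holon} \propto G_{c}(\tau) G_{b}(\tau) \propto \tau^{-(\Delta_{c}+\Delta_{b})}
\;\; \Rightarrow \;\; 1 - \Delta_{f} = \Delta_{c} + \Delta_{b} - 1 , \nonumber \\
&& \Sigma_{f}(\tau)\big|_{\rm RKKY} \propto [G_{f}(\tau)]^{2} G_{f}(-\tau) \propto \tau^{-3\Delta_{f}}
\;\; \Rightarrow \;\; 1 - \Delta_{f} = 3\Delta_{f} - 1 . \nonumber
\end{eqnarray}
The first two relations give $\Delta_{f} + \Delta_{b} = 1$, which both fixed points satisfy; the third is obeyed only for $\Delta_{f} = 1/2$ and thus selects the second fixed point, whereas when the RKKY term is subleading the multi-channel Kondo values $\Delta_{f} = \frac{M}{M+1}$ and $\Delta_{b} = \frac{1}{M+1}$ are recovered.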
One can understand the critical exponent $\Delta_{f} = 1/2$ as reflecting the
proximity to the spin liquid physics \cite{Sachdev_SG}.
Considering the $V \rightarrow 0$ limit, we obtain the scaling
exponents of $\Delta_c = 1$ and $\Delta_f = 1/2$ from the scaling
equations (\ref{92}) and (\ref{93}). Thus, $G_{c}(\omega) \sim
\mbox{sgn}(\omega)$ and $G_{f}(\omega) \sim 1/\sqrt{\omega}$
result for $\omega \rightarrow 0$. In this respect both spin
fluctuations and holon excitations are critical with equal strength
at this quantum critical point.
\subsection{Finite temperature scaling behavior}
We solve Eqs. (\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}) in the
regime $\tau, \beta \gg 1/T_K$ with arbitrary $\tau/\beta$, where
the scaling ansatz at zero temperature is generalized as follows
\begin{eqnarray}
G_{c}(\tau) &=& A_{c} \beta^{-\Delta_{c}}
g_{c}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gc} \\
G_{f}(\tau) &=& A_{f} \beta^{-\Delta_{f}}
g_{f}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gf} \\
G_{b}(\tau) &=& A_{b} \beta^{-\Delta_{b}}
g_{b}\Big(\frac{\tau}{\beta} \Big) . \label{Dyson_T_Gb}
\end{eqnarray}
\begin{eqnarray}
g_{\alpha}(x) = \bigg(\frac{\pi}{\sin(\pi
x)}\bigg)^{\Delta_\alpha} \label{T_Scaling}
\end{eqnarray}
with $\alpha=c,f,b$ is the scaling function at finite
temperatures. In the frequency space we obtain
\begin{eqnarray}
G_{c}(i\omega_l) &=& A_c \beta^{1-\Delta_c}
\Phi_c(i\bar{\omega}_l) , \label{Dyson_TW_Gc} \\
G_{f}(i\omega_l) &=& A_f \beta^{1-\Delta_f}
\Phi_f(i\bar{\omega}_l) , \label{Dyson_TW_Gf} \\
G_{b}(i\nu_l) &=& A_b \beta^{1-\Delta_b} \Phi_b(i\bar{\nu}_l) ,
\label{Dyson_TW_Gb}
\end{eqnarray}
where $\bar{\omega}_l=(2 l+1) \pi$, $\bar{\nu}_l= 2 l \pi$, and
\begin{eqnarray}
\Phi_{\alpha}(i\bar{x}) = \int_{0}^{1} d t e^{i \bar{x} t}
g_{\alpha}(t) . \label{Phi_alpha}
\end{eqnarray}
Inserting Eqs. (\ref{Dyson_TW_Gc})-(\ref{Dyson_TW_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we find two fixed
points, essentially the same as the case of $T = 0$. But, scaling
functions of $\Phi_c(i\bar{\omega}_l)$, $\Phi_f(i\bar{\omega}_l)$,
and $\Phi_b(i\bar{\nu}_l)$ are somewhat complicated. All
scaling functions are derived in appendix C-2.
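For completeness, a minimal numerical sketch (in Python) of Eq. (\ref{Phi_alpha}) at the second fixed point, $\Delta_{f}=\Delta_{b}=1/2$, where the endpoint singularities of $g_{\alpha}$ are integrable, is given below; the choice of Matsubara index is arbitrary and serves only as an example.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Delta = 0.5   # Delta_f = Delta_b = 1/2 (second fixed point only)

def Phi(l, bosonic=False):
    # Phi_alpha(i xbar) = int_0^1 dt e^{i xbar t} (pi / sin(pi t))^Delta;
    # the endpoint singularities at t = 0, 1 are integrable for Delta < 1.
    xbar = 2 * l * np.pi if bosonic else (2 * l + 1) * np.pi
    g = lambda s: (np.pi / np.sin(np.pi * s)) ** Delta
    re, _ = quad(lambda s: np.cos(xbar * s) * g(s), 0.0, 1.0, limit=400)
    im, _ = quad(lambda s: np.sin(xbar * s) * g(s), 0.0, 1.0, limit=400)
    return re + 1j * im

print(Phi(0), Phi(0, bosonic=True))
\end{verbatim}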
\subsection{Spin susceptibility}
We evaluate the local spin susceptibility, given by
\begin{eqnarray}
\chi(\tau) &=& G_{f}(\tau) G_{f}(-\tau) , \nonumber \\
&=& A_f^2 \beta^{-2 \Delta_f} \bigg(\frac{\pi}{\sin(\pi
\tau/\beta)} \bigg)^{2\Delta_f} . \label{126}
\end{eqnarray}
The imaginary part of the spin susceptibility
$\chi^{''}(\omega)={\rm Im} \; \chi(\omega+ i0^{+})$ can be found
from
\begin{eqnarray}
\chi(\tau) = \int \frac{d \omega}{\pi} \frac{e^{-\tau
\omega}}{1-e^{-\beta \omega}} \chi^{''}(\omega) . \label{127}
\end{eqnarray}
Inserting the scaling ansatz
\begin{eqnarray}
\chi^{''}(\omega) = A_f^2 \beta^{1-2\Delta_f}
\phi\Big(\frac{\omega}{T}\Big) \label{128}
\end{eqnarray}
into Eq. (\ref{127}) with Eq. (\ref{126}), we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} \frac{e^{-x \tau/\beta}}{1-e^{-x}} \phi(x) =
\bigg(\frac{\pi}{\sin(\pi \tau/\beta)} \bigg)^{2\Delta_f} .
\end{eqnarray}
Changing the variable $t=i(\tau/\beta -1/2)$, we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} e^{i x t} \frac{\phi(x)}{e^{x}-e^{-x}} =
\bigg(\frac{\pi}{\cosh(\pi t)} \bigg)^{2\Delta_f} .
\end{eqnarray}
As a result, we find the scaling function
\begin{eqnarray}
\phi(x) = 2 (2\pi)^{2 \Delta_f-1} \sinh\Big(\frac{x}{2}\Big)
\frac{\Gamma(\Delta_f+i x/2 \pi)\Gamma(\Delta_f - i
x/2\pi)}{\Gamma(2\Delta_f)} . \nn
\end{eqnarray}
This coincides with the spin spectrum of the spin liquid state
when $V = 0$ \cite{Olivier}.
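At $\Delta_{f} = 1/2$ the scaling function reduces, via the identity $|\Gamma(1/2 + iy)|^{2} = \pi/\cosh(\pi y)$, to $\phi(x) = 2\pi \tanh(x/2)$, i.e., $\chi''(\omega) \propto \tanh(\omega/2T)$, the marginal Fermi-liquid form. A short numerical verification of this reduction (in Python) is:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

Delta_f = 0.5
x = np.linspace(-20.0, 20.0, 401)

g = gamma(Delta_f + 1j * x / (2 * np.pi)) * gamma(Delta_f - 1j * x / (2 * np.pi))
phi = (2 * (2 * np.pi) ** (2 * Delta_f - 1)
       * np.sinh(x / 2) * g.real / gamma(2 * Delta_f))

print(np.allclose(phi, 2 * np.pi * np.tanh(x / 2)))   # True: marginal FL form
\end{verbatim}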
\subsection{Discussion : Deconfined local quantum criticality}
The local quantum critical point characterized by $\Delta_{c} = 1$
and $\Delta_{f} = \Delta_{b} = 1/2$ is the genuine critical point
in the spin-liquid to local Fermi-liquid transition because such a
fixed point can be connected to the spin liquid state ($\Delta_{c}
= 1$ and $\Delta_{f} = 1/2$) naturally. This fixed point results
from the fact that the spinon self-energy correction from RKKY
spin fluctuations is exactly the same order as that from critical
holon excitations. It is straightforward to see that the critical
exponent of the local spin susceptibility is exactly the same as
that of the local charge susceptibility ($2\Delta_{f} =
2\Delta_{b} = 1$), proportional to $1/\tau$. Since the spinon
spin-density operator differs from the holon charge-density
operator with respect to symmetry at the lattice scale, the same
critical exponent implies enhancement of the original symmetry at
low energies. The symmetry enhancement sometimes allows a
topological term, which assigns a nontrivial quantum number to a
topological soliton, identified with an excitation of quantum
number fractionalization. This mathematical structure is actually
realized in an antiferromagnetic spin chain \cite{Tsvelik_Book},
generalized into the two dimensional case
\cite{Senthil_DQCP,Tanaka_SO5}.
We propose the following local field theory in terms of physically
observable fields \bqa Z_{eff} &=& \int D
\boldsymbol{\Psi}^{a}(\tau)
\delta\Bigl(|\boldsymbol{\Psi}^{a}(\tau)|^{2} - 1\Bigr) e^{-
\mathcal{S}_{eff}} , \nn \mathcal{S}_{eff} &=& - \frac{g^{2}}{2M}
\int_{0}^{\beta} d \tau \int_{0}^{\beta} d \tau'
\boldsymbol{\Psi}^{a T}(\tau)
\boldsymbol{\Upsilon}^{ab}(\tau-\tau')
\boldsymbol{\Psi}^{b}(\tau') \nn &+& \mathcal{S}_{top} ,
\label{O4_Sigma_Model} \eqa where \bqa &&
\boldsymbol{\Psi}^{a}(\tau) = \left(
\begin{array}{c} \boldsymbol{S}^{a}(\tau) \\ \rho^{a}(\tau)
\end{array} \right) \eqa represents an $O(4)$ vector, satisfying
the constraint of the delta function.
$\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$ determines dynamics of
the $O(4)$ vector, resulting from spin and holon dynamics in
principle. However, it is extremely difficult to derive Eq.
(\ref{O4_Sigma_Model}) from Eq. (\ref{DMFT_Action}) because the
density part for the holon field in Eq. (\ref{O4_Sigma_Model})
cannot result from Eq. (\ref{DMFT_Action}) in a standard way. What
we have shown is that the renormalized dynamics for the O(4)
vector field follows $1/\tau$ asymptotically, where $\tau$ is the
imaginary time. This information should be introduced in
$\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$. $g \propto V/J$ is an
effective coupling constant, and $\mathcal{S}_{top}$ is a possible
topological term.
One can represent the O(4) vector generally as follows
\begin{widetext} \bqa \boldsymbol{\Psi}^{a} : \tau \longrightarrow
\Bigl( \sin \theta^{a}(\tau) \sin \phi^{a}(\tau) \cos
\varphi^{a}(\tau) , \sin \theta^{a}(\tau) \sin \phi^{a}(\tau) \sin
\varphi^{a}(\tau) , \sin \theta^{a}(\tau) \cos \phi^{a}(\tau) ,
\cos \theta^{a}(\tau) \Bigr) , \label{O4_Vector} \eqa
\end{widetext} where $\theta^{a}(\tau), \phi^{a}(\tau),
\varphi^{a}(\tau)$ are three angle coordinates for the O(4)
vector. It is essential to observe that the target manifold for
the O(4) vector is not a simple sphere type, but more complicated
because the last component of the O(4) vector is the charge
density field, where three spin components lie in $- 1 \leq
S^{a}_{x}(\tau), S^{a}_{y}(\tau), S^{a}_{z}(\tau) \leq 1$ while
the charge density should be positive, $0 \leq \rho^{a}(\tau) \leq
1$. This leads us to identify the lower half sphere with the upper
half sphere. Considering that $\sin\theta^{a}(\tau)$ can be folded
on $\pi/2$, we are allowed to construct our target manifold to
have a periodicity, given by
$\boldsymbol{\Psi}^{a}(\theta^{a},\phi^{a},\varphi^{a}) =
\boldsymbol{\Psi}^{a}(\pi - \theta^{a},\phi^{a},\varphi^{a})$.
This folded space allows a nontrivial topological excitation.
Consider a configuration with boundary values
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = 0)$ and
$\boldsymbol{\Psi}^{a}(\pi,\phi^{a},\varphi^{a}; \tau = \beta)$,
connected by $\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0
< \tau < \beta)$. Interestingly, this configuration is {\it
topologically} distinguishable from the configuration of
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = 0)$ and
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = \beta)$ with
$\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <
\beta)$ because of the folded structure. The second configuration
shrinks to a point while the first one cannot, and is thus identified
with a topologically nontrivial excitation. This topological
excitation carries a spin quantum number $1/2$ in its core, given
by $\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <
\beta) = \Bigl( \sin \phi^{a}(\tau) \cos \varphi^{a}(\tau) , \sin
\phi^{a}(\tau) \sin \varphi^{a}(\tau) , \cos \phi^{a}(\tau) , 0
\Bigr)$. This is the spinon excitation, described by an O(3)
nonlinear $\sigma$ model with the nontrivial spin correlation
function $\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$, where the
topological term is reduced to the single spin Berry phase term in
the instanton core.
In this local impurity picture the local Fermi liquid phase is
described by gapping of instantons while the spin liquid state is
characterized by condensation of instantons. Of course, the low
dimensionality does not allow condensation, resulting in critical
dynamics for spinons. This scenario clarifies the
Landau-Ginzburg-Wilson forbidden duality between the Kondo singlet
and the critical local moment for the impurity state, allowed by
the presence of the topological term.
If the symmetry enhancement does not occur, the effective local
field theory will be given by \bqa Z_{eff} &=& \int
D\boldsymbol{S}^{a}(\tau) D \rho^{a}(\tau) e^{- \mathcal{S}_{eff}}
, \nn \mathcal{S}_{eff} &=& - \int_{0}^{\beta} d \tau
\int_{0}^{\beta} d \tau' \Bigl\{ \frac{V^{2}}{2M} \rho^{a}(\tau)
\chi^{ab}(\tau-\tau') \rho^{b}(\tau') \nn &+& \frac{J^{2}}{2M}
\boldsymbol{S}^{a}(\tau) R^{ab} (\tau-\tau')
\boldsymbol{S}^{b}(\tau') \Bigr\} + \mathcal{S}_{B} \eqa with the
single-spin Berry phase term \bqa \mathcal{S}_{B} = - 2 \pi i S
\int_{0}^{1} d u \int_{0}^{\beta} d \tau \frac{1}{4\pi}
\boldsymbol{S}^{a}(u,\tau)
\partial_{u} \boldsymbol{S}^{a}(u,\tau) \times
\partial_{\tau} \boldsymbol{S}^{a}(u,\tau) , \nonumber \eqa where charge
dynamics $\chi^{ab}(\tau-\tau')$ will be different from spin
dynamics $R^{ab} (\tau-\tau')$. This will not allow the spin
fractionalization for the critical impurity dynamics, where the
instanton construction is not realized due to the absence of the
symmetry enhancement.
\section{Summary}
In this paper we have studied the Anderson lattice model with
strong randomness in both hybridization and RKKY interactions,
where their average values are zero. In the absence of random
hybridization, quantum fluctuations in spin dynamics render the spin
glass phase unstable at finite temperatures, giving rise to the
spin liquid state, characterized by the $\omega/T$ scaling spin
spectrum consistent with the marginal Fermi-liquid phenomenology
\cite{Sachdev_SG}. In the absence of random RKKY interactions the
Kondo effect arises \cite{Kondo_Disorder}, but differentiated from
that in the clean case. The dirty ``heavy fermion'' phase of the
strongly disordered Kondo coupling is characterized by a finite
density of holons instead of the holon condensation. However, an
effective hybridization does exist, causing the Kondo resonance
peak in the spectral function. As long as the variation of the
effective Kondo temperature is not too large, this disordered Kondo
phase is identified with the local Fermi liquid state because
essential physics results from single impurity dynamics,
differentiated from the clean lattice model.
Taking into account both random hybridization and RKKY
interactions, we find the quantum phase transition from the spin
liquid state to the local Fermi liquid phase at the critical
$(V_{c}, J_{c})$. Each phase turns out to be adiabatically
connected with each limit, i.e., the spin liquid phase when $V =
0$ and the local Fermi liquid phase when $J = 0$, respectively.
Actually, we have checked this physics by considering the local spin
susceptibility and the spectral function for localized electrons.
In order to investigate quantum critical physics, we introduce
quantum corrections from critical holon fluctuations in the
non-crossing approximation beyond the slave-boson mean-field
analysis. We find two kinds of power-law scaling solutions for
self-energy corrections of conduction electrons, spinons, and
holons. The first solution turns out to coincide with that of the
multi-channel Kondo effect, where effects of spin fluctuations are
sub-leading, compared with critical holon fluctuations. In this
respect this quantum critical point is characterized by breakdown
of the Kondo effect while spin fluctuations can be neglected. On
the other hand, the second scaling solution shows that both holon
excitations and spinon fluctuations are critical with the same
strength, reflected in the fact that the density-density
correlation function of holons has exactly the same critical
exponent as the local spin-spin correlation function of spinons.
We argued that the second quantum critical point implies an
enhanced emergent symmetry from O(3)$\times$O(2)
(spin$\otimes$charge) to O(4) at low energies, forcing us to
construct an O(4) nonlinear $\sigma$ model on the folded target
manifold as an effective field theory for this disorder-driven
local quantum critical point. Our effective local field theory
identifies spinons with instantons, describing the local
Fermi-liquid to spin-liquid transition as a condensation
transition of instantons, although the instanton dynamics remains
critical in the spin liquid state rather than truly condensing,
owing to the low dimensionality. This construction completes a novel duality
between the Kondo and critical local moment phases in the strongly
disordered Anderson lattice model.
We explicitly checked that a similar result can be found in the
extended DMFT for the clean Kondo lattice model, where two fixed
point solutions are allowed \cite{EDMFT_Spin,EDMFT_NCA}. One is
the same as the multi-channel Kondo effect and the other is
essentially the same as the second solution in this paper. In this
respect we believe that the present scenario works in the extended
DMFT framework, although it is applicable only to two spatial dimensions
\cite{EDMFT}.
One may question the applicability of the DMFT framework to this
disorder problem. However, the hybridization term turns out to be
exactly local in the case of strong randomness while the RKKY term
is safely approximated to be local for the spin liquid state,
expected to be stable against the spin glass phase in the case of
quantum spins. This situation should be distinguished from the
clean case, where the DMFT approximation causes several problems
such as the stability of the spin liquid state \cite{EDMFT_Rosch}
and a strong dependence on the dimensionality of the spin dynamics
\cite{EDMFT}.
\section*{Acknowledgement}
This work was supported by the National Research Foundation of
Korea (NRF) grant funded by the Korea government (MEST) (No.
2010-0074542). M.-T. was also supported by the Vietnamese
NAFOSTED.
\section{Introduction}
The moment problem is a classical question in analysis, well studied because of its
importance and variety of applications. A simple example is the (univariate) Hamburger
moment problem: when does a given sequence of real numbers represent the successive
moments $\int\! x^n\, d\mu(x)$ of a positive Borel measure $\mu$ on $\mathbb R$?
Equivalently, which linear functionals $L$ on univariate real polynomials are
integration with respect to some $\mu$? By Haviland's theorem \cite{Hav}
this is the case if and only if $L$ is nonnegative on all polynomials nonnegative on
$\mathbb R$. Thus Haviland's theorem relates the moment problem to positive polynomials. It
holds in several variables and also if we are interested in restricting the support of
$\mu$. For details we refer the reader to one of the many beautiful expositions of this
classical branch of functional analysis, e.g.~\cite{Akh,KN,ST}.
Since Schm\"udgen's celebrated solution of the moment problem
on compact basic closed semialgebraic sets \cite{Smu},
the moment problem has played a prominent role in real algebra,
exploiting this duality between positive polynomials and the
moment problem, cf.~\cite{KM,PS,Put,PV}.
The survey of Laurent \cite{laurent2} gives a nice presentation of
up-to-date results and applications;
see also \cite{Mar,PD} for more on positive polynomials.
Our main motivation comes from trace-positive polynomials in non-commuting
variables. A polynomial is called \emph{trace-positive} if all
its matrix evaluations (of \emph{all} sizes) have nonnegative trace.
Trace-positive polynomials have been employed to investigate
problems on
operator algebras (Connes' embedding conjecture \cite{connes,ksconnes})
and mathematical physics (the Bessis-Moussa-Villani conjecture
\cite{bmv,ksbmv}), so a good understanding of this set is desired.
By duality this leads us to consider the tracial moment problem
introduced below.
We mention that the free non-commutative moment problem
has been studied and solved by
McCullough \cite{McC} and Helton \cite{helton}.
Hadwin \cite{had} considered
moments involving traces on von Neumann algebras.
This paper is organized as follows. The short Section \ref{sec:basic}
fixes notation and terminology involving non-commuting variables used in the sequel.
Section \ref{sec:ttmp} introduces
tracial moment sequences,
tracial moment matrices,
the tracial moment problem, and their truncated counterparts.
Our main results in this section relate the truncated tracial moment problem
to flat extensions of tracial moment matrices and resemble the
results of Curto and Fialkow \cite{cffinite,cfflat} on the (classical)
truncated moment problem. For example,
we prove
that a tracial sequence can be represented with tracial moments of
matrices
if its corresponding tracial moment matrix is positive semidefinite and of finite
rank (Theorem \ref{thm:finiterank}).
A truncated tracial sequence allows for such a representation
if and only if one of its extensions admits a flat extension (Corollary
\ref{cor:flatt}).
Finally, in Section \ref{sec:poly} we
explore the duality
between the tracial moment problem and trace-positivity of polynomials.
Throughout the paper several examples are given
to illustrate the theory.
\section{Basic notions}\label{sec:basic}
Let $\mathbb R\ax$ denote the unital associative $\mathbb R$-algebra freely generated
by $\ushort X=(X_1,\dots,X_n)$. The elements of $\mathbb R\ax$ are polynomials in the non-commuting
variables $X_1,\dots,X_n$ with coefficients in $\mathbb R$.
An element $w$ of the monoid $\ax$, freely generated by $\ushort X$,
is called a \textit{word}. An element of the form $aw$, where $0\neq a\in\mathbb R$
and $w\in\ax$, is called a \textit{monomial} and $a$ its \textit{coefficient}.
We endow $\mathbb R\ax$ with the \textit{involution} $p\mapsto p^*$ fixing $\mathbb R\cup\{\ushort X\}$
pointwise. Hence for each word $w\in\ax$, $w^*$ is its reverse. As an example, we have
$(X_1X_2^2-X_2X_1)^*=X_2^2X_1-X_1X_2$.
For $f\in\mathbb R\ax$ we will substitute symmetric matrices
$\ushort A=(A_1,\dots,A_n)$ of the same size for the variables $\ushort X$
and obtain a matrix $f(\ushort A)$. Since $f(\ushort A)$ is
not well-defined if the $A_i$ do not have the
same size, we will assume this condition implicitly without further mention in the sequel.
Let $\sym \mathbb R\ax$ denote the set of \emph{symmetric elements} in $\mathbb R\ax$, i.e.,
$$\sym \mathbb R\ax=\{f\in \mathbb R\ax\mid f^*=f\}.$$
Similarly, we use $\sym \mathbb R^{t\times t}$ to denote the set of all symmetric $t\times t$ matrices.
In this paper we will mostly consider the \emph{normalized} trace $\Tr$,
i.e.,
$$\Tr(A)=\frac 1t\tr(A)\quad\text{for } A\in\mathbb R^{t\times t}.$$
The invariance of the trace under cyclic permutations motivates the
following definition of cyclic equivalence \cite[p.~1817]{ksconnes}.
\begin{dfn}
Two polynomials $f,g\in \mathbb R\ax$ are \emph{cyclically equivalent}
if $f-g$ is a sum of commutators:
$$f-g=\sum_{i=1}^k(p_iq_i-q_ip_i) \text{ for some } k\in\mathbb N
\text{ and } p_i,q_i \in \mathbb R\ax.$$
\end{dfn}
\begin{remark}\label{rem:csim}
\mbox{}\par
\begin{enumerate}[(a)]
\item Two words $v,w\in\ax$ are cyclically equivalent if and only if $w$
is a cyclic permutation of $v$.
Equivalently: there exist $u_1,u_2\in\ax$ such that
$v=u_1u_2$ and $w=u_2u_1$.
\item If $f\stackrel{\mathrm{cyc}}{\thicksim} g$ then $\Tr(f(\ushort A))=\Tr(g(\ushort A))$ for all tuples
$\ushort A$ of symmetric matrices.
Less obvious is the converse: if $\Tr(f(\ushort A))=\Tr(g(\ushort A))$
for all $\ushort A$ and $f-g\in\sym\mathbb R\ax$, then $f\stackrel{\mathrm{cyc}}{\thicksim} g$ \cite[Theorem 2.1]{ksconnes}.
\item Although $f\stackrel{\mathrm{cyc}}{\nsim} f^*$ in general, we still have
$$\Tr(f(\ushort A))=\Tr(f^*(\ushort A))$$
for all $f\in\mathbb R \ax$ and all $\ushort A\in (\sym\mathbb R^{t\times t})^n$.
\end{enumerate}
\end{remark}
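The following small Python computation illustrates item (b) on a randomly chosen pair of symmetric matrices; the specific word $X_1X_2^2$, its cyclic permutation $X_2X_1X_2$, and the matrix size are arbitrary choices made only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t = 4
A = rng.standard_normal((t, t)); A = (A + A.T) / 2   # symmetric X_1
B = rng.standard_normal((t, t)); B = (B + B.T) / 2   # symmetric X_2

f = A @ B @ B        # X_1 X_2^2
g = B @ A @ B        # X_2 X_1 X_2, cyclically equivalent to X_1 X_2^2
print(np.isclose(np.trace(f) / t, np.trace(g) / t))  # normalized traces agree: True
\end{verbatim}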
The length of the longest word in a polynomial $f\in\mathbb R\ax$ is the
\textit{degree} of $f$ and is denoted by $\deg f$.
We write $\mathbb R\ax_{\leq k}$ for the set of all polynomials of degree $\leq k$.
\section{The truncated tracial moment problem}\label{sec:ttmp}
In this section we define tracial (moment) sequences,
tracial moment matrices,
the tracial moment problem, and their truncated analogs.
After a few motivating examples we proceed to show that the
kernel of a tracial moment matrix has some real-radical-like
properties (Proposition \ref{prop:radical}).
We then prove that a tracial moment matrix of finite
rank has a tracial moment representation, i.e., the tracial moment problem
for the associated tracial sequence is solvable (Theorem \ref{thm:finiterank}).
Finally, we give the solution of
the truncated tracial moment problem: a truncated tracial sequence has
a tracial representation if and only if one of its extensions has a tracial moment matrix that
admits a flat extension (Corollary \ref{cor:flatt}).
For an overview of the classical (commutative) moment problem in several
variables we refer
the reader to Akhiezer \cite{Akh} (for the analytic theory) and
to the survey of Laurent \cite{laurent} and references therein for a more
algebraic approach.
The standard references on the truncated moment problems are
\cite{cffinite,cfflat}.
For the non-commutative moment problem with \emph{free} (i.e.,
unconstrained) moments see
\cite{McC,helton}.
\begin{dfn}
A sequence of real numbers $(y_w)$ indexed by words $w\in \ax$ satisfying
\begin{equation}
y_w=y_u \text{ whenever } w\stackrel{\mathrm{cyc}}{\thicksim} u, \label{cyc}
\end{equation}
\begin{equation}
y_w=y_{w^*} \text{ for all } w, \label{cycstar}
\end{equation}
and $y_\emptyset=1$, is called a (normalized) \emph{tracial sequence}.
\end{dfn}
\begin{example}
Given $t\in\mathbb N$ and symmetric matrices $A_1,\dots,A_n\in \sym \mathbb R^{t\times t}$,
the sequence given by $$y_w:= \Tr(w(A_1,\dots,A_n))=\frac 1t \tr(w(A_1,\dots,A_n))$$
is a tracial sequence since by Remark \ref{rem:csim}, the traces of cyclically
equivalent words coincide.
\end{example}
We are interested in the converse of this example (the \emph{tracial moment problem}):
\emph{For which sequences $(y_w)$ do there exist $N\in \mathbb N$, $t\in \mathbb N$,
$\lambda_i\in \mathbb R_{\geq0}$ with $\sum_{i=1}^N \lambda_i=1$ and
vectors $\ushort A^{(i)}=(A_1^{(i)},\dots,A_n^{(i)})\in (\sym \mathbb R^{t\times t})^n$, such that
\begin{equation}
y_w=\sum_{i=1}^N \lambda_i \Tr(w(\ushort A^{(i)}))\,? \label{rep}
\end{equation}}
We then say that $(y_w)$ has a \emph{tracial moment representation}
and call it a \emph{tracial moment sequence}.
The \emph{truncated tracial moment problem} is the study of (finite) tracial sequences
$(y_w)_{\leq k}$
where $w$ is constrained by $\deg w\leq k$ for some $k\in\mathbb N$,
and properties \eqref{cyc} and \eqref{cycstar} hold for these $w$.
For instance, which sequences $(y_w)_{\leq k}$ have a tracial moment
representation, i.e., when does there
exist a representation of the values $y_w$ as in \eqref{rep} for $\deg w\leq k$?
If this is the case, then
the sequence $(y_w)_{\leq k}$ is called a \emph{truncated tracial moment sequence}.
\begin{remark}
\mbox{}\par
\begin{enumerate}[(a)]
\item
To keep a perfect analogy with the classical moment problem,
one would need to consider the existence of a positive
Borel measure $\mu$ on $(\sym \mathbb R^{t\times t})^n$ (for some
$t\in\mathbb N$) satisfying
\begin{equation}\label{eq:gewidmetmarkus}
y_w = \int \! w(\ushort A) \, d\mu(\ushort A).
\end{equation}
As we shall mostly focus on the \emph{truncated}
tracial moment problem in the sequel, the
finitary representations \eqref{rep} seem to be the
proper setting.
We look forward to studying the more general representations
\eqref{eq:gewidmetmarkus} in the future.
\item
Another natural extension of our tracial moment problem
with respect to matrices would be to consider moments obtained by
traces in finite \emph{von Neumann algebras} as
done by Hadwin \cite{had}.
However, our
primary motivation were trace-positive polynomials
defined via traces of matrices (see Definition \ref{def:trpos}),
a theme we expand upon in Section \ref{sec:poly}. Understanding these
is one of the approaches to Connes' embedding conjecture \cite{ksconnes}.
The notion dual to that of trace-positive polynomials is
the tracial moment problem as defined above.
\item The tracial moment problem
is a natural extension of the classical quadrature problem
dealing with
representability via atomic positive measures in
the commutative case. Taking $\ushort a^{(i)}$
consisting of $1\times 1$ matrices $a_j^{(i)}\in\mathbb R$
for the $\ushort A^{(i)}$
in \eqref{rep}, we have
$$y_w=\sum_i \lambda_i w(\ushort a^{(i)})= \int \!x^w \, d\mu(x),$$
where $x^w$ denotes the commutative collapse of $w\in\ax$.
The measure $\mu$ is the convex combination
$\sum \lambda_i\delta_{\ushort a^{(i)}}$
of the atomic measures $\delta_{\ushort a^{(i)}}$.
\end{enumerate}
\end{remark}
The next example shows that there are (truncated) tracial moment sequences $(y_w)$
which
cannot be written as $$y_w=\Tr(w(\ushort A)).$$
\begin{example}\label{exconv}
Let $X$ be a single free (non-commutative) variable.
We take the index set $J=(1,X,X^2,X^3,X^4)$ and $y=(1,1-\sqrt2,1,1-\sqrt2,1)$. Then
$$y_w=\frac{\sqrt2}{2}w(-1)+(1-\frac{\sqrt2}{2})w(1),$$ i.e.,
$\lambda_1=\frac{\sqrt2}{2}$, $\lambda_2=1-\lambda_1$ and $A^{(1)}=-1$, $A^{(2)}=1$.
But there is no symmetric matrix $A\in \mathbb R^{t\times t}$ for any $t\in\mathbb N$ such that
$y_w=\Tr(w(A))$ for all $w\in J$. The proof is given in the appendix.
\end{example}
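As a quick check of the representation given in Example \ref{exconv}: for $w=X^k$ one has $w(-1)=(-1)^k$ and $w(1)=1$, so
$$\frac{\sqrt2}{2}(-1)^k+\Big(1-\frac{\sqrt2}{2}\Big)=
\begin{cases} 1 & \text{if } k \text{ is even},\\ 1-\sqrt2 & \text{if } k \text{ is odd},\end{cases}$$
which reproduces the values $y=(1,1-\sqrt2,1,1-\sqrt2,1)$.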
The (infinite) \emph{tracial moment matrix} $M(y)$ of a tracial
sequence $y=(y_w)$ is defined by
$$M(y)=(y_{u^*v})_{u,v}.$$
This matrix is symmetric due to the condition \eqref{cycstar} in the
definition of a tracial sequence.
A necessary condition for $y$ to be a tracial moment sequence is positive
semidefiniteness of $M(y)$ which in general is not sufficient.
The tracial moment matrix of \emph{order $k$} is the tracial moment matrix $M_k(y)$
indexed by words $u,v$ with $\deg u,\deg v\leq k$.
If $y$ is a truncated tracial moment sequence, then $M_k(y)$ is positive
semidefinite. Here is an easy example showing the converse is false:
\begin{example}\label{expsd}
When dealing with two variables, we write $(X,Y)$ instead of $(X_1,X_2)$.
Taking the index set
$$(1,X,Y,X^2,XY,Y^2,X^3,X^2Y,XY^2,Y^3,X^4,X^3Y,X^2Y^2,XYXY,XY^3,Y^4)$$
the truncated moment sequence $$y=(1,0,0,1,1,1,0,0,0,0,4,0,2,1,0,4) $$ yields the
tracial moment matrix
$$M_2(y)=\left(\begin{smallmatrix}
1&0&0&1&1&1&1\\ 0&1&1&0&0&0&0\\ 0&1&1&0&0&0&0\\ 1&0&0&4&0&0&2\\
1&0&0&0&2&1&0\\ 1&0&0&0&1&2&0\\ 1&0&0&2&0&0&4
\end{smallmatrix}\right)$$
with respect to the basis $(1,X,Y,X^2,XY,YX,Y^2)$.
$M_2(y)$ is positive semidefinite but $y$ has no tracial representation.
Again, we postpone the proof until the appendix.
\end{example}
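To see how the entries of $M_2(y)$ in Example \ref{expsd} are read off from $y$, note for instance that
$(M_2(y))_{X^2,Y^2}=y_{(X^2)^*Y^2}=y_{X^2Y^2}=2$, while
$(M_2(y))_{XY,XY}=y_{(XY)^*XY}=y_{YX^2Y}=y_{X^2Y^2}=2$ and
$(M_2(y))_{XY,YX}=y_{(XY)^*YX}=y_{YXYX}=y_{XYXY}=1$ by cyclic equivalence.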
For a given polynomial $p=\sum_{w\in \ax} p_w w\in \mathbb R \ax$ let $\vv p$ be the
(column) vector of coefficients $p_w$ in a given fixed order.
One can identify $\mathbb R \ax_{\leq k}$ with $\mathbb R^\eta$
for $\eta=\eta(k)=\dim\mathbb R\ax_{\leq k}<\infty$ by sending each $p\in \mathbb R \ax_{\leq k}$ to the vector
$\vv p$ of its coefficients $p_w$ with $\deg w\leq k$.
The tracial moment matrix $M(y)$ induces the linear map
$$\varphi_M:\mathbb R\ax\to \mathbb R^\mathbb N,\quad p\mapsto M\vv p.$$ The tracial moment matrices $M_k(y)$,
indexed by $w$ with $\deg w\leq k$, can be regarded as linear maps
$\varphi_{M_k}:\mathbb R^\eta\to \mathbb R^\eta$, $\vv p\mapsto M_k\vv p$.
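For concreteness, in $n$ variables one has $\eta(k)=\dim\mathbb R\ax_{\leq k}=\sum_{j=0}^{k}n^j$; for example, $n=2$ and $k=2$ give $\eta=7$, matching the $7\times7$ matrix $M_2(y)$ of Example \ref{expsd}.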
\begin{lemma}\label{lem:mk}
Let $M=M(y)$ be a tracial moment matrix. Then the following holds:
\begin{enumerate}[\rm (1)]
\item $p(y):=\sum_w p_w y_w={\vv{1}}^*M\vv{p}$. In particular,
${\vv{1}}^*M\vv{p}={\vv{1}}^*M\vv{q}$ if $p\stackrel{\mathrm{cyc}}{\thicksim} q$;
\item ${\vv{p}}^*M\vv{q}={\vv{1}}^*M\vv{p^*q}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $p,q\in \mathbb R \ax$. For $k:=\max \{\deg p,\deg q\}$, we have
\begin{equation}
{\vv{p}}^*M(y)\vv{q}={\vv{p}}^*M_k(y)\vv{q}.
\end{equation}
Both statements now follow by direct calculation.
\end{proof}
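For the reader's convenience, the direct calculation reads
$${\vv 1}^*M\vv p=\sum_w p_w\,y_w=p(y)
\qquad\text{and}\qquad
{\vv p}^*M\vv q=\sum_{u,v}p_u q_v\,y_{u^*v}={\vv 1}^*M\vv{p^*q},$$
since the row of $M$ indexed by the empty word is $(y_v)_v$ and $p^*q=\sum_{u,v}p_u q_v\,u^*v$; the second claim in (1) then follows because $(y_w)$ is constant on cyclic equivalence classes.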
We can identify the kernel of a tracial moment matrix $M$ with the subset of $\mathbb R \ax$
given by
\begin{equation}\label{eq:momKer}
I:=\{p\in \mathbb R \ax\mid M\vv p=0\}.
\end{equation}
\begin{prop}\label{lem:kerideal} Let $M\succeq0$ be a tracial moment matrix. Then
\begin{equation}\label{kerideal}
I=\{p\in \mathbb R \ax\mid \langle M\vv{p},\vv{p}\rangle=0\}.
\end{equation}
Further, $I$
is a two-sided ideal of $\mathbb R \ax$ invariant under the involution.
\end{prop}
\begin{proof}
Let $J:=\{p\in \mathbb R \ax\mid \langle M\vv{p},\vv{p}\rangle=0\}$. The implication
$I\subseteq J$ is obvious. Let $p\in J$ be given and $k=\deg p$.
Since $M$ and thus $M_k$ for each $k\in \mathbb N$ is positive semidefinite, the square root
$\sqrt{M_k}$ of $M_k$ exists. Then
$0=\langle M_k\vv{p},\vv p\rangle=\langle\sqrt{M_k}\vv{p}, \sqrt{M_k}\vv{p}\rangle$ implies
$\sqrt{M_k}\vv{p}=0$. This leads to $M_k\vv{p}=M\vv p=0$, thus $p\in I$.
To prove that $I$ is a two-sided ideal, it suffices to show that $I$ is a right-ideal
which is closed under *. To do this, consider the bilinear map
$$ \langle p,q\rangle_M:= \langle M\vv{p},\vv{q}\rangle$$ on $\mathbb R \ax$, which is a semi-scalar
product. By Lemma \ref{lem:mk}, we get that
$$\langle pq,pq\rangle_M=((pq)^*pq)(y)=(qq^*p^*p)(y)= \langle pqq^*,p\rangle_M.$$
Then by the Cauchy-Schwarz inequality it follows that for $p\in I$, we have
$$0\leq \langle pq,pq\rangle_M^2=\langle pqq^*,p\rangle_M^2\leq
\langle pqq^*,pqq^*\rangle_M\langle p,p\rangle_M=0.$$
Hence $pq\in I$, i.e., $I$ is a right-ideal.
Since $p^*p\stackrel{\mathrm{cyc}}{\thicksim} pp^*$, we obtain from Lemma \ref{lem:mk} that
$$\langle M\vv{p},\vv{p} \rangle=\langle p,p \rangle_M=(p^*p)(y)=(pp^*)(y)=\langle p^*,p^*
\rangle_M=
\langle M{\vv p}^*,{\vv p}^* \rangle.$$ Thus if $p\in I$ then also $p^*\in I$.
\end{proof}
In the \emph{commutative} context, the kernel of $M$ is a real radical ideal if $M$ is positive
semidefinite as observed by Scheiderer (cf.~\cite[p.~2974]{laurent2}).
The next proposition gives a description of
the kernel of $M$ in the non-commutative setting, and could be helpful in
defining a non-commutative real radical ideal.
\begin{prop}\label{prop:radical}
For the ideal $I$ in \eqref{eq:momKer} we have
$$I=\{f\in \mathbb R \ax\mid (f^*f)^k\in I \;\text{for some}\;k\in \mathbb N\}.$$
Further,
$$I=\{f\in \mathbb R \ax\mid (f^*f)^{2k}+\sum g_i^*g_i\in I \;\text{for some}\;k\in \mathbb N, g_i\in \mathbb R \ax\}.
$$
\end{prop}
\begin{proof}
If $f\in I$ then also $f^*f\in I$ since $I$ is an ideal. If $f^*f\in I$ we have
$M\vv{f^*f}=0$ which implies by Lemma \ref{lem:mk} that
$$0={\vv 1}^*M\vv{f^*f}={\vv f}^*M\vv{f}=\langle Mf,f\rangle.$$
Thus $f\in I$.
If $(f^*f)^k\in I$ then also $(f^*f)^{k+1}\in I$. So without loss of generality let $k$ be even.
From $(f^*f)^k\in I$ we obtain
$$0={\vv 1}^*M\vv{(f^*f)^k}={\vv{(f^*f)^{k/2}}}^*M\vv{(f^*f)^{k/2}},$$ implying
$(f^*f)^{k/2}\in I$. This leads to $f\in I$ by induction.
To show the second statement let $(f^*f)^{2k}+\sum g_i^*g_i\in I$. This leads to
$${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}+\sum_i {\vv{g_i}}^*M\vv{g_i}=0.$$ Since
$M(y)\succeq0$ we have ${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}\geq 0$ and
${\vv{g_i}}^*M\vv{g_i}\geq 0.$ Thus ${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}=0$
(and ${\vv{g_i}}^*M\vv{g_i}= 0$) which implies $f\in I$ as above.
\end{proof}
In the commutative setting one uses the Riesz representation theorem for
some set of continuous functions (vanishing at infinity or with compact support)
to show the existence of a representing measure. We will use the Riesz
representation theorem for positive linear functionals on a
finite-dimensional Hilbert space.
\begin{dfn}
Let $\mathcal A$ be an $\mathbb R$-algebra with involution. We call a linear map
$L:\mathcal A\to \mathbb R$ a \emph{state} if
$L(1)=1$, $L(a^*a)\geq0$ and $L(a^*)=L(a)$ for all $a\in\mathcal A$.
If all the commutators have value $0$, i.e., if $L(ab)=L(ba)$ for all
$a,b\in \mathcal A$, then $L$ is called a \emph{tracial state}.
\end{dfn}
With the aid of the Artin-Wedderburn theorem we shall
characterize tracial states on matrix $*$-algebras in Proposition
\ref{prop:convtrace}.
This will enable us to prove the existence of a tracial moment representation for
tracial sequences with a finite rank tracial moment matrix; see Theorem
\ref{thm:finiterank}.
\begin{remark}\label{rem:aw}
The only central simple algebras over $\mathbb R$ are full matrix
algebras over $\mathbb R$, $\mathbb C$ or $\mathbb H$ (combine the Frobenius theorem
\cite[(13.12)]{Lam} with the Artin-Wedderburn theorem \cite[(3.5)]{Lam}).
In order to understand ($\mathbb R$-linear) tracial states on these, we recall
some basic Galois theory.
Let
$$\Trd_{\mathbb C/\mathbb R}:\mathbb C\to\mathbb R, \quad z\mapsto\frac 12(z+\bar z) $$
denote the \emph{field trace} and
$$\Trd_{\mathbb H/\mathbb R}:\mathbb H\to\mathbb R,\quad z\mapsto\frac12(z+\bar z)$$
the \emph{reduced trace} \cite[p.~5]{boi}.
Here the Hamilton quaternions $\mathbb H$ are endowed with the \emph{standard
involution}
$$
z=a+\mathbbm i b+\mathbbm j c+\mathbbm k d \mapsto a-\mathbbm i b-\mathbbm j c-\mathbbm k d = \bar z
$$
for $a,b,c,d\in\mathbb R$.
We extend the canonical involution on $\mathbb C$ and $\mathbb H$ to the conjugate
transpose involution $*$ on matrices
over $\mathbb C$ and $\mathbb H$, respectively.
Composing the field trace and reduced trace, respectively, with the normalized
trace, yields an $\mathbb R$-linear map from $\mathbb C^{t\times t}$ and
$\mathbb H^{t\times t}$, respectively, to $\mathbb R$. We will denote it simply
by $\Tr$. A word of \emph{caution}:
$\Tr(A)$ does not denote the (normalized) matricial trace
over $\mathbb K$
if $A\in \mathbb K^{t\times t}$ and $\mathbb K\in\{\mathbb C,\mathbb H\}$.
\end{remark}
An alternative description of $\Tr$ is given by the following lemma:
\begin{lemma}\label{lem:convtrace}
Let $\mathbb K\in\{\mathbb R,\mathbb C,\mathbb H\}$. Then
the only $(\mathbb R$-linear$)$ tracial state on $\mathbb K^{t\times t}$ is $\Tr$.
\end{lemma}
\begin{proof}
An easy calculation shows that $\Tr$ is indeed a tracial state.
Let $L$ be a tracial state on $\mathbb R^{t\times t}$.
By the Riesz representation theorem there exists a positive
semidefinite matrix $B$ with $\Tr(B)=1$ such that $$L(A)=\Tr(BA)$$ for all
$A\in\mathbb R^{t\times t}$.
Write $B=\begin{pmatrix}b_{ij}\end{pmatrix}_{i,j=1}^{t}$.
Let
$i\neq j$.
Then $A=\lambda E_{ij}$ has zero trace for every
$\lambda\in \mathbb R$ and is thus a sum of commutators; indeed, $\lambda E_{ij}=[\lambda E_{ii},E_{ij}]$.
(Here $E_{ij}$ denotes the $t\times t$ \emph{matrix unit} with a one
in the $(i,j)$-position and zeros elsewhere.)
Since a tracial state vanishes on commutators,
$$\lambda b_{ij} = L(A) = 0.$$
Since $\lambda\in\mathbb R$ was arbitrary, $b_{ij}=0$.
Now let $A=\lambda (E_{ii}-E_{jj})$. Clearly,
$\Tr(A)=0$ and hence $$\lambda(b_{ii}-b_{jj})= L(A)= 0.$$
As before, this gives $b_{ii}=b_{jj}$. So $B$ is scalar,
and $\Tr(B)=1$. Hence it is the
identity matrix. In particular, $L=\Tr$.
If $L$ is a tracial state on $\mathbb C^{t\times t}$,
then $L$ induces a tracial state on $\mathbb R^{t\times t}$,
so $L_0:=L|_{\mathbb R^{t\times t}}=\Tr$ by the above.
Extend $L_0$ to
$$L_1:\mathbb C^{t\times t} \to \mathbb R,
\quad A+\mathbbm i B\mapsto L_0(A)=\Tr(A) \quad\text{for } A,B\in\mathbb R^{t\times t}.
$$
$L_1$ is a tracial state on $\mathbb C^{t\times t}$ as a
straightforward computation
shows. As $\Tr(A)=\Tr(A+\mathbbm i B)$, all we need to show is that $L_1=L$.
Clearly, $L_1$ and $L$ agree on the vector space spanned
by all commutators in $\mathbb C^{t\times t}$. This space is (over $\mathbb R$)
of codimension $2$. By construction, $L_1(1)=L(1)=1$ and
$L_1(\mathbbm i)=0$. On the other hand,
$$L(\mathbbm i)=L(\mathbbm i^*)=-L(\mathbbm i)$$ implying $L(\mathbbm i)=0$.
This shows $L=L_1=\Tr$.
The remaining case of tracial states over $\mathbb H$ is dealt
with
similarly and is left as an exercise for the reader.
\end{proof}
\begin{remark}\label{rem:real}
Every complex number $z=a+\mathbbm i b$ can be represented
as a $2\times 2$ real matrix
$z'=\left(\begin{smallmatrix} a & b \\ -b & a\end{smallmatrix}\right)$.
This gives rise to
an $\mathbb R$-linear $*$-map
$\mathbb C^{t\times t}\to \mathbb R^{(2t)\times(2t)}$ that commutes with $\Tr$.
A similar property holds if quaternions
$a+\mathbbm i b+\mathbbm j c+\mathbbm k d$
are represented by the $4\times 4$ real matrix
$$\left(\begin{smallmatrix}
a & b & c & d \\
-b & a & -d & c \\
-c & d & a & -b \\
-d & -c & b & a
\end{smallmatrix}\right).$$
\end{remark}
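As a quick check in the scalar case: for $z=a+\mathbbm i b$ the matrix $z'$ above satisfies $\Tr(z')=\frac12(a+a)=a=\Trd_{\mathbb C/\mathbb R}(z)$, and the normalized trace of the $4\times4$ real representation of a quaternion $a+\mathbbm i b+\mathbbm j c+\mathbbm k d$ equals $a=\Trd_{\mathbb H/\mathbb R}(a+\mathbbm i b+\mathbbm j c+\mathbbm k d)$.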
\begin{prop}\label{prop:convtrace}
Let $\mathcal A$ be a $*$-subalgebra of $ \mathbb R^{t\times t}$ for some $t\in \mathbb N$ and
$L:\mathcal A\to \mathbb R$ a tracial state.
Then there exist
full matrix algebras $\mathcal A^{(i)}$ over $\mathbb R$, $\mathbb C$ or $\mathbb H$,
a $*$-isomorphism
\begin{equation}\label{eq:iso}
\mathcal A\to\bigoplus_{i=1}^N \mathcal A^{(i)},
\end{equation}
and $\lambda_1,\dots, \lambda_N\in \mathbb R_{\geq0}$ with $\sum_i \lambda_i=1$, such that for all
$A\in \mathcal A$,
$$L(A)=\sum_{i=1}^N \lambda_i\Tr(A^{(i)}).$$
Here, $\bigoplus_i A^{(i)} =\left(\begin{smallmatrix} A^{(1)} \\ & \ddots \\ & & A^{(N)}
\end{smallmatrix}\right)$ denotes the image of $A$ under the isomorphism
\eqref{eq:iso}. The size of $($the real representation of$)$ $\bigoplus_i A^{(i)}$ is
at most $t$.
\end{prop}
\begin{proof}
Since $L$ is tracial,
$L(U^*AU)=L(A)$ for all orthogonal $U\in\mathbb R^{t\times t}$.
Hence we can apply orthogonal transformations to $\mathcal A$
without changing the values of $L$.
So $\mathcal A$ can be transformed into block diagonal form
as in \eqref{eq:iso}
according to its invariant subspaces.
That is, each of the blocks $\mathcal A^{(i)}$
acts irreducibly on a subspace of $\mathbb R^t$ and is thus
a central
simple algebra (with involution) over $\mathbb R$.
The involution on $\mathcal A^{(i)}$ is induced by the
conjugate transpose involution. (Equivalently, by the
transpose on the real matrix representation in the complex
or quaternion case.)
Now $L$ induces (after a possible normalization) a tracial state on the block
$\mathcal A^{(i)}$ and hence by Lemma \ref{lem:convtrace}, we have
$L_i:=L|_{\mathcal A^{(i)}}=\lambda_i \Tr$ for some $\lambda_i\in\mathbb R_{\geq0}$.
Then
\[
L(A)=L\big(\bigoplus_i A^{(i)}\big)=\sum_i L_i\big(A^{(i)}\big)
= \sum_i \lambda_i \Tr\big(A^{(i)}\big)
\]
and
$1=L(1)=\sum_i \lambda_i$.
\end{proof}
The following theorem is the tracial version of the representation theorem
of Curto and Fialkow for moment matrices with finite rank \cite{cffinite}.
\begin{thm}\label{thm:finiterank}
Let $y=(y_w)$ be a tracial sequence with positive semidefinite
moment matrix $M(y)$ of finite rank $t$. Then $y$ is a tracial moment
sequence, i.e., there exist vectors
$\ushort A^{(i)}=(A_1^{(i)},\dots,A_n^{(i)})$ of symmetric matrices $A_j^{(i)}$
of size at most $t$ and $\lambda_i\in \mathbb R_{\geq0}$ with $\sum \lambda_i=1$
such that $$y_w=\sum \lambda_i \Tr(w(\ushort A^{(i)})).$$
\end{thm}
\begin{proof}
Let $M:=M(y)$. We equip $\mathbb R\ax$ with the bilinear form given by
$$\langle p,q\rangle_M:=\langle M\vv{p},\vv{q} \rangle={\vv{q}}^*M\vv p.$$ Let
$I=\{p\in \mathbb R\ax\mid \langle p,p\rangle_M=0\}.$ Then by Proposition \ref{lem:kerideal},
$I$ is an ideal of $\mathbb R \ax$. In particular, $I=\ker \varphi_M$ for
$$\varphi_M:\mathbb R \ax\to \ran M,\quad p\mapsto M\vv{p}.$$ Thus if we define
$E:=\mathbb R \ax/I$, the induced linear map
$$\overline\varphi_M:E\to \ran M,\quad \overline p\mapsto M\vv{p}$$
is an isomorphism and $$\dim E=\dim(\ran M)=\rank M=t<\infty.$$ Hence
$(E,\langle$\textvisiblespace ,\textvisiblespace $\rangle_E)$ is a finite-dimensional
Hilbert space for
$\langle \bar p,\bar q\rangle_E={\vv{q}}^*M\vv{p}$.
Let $\hat X_i$ be the right multiplication with $X_i$ on $E$, i.e.,
$\hat X_i \overline p:=\overline{pX_i}$. Since
$I$ is a right ideal of $\mathbb R \ax$, the operator $\hat X_i$ is well defined.
Further, $\hat X_i$ is symmetric since
\begin{align*}
\langle \hat X_i \overline p,\overline q \rangle_E&=\langle M \vv{pX_i},\vv{q} \rangle
= (X_ip^*q)(y)\\
&=(p^*qX_i)(y)=\langle M \vv{p},\vv{qX_i} \rangle=\langle\overline p,\hat X_i\overline q \rangle_E.
\end{align*}
Thus each $\hat X_i$, acting on a $t$-dimensional vector space, has a representation matrix
$A_i\in \sym \mathbb R^{t\times t}$.
Let $\mathcal B=B(\hat X_1,\dots,\hat X_n)=B(A_1,\dots,A_n)$ be the algebra of
operators generated by $\hat X_1,\dots,\hat X_n$. These operators can be written
as $$\hat p=\sum_{w\in\ax} p_w \hat{w}$$ for some $p_w\in \mathbb R$,
where $\hat w=\hat X_{w_1}\cdots \hat X_{w_s}$ for $w=X_{w_1}\cdots X_{w_s}$.
Observe that $\hat{w}=w(A_1,\dots,A_n)$.
We define the linear functional $$L:\mathcal B\to\mathbb R,\quad
\hat p\mapsto {\vv{1}}^*M\vv p=p(y),$$
which is a state on $\mathcal B$.
Since $y_w=y_u$ for $w\stackrel{\mathrm{cyc}}{\thicksim} u$, it follows that $L$ is tracial. Thus by Proposition
\ref{prop:convtrace} (and Remark \ref{rem:real}), there exist
$\lambda_1,\dots,\lambda_N\in \mathbb R_{\geq0}$ with $\sum_i\lambda_i=1$ and real symmetric matrices $A_j^{(i)}$
$(i=1,\ldots,N$)
for each $A_j\in \sym \mathbb R^{t\times t}$, such that for all $w\in \ax$,
$$y_w=w(y)=L(\hat w)=\sum_i \lambda_i \Tr(w(\ushort A^{(i)})),$$
as desired.
\end{proof}
The sufficient conditions on $M(y)$ in Theorem \ref{thm:finiterank} are also
necessary for $y$ to be a tracial moment sequence. Thus we get our first
characterization of tracial moment sequences:
\begin{cor}\label{cor:finite}
Let $y=(y_ w)$ be a tracial sequence. Then $y$ is a tracial moment sequence
if and only if $M(y)$ is positive semidefinite and of finite rank.
\end{cor}
\begin{proof}
If $y_ w=\Tr( w(\ushort A))$ for some $\ushort A=(A_1,\dots,A_n)\in(\sym \mathbb R^{t\times t})^n$,
then $$L(p)=\sum_ w p_ w y_ w=\sum_ w p_ w \Tr( w(\ushort A))=
\Tr(p(\ushort A)).$$
Hence
\begin{align*}
{\vv p}^*M(y)\vv{p}&=L(p^*p)=\Tr(p^*(\ushort A)p(\ushort A))\geq0
\end{align*}
for all $p \in \mathbb R\ax$.
Further, the tracial moment matrix $M(y)$ has rank at most $t^2$.
This can be seen as follows:
$M$ induces a linear map
$$\Phi:\mathbb R \ax\rightarrow\mathbb R \ax^*,\quad p\mapsto\Big(q\mapsto \Tr\big((q^*p)(\ushort A)\big)\Big),$$
where $\mathbb R \ax^*$ is the dual space of $\mathbb R \ax$. This implies
$$\rank M=\dim (\ran\Phi)=\dim(\mathbb R \ax/\ker\Phi).$$
The kernel of the evaluation map
$\varepsilon_{\ushort A}:\mathbb R\ax\rightarrow\mathbb R^{t\times t}$, $p\mapsto p(\ushort A)$
is a subset of $\ker \Phi$. In particular,
\[\dim(\mathbb R\ax/\ker\Phi)\leq \dim(\mathbb R\ax/\ker\varepsilon_{\ushort A})=\dim(\ran \varepsilon_{\ushort A})\leq t^2. \]
The same holds true for each convex combination $y_w=\sum_i \lambda_i \Tr( w(\ushort A^{(i)}))$.
The converse is Theorem \ref{thm:finiterank}.
\end{proof}
\begin{dfn}\label{defflat}
Let $A\in \sym\mathbb R^{t\times t}$ be given. A (symmetric) extension of $A$ is a matrix
$\tilde A\in \sym\mathbb R^{(t+s)\times (t+s)}$ of the form
$$\tilde A=\begin{pmatrix} A &B \\ B^* & C\end{pmatrix} $$
for some $B\in \mathbb R^{t\times s}$ and $C\in \mathbb R^{s\times s}$.
Such an extension is \emph{flat} if $\rank A=\rank\tilde A$,
or, equivalently, if $B = AW$ and $C = W^*AW$ for some matrix $W$.
\end{dfn}
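A minimal illustration: for $A=\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$, $B=\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ and $C=(1)$, the extension $\tilde A$ is flat, as witnessed by $W=\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ with $B=AW$ and $C=W^*AW$; choosing $C=(2)$ instead gives $\rank\tilde A=2>\rank A$, so that extension is not flat.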
The kernel of a flat extension $M_k$ of a tracial moment matrix $M_{k-1}$
has some (truncated) \emph{ideal-like properties} as
shown in the following lemma.
\begin{lemma}\label{lem:flatrideal}
Let $f\in \mathbb R \ax$ with $\deg f\leq k-1$ and let $M_k$ be a flat extension of $M_{k-1}$.
If $f\in\ker M_k$ then $fX_i,X_if\in \ker M_k$.
\end{lemma}
\begin{proof}
Let $f=\sum_w f_w w$. Then for $v\in \ax_{k-1}$, we have
\begin{equation}\label{eqker}
(M_k\vv{fX_i})_v =\sum_w f_w y_{v^*wX_i}=
\sum_w f_w y_{(vX_i)^*w}=(M_k \vv f)_{vX_i}=0.
\end{equation}
The matrix $M_k$ is of the form $M_k=\left(\begin{smallmatrix} M_{k-1}&B\\B^*&C\end{smallmatrix}\right)$.
Since $M_k$ is a flat extension,
$\ker M_k=\ker \begin{pmatrix} M_{k-1}&B\end{pmatrix}$.
Thus by \eqref{eqker},
$fX_i\in \ker \begin{pmatrix} M_{k-1}&B\end{pmatrix}=\ker M_k$.
For $X_if$ we obtain analogously that
$$(M_k\vv{X_if})_v =\sum_w f_w y_{v^*X_iw}=
\sum_w f_w y_{(X_iv)^*w}=(M_k \vv f)_{X_iv}=0$$
for $v\in \ax_{k-1}$, which implies $X_if\in \ker M_k$.
\end{proof}
We are now ready to prove the tracial version of the flat extension theorem of
Curto and Fialkow \cite{cfflat}.
\begin{thm}\label{thm:flatextension}
Let $y=(y_w)_{\leq 2k}$ be a truncated tracial sequence of order $2k$. If
$\rank M_k(y)=\rank M_{k-1}(y)$, then there exists
a unique tracial extension $\tilde y=(\tilde y_w)_{\leq 2k+2}$ of $y$ such that
$M_{k+1}(\tilde y)$ is a flat extension of $M_k(y)$.
\end{thm}
\begin{proof}
Let $M_k:=M_k(y)$.
We will construct a flat extension $M_{k+1}:=\left(\begin{smallmatrix} M_k&B\\B^*&C\end{smallmatrix}\right)$
such that $M_{k+1}$ is a tracial moment matrix. Since
$M_k$ is a flat extension of $M_{k-1}(y)$ we can find a basis $b$ of
$\ran M_k$ consisting of columns of $M_k$ labeled by $w$ with $\deg w\leq k-1$.
Thus the range of $M_k$ is completely determined by the range of $M_k|_{\spann b}$,
i.e., for each $p\in \mathbb R \ax$ with $\deg p\leq k$ there exists a \emph{unique}
$r\in \spann b$ such that
$M_k\vv p=M_k \vv r$; equivalently, $p-r\in \ker M_k$.
Let $v\in\ax$, $\deg v=k+1$, $v=v'X_i$ for some $i\in \{1,\dots,n\}$ and $v'\in \ax$
with $\deg v'=k$.
For $v'$ there exists an $r\in \spann b$ such that $v'-r\in \ker M_k$.
\emph{If} there exists a flat extension $M_{k+1}$, then by Lemma \ref{lem:flatrideal},
from $v'-r\in \ker M_k\subseteq\ker M_{k+1}$ it
follows that $(v'-r)X_i\in \ker M_{k+1}$. Hence the desired flat extension
has to satisfy
\begin{equation}\label{eqflatcond}
M_{k+1}\vv{v}=M_{k+1}\vv{rX_i}=M_k\vv{rX_i}.
\end{equation}
Therefore we define
\begin{equation}\label{eq:sabinedefinesB}
B\vv{v}:=M_k\vv{rX_i}.
\end{equation}
More precisely, let $(w_1,\dots,w_\ell)$ be the
basis of $M_k$, i.e., $(M_k)_{i,j}=y_{w_i^*w_j}$. Let $r_{w_i}$
be the unique element in $\spann b$ with $ w_i-r_{ w_i}\in \ker M_k$.
Then $B=M_kW$ with
$W=(r_{ w_1X_{i_1}},\dots,r_{ w_\ell X_{i_\ell}})$ and we define
\begin{equation}\label{eq:sabinedefinesC}
C:=W^*M_kW.
\end{equation}
Since the $r_{ w_i}$ are uniquely determined,
\begin{equation}\label{eq:sabinedefinesMk+1}
M_{k+1}=\left(\begin{smallmatrix} M_k&B\\B^*&C\end{smallmatrix}\right)
\end{equation}
is well-defined. The constructed $M_{k+1}$ is a flat extension of
$M_k$, and
$M_{k+1}\succeq0$ if and only if $M_k\succeq0$, cf.~\cite[Proposition 2.1]{cfflat}.
Moreover, once $B$ is chosen, there is only one $C$ making
$M_{k+1}$ as in \eqref{eq:sabinedefinesMk+1} a flat extension of $M_k$.
This follows from general
linear algebra, see e.g.~\cite[p.~11]{cfflat}. Hence $M_{k+1}$ is the
\emph{only} candidate for a flat extension.
Therefore we are done if $M_{k+1}$ is a tracial moment matrix, i.e.,
\begin{equation}
(M_{k+1})_w=(M_{k+1})_v \;\text{ whenever}\; w\stackrel{\mathrm{cyc}}{\thicksim} v. \label{mm}
\end{equation}
To show this we prove that $(M_{k+1})_{X_iw}=(M_{k+1})_{wX_i}$. Then \eqref{mm}
follows recursively.
Let $w=u^*v$. If $\deg u,\deg vX_i\leq k$ there is nothing to show since
$M_k$ is a tracial moment matrix. If $\deg u\leq k$ and $\deg vX_i=k+1$ there exists
an $r\in \spann b$ such that $r-v\in \ker M_{k-1}$, and by Lemma \ref{lem:flatrideal},
also $vX_i-rX_i\in \ker M_k$. Then we get
\begin{align*}
(M_{k+1})_{u^*vX_i}&=\vv{u}^*M_{k+1}\vv{vX_i}=\vv{u}^*M_{k+1}\vv{rX_i}
=\vv{u}^*M_{k}\vv{rX_i}\\
&=(M_k)_{u^*rX_i}
=(M_k)_{X_iu^*r}
=(M_k)_{(uX_i)^*r}\\
&\overset{(\ast)}{=}{\vv{uX_i}}^*M_{k+1}\vv{v}=(M_{k+1})_{(uX_i)^*v}
=(M_{k+1})_{X_iw},
\end{align*}
where equality $(\ast)$ holds by \eqref{eqflatcond}, which by construction yields the conclusion of Lemma
\ref{lem:flatrideal}.
If $\deg u=\deg vX_i=k+1$, write $u=X_ju'$. Further, there exist $s,r\in \spann b$ with
$u'-s\in \ker M_{k-1}$ and $r-v\in \ker M_{k-1}$. Then
\begin{align*}
(M_{k+1})_{u^*vX_i}&=\vv{X_ju'}^*M_{k+1}\vv{vX_i}=\vv{X_js}^*M_{k}\vv{rX_i}\\
&=(M_k)_{s^*X_jrX_i}=(M_k)_{(sX_i)^*(X_jr)}\\
&\overset{(*)}{=}\vv{uX_i}^*M_{k+1}\vv{X_jv}=(M_{k+1})_{(uX_i)^*X_jv}
=(M_{k+1})_{X_i w}.
\end{align*}
Finally, the construction of $\tilde y$ from $M_{k+1}$ is clear.
\end{proof}
\begin{cor}\label{cor:flat}
Let $y=(y_ w)_{\leq 2k}$ be a truncated tracial sequence. If
$M_k(y)$ is positive semidefinite
and $M_k(y)$ is a flat extension of $M_{k-1}(y)$, then $y$
is a truncated tracial moment sequence.
\end{cor}
\begin{proof}
By Theorem \ref{thm:flatextension} we can extend $M_k(y)$ inductively
to a positive semidefinite moment matrix $M(\tilde y)$ with
$\rank M(\tilde y)=\rank M_k(y)<\infty$. Thus $M(\tilde y)$ has finite
rank and by Theorem \ref{thm:finiterank}, there exists a tracial moment
representation
of $\tilde y$. Therefore $y$ is a truncated tracial moment sequence.
\end{proof}
The following two corollaries give characterizations of tracial
moment matrices coming from tracial moment sequences.
\begin{cor}\label{cor:flatall}
Let $y=(y_ w)$ be a tracial sequence. Then $y$
is a tracial moment sequence if and only if $M(y)$ is positive semidefinite and there
exists some $N\in \mathbb N$ such that $M_{k+1}(y)$ is a flat extension of
$M_{k}(y)$ for all $k\geq N$.
\end{cor}
\begin{proof}
If $y$ is a tracial moment sequence then by Corollary \ref{cor:finite},
$M(y)$ is positive semidefinite and has finite rank $t$. Thus there exists an
$N\in \mathbb N$ such that $t=\rank M_N(y)$.
In particular, $\rank M_k(y)=\rank M_{k+1}(y)=t$ for all $k\geq N$, i.e., $M_{k+1}(y)$
is a flat extension of $M_k(y)$ for all $k\geq N$.
For the converse, let $N$ be given such that $M_{k+1}(y)$ is a flat extension of
$M_{k}(y)$ for all $k\geq N$. By Theorem \ref{thm:flatextension}, the (iterated)
unique extension $\tilde y$ of $(y_w)_{\leq 2k}$ for $k\geq N$ is equal to $y$.
Otherwise there exists a flat extension $\tilde y$ of $(y_w)_{\leq 2\ell}$
for some $\ell\geq N$ such that $M_{\ell+1}(\tilde y)\succeq 0$ is a flat extension
of $M_\ell(y)$ and $M_{\ell+1}(\tilde y)\neq M_{\ell+1}(y)$ contradicting the
uniqueness of the extension in Theorem \ref{thm:flatextension}.
Thus $M(y)\succeq 0$ and $\rank M(y)=\rank M_N(y)<\infty$. Hence by Theorem \ref{thm:finiterank},
$y$ is a tracial moment sequence.
\end{proof}
\begin{cor}\label{cor:flatt}
Let $y=(y_ w)$ be a tracial sequence. Then $y$
has a tracial moment representation with matrices of size at most
$t:=\rank M(y)$ if
$M_N(y)$ is positive semidefinite and $M_{N+1}(y)$ is
a flat extension of $M_{N}(y)$ for some $N\in \mathbb N$ with $\rank M_N(y)=t$.
\end{cor}
\begin{proof}
Since $\rank M(y)=\rank M_N(y)=t,$
each $M_{k+1}(y)$ with $k\geq N$ is a flat extension of $M_k(y)$.
As $M_N(y)\succeq0$, all $M_k(y)$
are positive semidefinite.
Thus $M(y)$ is also positive semidefinite. Indeed, let
$p\in\mathbb R\ax$
and $\ell=\max\{\deg p,N\}$. Then
${\vv p}^*M(y)\vv p={\vv p}^*M_\ell(y)\vv p\geq0$.
Thus by Corollary \ref{cor:flatall}, $y$ is a tracial moment sequence. The
representing matrices can be chosen to be of size at most $\rank M(y)=t$.
\end{proof}
\section{Positive definite moment matrices and trace-positive polynomials}\label{sec:poly}
In this section we explain how the representability of \emph{positive definite}
tracial moment matrices relates
to sum of hermitian squares representations of
trace-positive polynomials. We start by introducing some terminology.
An element of the form $g^*g$ for some $g\in\mathbb R\ax$ is called a
\textit{hermitian square} and we denote the set of all sums of hermitian
squares by
$$\Sigma^2=\{f\in\mathbb R\ax\mid f=\sum g_i^*g_i \;\text{for some}\; g_i\in\mathbb R\ax\}.$$
A polynomial $f\in \mathbb R \ax$ is \emph{matrix-positive} if $f(\ushort A)$ is positive
semidefinite for all tuples $\ushort A$ of symmetric matrices
$A_i\in \sym \mathbb R^{t\times t}$, $t\in\mathbb N$. Helton \cite{helton} proved that $f\in\mathbb R\ax$ is
matrix-positive if and only if $f\in \Sigma^2$ by solving a non-commutative
moment problem; see also \cite{McC}.
We are interested in a different type of positivity induced by
the trace.
\begin{dfn}\label{def:trpos}
A polynomial $f\in \mathbb R \ax$ is called \emph{trace-positive} if
$$\Tr(f(\ushort A))\geq 0\;\text{ for all}\; \ushort A\in(\sym\mathbb R^{t\times t})^n,\; t\in\mathbb N.$$
\end{dfn}
Trace-positive polynomials are intimately connected to deep open
problems from
e.g.~operator algebras (Connes' embedding conjecture \cite{ksconnes})
and mathematical physics (the Bessis-Moussa-Villani conjecture
\cite{ksbmv}), so a good understanding of this set is needed.
A distinguished subset is formed by sums of hermitian squares and
commutators.
\begin{dfn}
Let $\Theta^2$ be the set of all polynomials which are cyclically
equivalent to a sum of hermitian squares, i.e.,
\begin{equation}\label{eq:defcycsohs}
\Theta^2=\{f\in \mathbb R\ax\mid f\stackrel{\mathrm{cyc}}{\thicksim}\sum g_i^*g_i\;\text{for some}\;g_i \in\mathbb R\ax\}.
\end{equation}
\end{dfn}
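For instance, in two variables $X^2Y^2\in\Theta^2\setminus\Sigma^2$: it is not symmetric, hence not a sum of hermitian squares, but $X^2Y^2\stackrel{\mathrm{cyc}}{\thicksim}XY^2X=(YX)^*(YX)$.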
Obviously, all $f\in \Theta^2$ are trace-positive. However, in contrast to
Helton's sum of squares theorem mentioned above, the following
non-commutative version of the well-known Motzkin polynomial \cite[p.~5]{Mar} shows that
a trace-positive polynomial need not be a member of $\Theta^2$ \cite{ksconnes}.
\begin{example}\label{motznc}
Let $$M_{\rm nc}=XY^4X+YX^4Y-3XY^2X+1\in\mathbb R\axy.$$ Then $M_{\rm nc}\notin \Theta^2$ since
the commutative Motzkin polynomial is not a (commutative) sum of squares \cite[p.~5]{Mar}.
The fact that $M_{\rm nc}(A,B)$ has nonnegative trace for all symmetric matrices $A,B$
has been shown by Schweighofer and the second author \cite[Example 4.4]{ksconnes} using
Putinar's
Positivstellensatz \cite{Put}.
\end{example}
Let $\Sigma_k^2:=\Sigma^2\cap \mathbb R \ax_{\leq 2k}$ and $\Theta_k^2:=\Theta^2\cap \mathbb R \ax_{\leq 2k}$.
These are convex cones in $\mathbb R \ax_{\leq 2k}$.
By duality there exists a connection
between $\Theta_k^2$ and positive semidefinite tracial moment matrices of order $k$.
If every tracial moment matrix $M_k(y)\succeq0$ of order $k$ has a tracial representation
then every trace-positive polynomial of degree at most $2k$ lies in $\Theta_k^2$.
In fact:
\begin{thm}\label{thm:posdefmm}
The following statements are equivalent:
\begin{enumerate}[\rm (i)]
\item all truncated tracial sequences $(y_ w)_{\leq 2k}$ with
{\rm{positive definite}} tracial moment matrix $M_k(y)$ have a tracial moment representation \eqref{rep};
\item all trace-positive polynomials of degree $\leq2k$ are elements of $\Theta^2_k$.
\end{enumerate}
\end{thm}
For the proof we need some preliminary work.
\begin{lemma}\label{lem:thetaclosed}
$\Theta_k^2$ is a closed convex cone in $\mathbb R \ax_{\leq 2k}$.
\end{lemma}
\begin{proof}
Endow $\mathbb R\ax_{\leq 2k}$ with a norm
$\|$\textvisiblespace $\|$ and the quotient space $\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$
with the quotient norm
\begin{equation}\label{eq:qnorm}
\| \pi(f) \| := \inf \big\{ \| f+h \| \mid h\stackrel{\mathrm{cyc}}{\thicksim} 0\big\}, \quad
f\in\mathbb R\ax_{\leq 2k}.
\end{equation}
Here $\pi:\mathbb R\ax_{\leq 2k}\to \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ denotes
the quotient map. (Note: due to the finite-dimensionality of $\mathbb R\ax_{\leq 2k}$,
the infimum on the right-hand side of \eqref{eq:qnorm} is attained.)
Since $\Theta_k^2= \pi^{-1} \big( \pi(\Theta_k^2)\big)$, it suffices
to show that $\pi(\Theta_k^2)$ is closed.
Let $d_k=\dim \mathbb R \ax_{\leq 2k}$. Since by Carath\'eodory's theorem \cite[p.~10]{bar} each element
$f\in \mathbb R \ax_{\leq 2k}$ can be written as a convex combination of $d_k+1$ elements
of $\mathbb R \ax_{\leq 2k}$, the image of
\begin{align*}
\varphi:\left(\mathbb R \ax_{\leq k}\right)^{d_k+1}
&\to
\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}\\
(g_i)_{i=0,\dots,d_k}
&\mapsto
\pi\big(\sum_{i=0}^{d_k}g_i^*g_i\big)
\end{align*}
equals $\pi(\Sigma^2_k)=\pi(\Theta_k^2)$. In $\left(\mathbb R \ax_{\leq k}\right)^{d_k}$ we define
$\mathcal S:=\{g=(g_i)\mid \|g\|=1\}$. Note that $\mathcal S$ is compact, thus
$V:=\varphi(\mathcal S)\subseteq \pi(\Theta_k^2)$ is compact as well.
Since $0\notin \mathcal S$,
and a sum of hermitian squares cannot be cyclically equivalent to $0$ by
\cite[Lemma 3.2 (b)]{ksbmv}, we see that
$0\notin V$.
Let $(f_\ell)_\ell$ be a sequence in $\pi(\Theta^2_k)$ which converges to $\pi(f)$
for some $f\in\mathbb R \ax_{\leq 2k}$.
Write $f_\ell=\lambda_\ell v_\ell$ for $\lambda_\ell\in\mathbb R_{\geq 0}$ and $v_\ell\in V$.
Since $V$ is compact there exists a subsequence $(v_{\ell_j})_j$ of $v_\ell$ converging
to $v\in V$. Then
$$\lambda_{\ell_j}=\frac{\|f_{\ell_j}\|}{\|v_{\ell_j}\|}\stackrel{j\rightarrow \infty}{\longrightarrow }\frac{\|f\|}{\|v\|}.$$
Thus $f_\ell\rightarrow f=\frac{\|f\|}{\|v\|}v\in\pi(\Theta^2_k)$.
\end{proof}
\begin{dfn}
To a truncated tracial sequence $(y_ w)_{\leq k}$ we
associate
the \emph{$($tracial$)$ Riesz functional} $L_y:\mathbb R \ax_{\leq k}\to\mathbb R$ defined by
$$L_y(p):=\sum_ w p_ w y_ w\quad\text{for } p=\sum_ w p_ w w\in \mathbb R\ax_{\leq k}.$$
We say that $L_y$ is \emph{strictly positive} ($L_y>0$), if
$$L_y(p)>0 \text{ for all trace-positive } p\in\mathbb R \ax_{\leq k},\, p\stackrel{\mathrm{cyc}}{\nsim} 0.$$
If $L_y(p)\geq0$ for all trace-positive $p\in\mathbb R \ax_{\leq k}$, then
$L_y$ is \emph{positive} ($L_y\geq0$).
\end{dfn}
Equivalently, a tracial Riesz functional $L_y$
is positive (resp., strictly positive) if and only if the map
$\bar L_y$ it induces on $ \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ is
nonnegative (resp., positive) on
the nonzero images of trace-positive polynomials in $ \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$.
We shall prove that strictly positive Riesz functionals lie in the interior of the cone
of positive Riesz functionals,
and that truncated tracial sequences $y$ with \emph{strictly}
positive $L_y$ are truncated tracial moment sequences (Theorem \ref{thm:Lrep} below).
These results are motivated by and resemble the
results of Fialkow and Nie
\cite[Section 2]{fnie} in the commutative context.
\begin{lemma}\label{lem:Linner}
If $L_y>0$ then there exists an $\varepsilon>0$ such that $L_{\tilde y}>0$ for all
$\tilde y$ with $\|y-\tilde y\|_1<\varepsilon$.
\end{lemma}
\begin{proof}
We equip $\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ with a quotient norm as in \eqref{eq:qnorm}.
Then $$\mathcal S:=\{\pi(p)\in \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}\mid p\in\mathcal C_k,\;\|\pi(p)\|=1\}$$ is compact.
By a scaling argument, it suffices to show that $\bar L_{\tilde y}>0$ on $\mathcal S$ for $\tilde y$ close to $y$.
The map $y\mapsto \bar L_y$ is linear between finite-dimensional vector spaces.
Thus
$$|\bar L_{y'}(\pi(p))-\bar L_{y''}(\pi(p))|\leq C \|y'-y''\|_1$$ for all $\pi(p)\in \mathcal S$,
truncated tracial moment sequences $y',y''$, and some $C\in\mathbb R_{>0}$.
Since $\bar L_y$ is continuous and strictly positive on $\mathcal S$,
there exists an $\varepsilon>0$ such
that $\bar L_y(\pi(p))\geq2\varepsilon$ for all $\pi(p)\in \mathcal S$.
Let $\tilde y$ satisfy $\|y-\tilde y\|_1<\frac {\varepsilon}C$.
Then
\[\bar L_{\tilde y}(\pi(p))\geq \bar L_y(\pi(p))-C \|y-\tilde y\|_1\geq\varepsilon>0. \hfill\qedhere \]
\end{proof}
\begin{thm}\label{thm:Lrep}
Let $y=(y_ w)_{\leq k}$ be a truncated tracial sequence of order $k$.
If $L_y>0$, then $y$ is a truncated tracial moment sequence.
\end{thm}
\begin{proof}
We show first that
$y\in \overline T$, where $\overline T$ is the closure of
$$T=\big\{(y_ w)_{\leq k}\mid \exists \ushort A^{(i)}\;\exists \lambda_i\in \mathbb R_{\geq0} :\; y_ w=\sum \lambda_i\Tr( w(\ushort A^{(i)}))\big\}.$$
Assume $L_y>0$ but $y\notin \overline T$. Since $\overline T$ is a closed
convex cone in $\mathbb R^\eta$ (for some $\eta\in \mathbb N$), by the Minkowski separation
theorem there exists a vector $\vv{p}\in \mathbb R^\eta$ such that $\vv{p}^*y<0$
and $\vv{p}^*w\geq 0$ for all $w\in \overline T$. The non-commutative
polynomial corresponding to $\vv{p}$ is
trace positive since $\vv{p}^*z\geq 0$ for all $z\in \overline T$. Thus
$0<L_y(p)=\vv{p}^*y<0$, a contradiction.
By Lemma \ref{lem:Linner}, $y\in\inte(\overline T)$. Thus $y\in \inte (\overline T)\subseteq T$
\cite[Theorem 25.20]{ber}.
\end{proof}
We remark that assuming only non-strict positivity of $L_y$ in Theorem \ref{thm:Lrep}
would not suffice for the existence of a tracial moment representation \eqref{rep}
for $y$. This is a consequence of Example \ref{expsd}.
\begin{proof}[Proof $(\!$of Theorem {\rm\ref{thm:posdefmm}}$)$]
To show (i) $\Rightarrow$ (ii), assume $f=\sum_ w f_ w w
\in\mathbb R\ax_{\leq 2k}$ is
trace-positive but $f\notin \Theta^2_k$.
By Lemma \ref{lem:thetaclosed}, $\Theta_k^2$ is a closed convex cone in $\mathbb R\ax_{\leq 2k}$, thus
by the Minkowski separation theorem we find a hyperplane which
separates $f$ and $\Theta_k^2$. That is, there is a linear form
$L:\mathbb R\ax_{\leq 2k}\to\mathbb R$ such that $L(f)<0$ and $L(p)\geq0$
for $p\in \Theta_k^2$. In particular, $L(q)=0$ for all $q\stackrel{\mathrm{cyc}}{\thicksim} 0$, i.e.,
without loss of generality, $L$ is tracial.
Since there are tracial states strictly positive on $\Sigma^2_k\setminus\{0\}$, we may assume $L(p)>0$
for all $p\in \Theta_k^2$, $p\stackrel{\mathrm{cyc}}{\nsim} 0$.
Hence
the bilinear form given by $$(p,q)\mapsto L(pq)$$ can be written as
$ L(pq)={\vv q}^*M\vv{p}$ for some truncated tracial moment matrix $M\succ0$.
By assumption, the corresponding truncated tracial sequence
$y$ has a tracial moment representation $$y_ w=\sum \lambda_i \Tr( w(\ushort A^{(i)}))$$
for some tuples $A^{(i)}$ of symmetric matrices $A_j^{(i)}$ and $\lambda_i\in \mathbb R_{\geq0}$
which implies the contradiction
$$0>L(f)=\sum \lambda_i \Tr(f(\ushort A^{(i)}))\geq 0.$$
Conversely, if (ii) holds,
then $L_y>0$ if and only if $M(y)\succ0$. Thus a positive definite moment matrix $M(y)$
defines a strictly positive functional $L_y$ which by Theorem \ref{thm:Lrep} has a tracial
representation.
\end{proof}
As mentioned above, the Motzkin polynomial $M_{\rm nc}$
is trace-positive but $M_{\rm nc}\notin \Theta^2$. Thus by Theorem \ref{thm:posdefmm}
there exists at least one truncated tracial moment matrix which is positive definite but has
no tracial representation.
\begin{example}
Taking the index set
$$(1,X,Y,X^2,XY,YX,Y^2,X^2Y,XY^2,YX^2,Y^2X,X^3,Y^3,XYX,YXY),$$
the
matrix
$$M_3(y):=\left(\begin{smallmatrix}
1 & 0 & 0 & \frac74 & 0 & 0 & \frac74 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac74 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & \frac{19}{16} & \frac{21}4 & 0 & 0 & 0 \\
0 & 0 & \frac74 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & \frac{19}{16} & 0 & 0 & \frac{21}4 & 0 & 0 \\
\frac74 & 0 & 0 & \frac{21}4 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac74 & 0 & 0 & \frac{19}{16} & 0 & 0 &\frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{5}6 & 0 & 0 & \frac{9}8 & 0 & 0 \\
0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{5}6 & \frac{9}8 & 0 & 0 & 0 \\
0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & \frac{5}6 & 0 & \frac{9}8 & 0 & 0 & \frac{9}8 & 0 & 0 \\
0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6 & 0 & \frac{9}8 & \frac{9}8 & 0 & 0 & 0 \\
0 & \frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{9}8 & 51 & 0 & 0 & 0 \\
0 & 0 & \frac{21}4 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{9}8 & 0 & 0 & 51 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6
\end{smallmatrix} \right)$$
is a tracial moment matrix of degree 3 in 2 variables and is positive definite.
But $$L_y(M_{\rm nc})=M_{\rm nc}(y)=-\frac5{16}<0.$$ Thus $y$
is not a truncated tracial moment sequence,
since otherwise $L_y(p)\geq 0$ for all trace-positive polynomials $p\in \mathbb R\axy_{\leq 6}$.
On the other hand, the (free) non-commutative moment problem is always
solvable for positive definite moment matrices \cite[Theorem 2.1]{McC}.
In our example this means
there are symmetric matrices $A,B\in\mathbb R^{15\times 15}$ and a vector
$v\in\mathbb R^{15}$ such that
$$y_ w=\langle w(A,B)v,v\rangle$$
for all $ w\in\axy_{\leq 3}$.
\end{example}
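For the reader's convenience, the value $L_y(M_{\rm nc})$ in the example above can be read off from $M_3(y)$: $y_{XY^4X}=(M_3(y))_{Y^2X,Y^2X}=\frac98$, $y_{YX^4Y}=(M_3(y))_{X^2Y,X^2Y}=\frac98$ and $y_{XY^2X}=(M_3(y))_{YX,YX}=\frac{19}{16}$, so that $L_y(M_{\rm nc})=\frac98+\frac98-3\cdot\frac{19}{16}+1=-\frac5{16}$.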
\begin{remark}
A trace-positive polynomial $f\in \mathbb R \ax$ of degree $2k$ lies in $\Theta^2_k$ if
and only if $L_y(f)\geq 0$ for all truncated tracial sequences $(y_w)_{\leq 2k}$ with
$M_k(y)\succeq0$.
This condition is obviously satisfied if all truncated tracial sequences $(y_w)_{\leq 2k}$ with
$M_k(y)\succeq0$ have a tracial representation.
Using this we can prove that trace-positive binary quartics, i.e.,
homogeneous polynomials of degree $4$ in $\mathbb R \langle X,Y\rangle$, lie in $\Theta_2^2$.
Equivalently, truncated tracial sequences $(y_w)$ indexed by words of degree $4$ with a
positive definite tracial
moment matrix have a tracial moment representation.
Furthermore,
trace-positive binary biquadratic polynomials, i.e., polynomials $f\in \mathbb R \axy$ with
$\deg_X f, \deg_Y f\leq 2$,
are cyclically equivalent to a sum of hermitian squares.
Example \ref{expsd} then shows that a polynomial $f$ can satisfy $L_y(f)\geq 0$ although there
are truncated tracial sequences $(y_w)_{\leq 2k}$ with $M_k(y)\succeq0$ and no
tracial representation.
Studying extremal points of the convex cone $$\{(y_w)_{\leq 2k}\mid M_k(y)\succeq 0\}$$
of truncated tracial sequences with positive semidefinite tracial moment matrices, we are able
to impose a concrete block structure on the matrices needed in a tracial moment representation.
These statements and concrete sum of hermitian squares and commutators representations of trace-positive polynomials
of low degree will be published elsewhere \cite{sb}.
\end{remark}
\section{Introduction}
Some tasks, due to their complexity, cannot be carried out by single individuals; they require the joint effort of several people composing a team. Teams provide a structure and a means of bringing together people with a suitable mix of individual properties (such as competences or personality). This can encourage the exchange of ideas, foster creativity, motivation and job satisfaction, and can actually extend individual capabilities. In turn, a suitable team can improve overall productivity and the quality of the performed tasks. However, teams sometimes work less effectively than initially expected for several reasons: a bad balance of their capacities, incorrect team dynamics, lack of communication, or difficult social situations. Team composition is thus a problem that has attracted the interest of research groups all over the world, including in the area of multiagent systems (MAS). MAS research has widely acknowledged competences as important for performing tasks of different nature \cite{Anagnostopoulos12onlineteam,Chen2015,Okimoto,Rangapuram2015}. However, the majority of the approaches represent the capabilities of agents in a Boolean way (i.e., an agent either has a required skill or not). This is a simplistic way to model an agent's set of capabilities, as it ignores any skill degree. In real life, capabilities are not binary, since every individual (e.g. human or software) shows a different performance for each competence. Additionally, the MAS literature has typically disregarded significant organizational psychology findings (with the exception of several recent, preliminary attempts like \cite{FarhangianPPS15} or \cite{alberola2016artificial}). Numerous studies in organizational psychology \cite{Arnold,Mount,White} underline the importance of personality traits or \emph{types} for team composition. Other studies have focused on how team members should differ or converge in their characteristics, such as experience, personality, level of skill, or gender, among others \cite{West}, in order to increase performance.
In this paper, we focus on scenarios where a complex task requires the collaboration of individuals within a team. More precisely, we consider a scenario where there are \emph{multiple instances of the same complex task}. The task has a task type and a set of competence requests with competence levels needed to solve the task. We have a pool of human agents characterized by gender, personality, and a set of competences with competence levels.
Our goal is to partition agents into teams so that within a task all competence requirements are covered (whenever possible) and team members work well together. That is, each resulting team is both \emph{proficient} (covers the required competences) and \emph{congenial} (balances gender and psychological traits). We refer to these teams as \emph{synergistic teams}. We define the \emph{synergistic value} of a team as its balance in terms of competence, personality and gender. Each synergistic team works on the very same task. This scenario is present in many real-life settings, for instance a classroom or a crowdsourcing task.
With this purpose, we design an algorithm that uses a greedy technique both to match competences with the required ones and at the same time to balance the psychological traits of teams' members.
This paper makes the following contributions. To start with, we formalise the synergistic team formation problem as the problem of partitioning a group of individuals into teams with limited size.
We provide an approximate local algorithm to solve the team composition problem. We empirically evaluate the algorithm using real data. Preliminary results show that our algorithm predicts team performance better than experts who know the students' social situation, background, and competences.
\textbf{Outline.} The remainder of this paper is structured as follows. Section~\ref{related} opens with an overview of the related work. Section~\ref{pers} gives the personality background for our model. Section~\ref{sec:model} describes the synergistic team composition problem and Section~\ref{sec:TeamForm} presents our algorithm to solve it. Then, Section~\ref{sec:results} presents the results of our algorithm in the context of team composition in the classroom. Finally, Section~\ref{sec:discuss} discusses our approach and future work.
\vspace{-2mm}
\section{Background} \label{related}
To the best of our knowledge, \cite{farhangian2015agent} is the only model that considers both personality and competences while composing teams. There, the influence of personality on different task allocation strategies (minimizing either undercompetence or overcompetence) is studied. Hence, this work is the most relevant to ours; however, there are substantial differences between our work and \cite{farhangian2015agent}. Firstly, the authors do not propose an algorithm to compose teams based on \emph{both} personality and competences. Secondly, gender balance is not considered in their setting. Finally, \cite{farhangian2015agent} does not provide an evaluation involving real data (only an agent-based simulation is presented).
The rest of the literature relevant to this article is divided into two categories as proposed in \cite{andrejczuk}: those that consider agent capacities (individual and social capabilities of agents) and those that deal with agent personality (individual behaviour models).
\textbf{Capacity.}
The capacity dimension has been exploited by numerous previous works \cite{Anagnostopoulos12onlineteam,Chalkiadakis2012,Chen2015,Crawford,Liemhetcharat2014,Okimoto,JAR2015,Rangapuram2015}. In contrast to our work, where the competences are graded, in the majority of works agents are assumed to have multiple binary skills (i.e., the agent either has a skill or not). For instance, \cite{Okimoto,Crawford} use agents' capabilities to compose one k-robust team for a single task. A team is $k$-robust if removing any $k$ members from the team does not affect the completion of the task. \cite{Anagnostopoulos12onlineteam} uses competences and communication cost in a context where tasks sequentially arrive and teams have to be composed to perform them. Each task requires a specific set of competences and the team composition algorithm is such that the workload per agent is fair across teams.
\textbf{Personality.}
In the team formation literature, the only two models that, to our knowledge, consider personality to compose teams are \cite{FarhangianPPS15} and \cite{alberola2016artificial}. \cite{alberola2016artificial} uses Belbin theory to obtain a human's predominant \emph{roles} (we discuss this method in Section \ref{pers}). Additionally, gender is not taken into account while composing heterogeneous teams, although we believe it may be important for team congeniality. Regarding \cite{FarhangianPPS15}, Farhangian et al. use the classical MBTI personality test (this method is discussed in Section \ref{pers}). They look for the best possible team built around a selected leader; in other words, the \emph{best} team for a particular task is composed. Gender balance is not considered in this setting either. Finally, although \cite{FarhangianPPS15}'s team composition considered real data, the resulting teams' performance was not validated in any real setting (Bayesian theory was used to predict the probability of success under various team composition conditions).
\vspace{-3mm}
\section{Personality} \label{pers}
In this section, we discuss the most prominent approaches to measure human personality and we explain the details of the method we have decided to examine.
Personality determines people's behaviour, cognition and emotion. Different personality theorists present their own definitions of personality and different ways to measure it based on their theoretical positions.
The most popular approach is to determine personality through a set of questions. Several simplified schemes have been developed over the years to profile human personality. The most popular ones are:
\begin{enumerate}
\vspace{-1.5mm}
\item the Five Factor Model (aka FFM or ``Big Five''), which uses five broad dimensions to describe human personality \cite{Costa};
\vspace{-1.5mm}
\item Belbin theory \cite{belbin}, which provides a theory on how different role types influence teamwork; and
\vspace{-1.5mm}
\item the Myers-Briggs Type Indicator (MBTI) scheme designed to indicate psychological preferences in how people perceive the world and make decisions \cite{Myers}.
\end{enumerate}
\vspace{-1.5mm}
According to \cite{Poropat}, FFM personality instruments fail to detect significant sex differences in personality structures. It is also argued that the Big Five dimensions are too broad and heterogeneous, and lack the specificity to make accurate predictions in many real-life settings \cite{Boyle,johnson2004genetic}.
Regarding Belbin theory, the results of previous studies considering the correlation between team composition and team performance are ambiguous. Even though some research shows weak support or does not show support for this theory at all \cite{batenburg2013belbin,van2008belbin,partington1999belbin}, it remains popular.
Finally, the MBTI measure consists of four dimensions on a binary scale (e.g. either the person is Extrovert or Introvert). Within this approach, every person falls into one of the sixteen possible combinations of the four letter codes, one letter representing one dimension. This approach is easy to interpret by non-psychologists, though reliance on dichotomous preference scores rather than continuous scores excessively restricts the level of statistical analysis \cite{devito}.
Having considered the arguments above, we have decided to explore a novel method: the Post-Jungian Personality Theory, which is a modified version of the Myers-Briggs Type Indicator (MBTI) \cite{Myers}, the ``Step II'' version of Quenk, Hammer and Majors \cite{Wilde2013}. The questionnaire to determine personality is short, contains only 20 quick questions (compared to the 93 MBTI questions). This is very convenient for both experts wanting to design teams and individuals doing the test since completing the test takes just a few minutes (for details of the questionnaire, see \cite[p.21]{Wilde2013}). Douglass J. Wilde claims that it covers the same psychological territory as MBTI \cite{Wilde2009}. In contrast to the MBTI measure, which consists of four binary dimensions, the Post-Jungian Personality Theory uses the \emph{numerical} data collected using the questionnaire \cite{Wilde2011}. The results of this method seem promising, since within a decade this novel approach has tripled the fraction of Stanford teams awarded national prizes by the Lincoln Foundation \cite{Wilde2009}.
The test is based on the pioneering psychiatrist Carl Gustav Jung's cognitive-mode personality model \cite{PT}. It has two sets of variable pairs called psychological functions:
\vspace{-1.5mm}
\begin{itemize}
\item {\bf Sensing / Intuition (SN)} --- describes the way of approaching problems
\vspace{-1.5mm}
\item {\bf Thinking / Feeling (TF)} --- describes the way of making decisions
\end{itemize}
\vspace{-1.5mm}
and two sets of psychological attitudes:
\vspace{-1.5mm}
\begin{itemize}
\item {\bf Perception / Judgment (PJ)} --- describes the way of living
\vspace{-1.5mm}
\item {\bf Extroversion / Introversion (EI)} --- describes the way of interacting with the world
\end{itemize}
\vspace{-1.5mm}
For instance, for the Feeling-Thinking (TF) dimension, a value between -1 and 0 means that a person is of the feeling type, and a value between 0 and 1 means she is of the thinking type. Psychological functions and psychological attitudes compose together a personality. Every dimension of a personality (EI, SN, TF, PJ) is tested by five multiple choice true/false questions.
\vspace{-2mm}
\section{Team Composition Model}\label{sec:model}
In this section we introduce and formalise our team composition problem. First, section \ref{ssec:basic} introduces the basic notions of agent, personality, competence, and team, upon which we formalise our problem. Next, we formalise the notion of task assignment for a single team and a single task, and we characterise different types of assignments. Sections \ref{ssec:proficiency} and \ref{ssec:congeniality} show how to evaluate the proficiency and congeniality degrees of a team. Based on these measures, in section \ref{ssec:synergisticProblem} we formalise the \emph{synergistic team composition problem}.
\subsection{Basic definitions}
\label{ssec:basic}
In our model, we consider that each agent is a human. We characterise each agent by the following properties:
\begin{itemize}
\vspace{-1.5mm}
\item A unique \emph{identifier} that distinguishes an agent from others (e.g. ID card number, passport number, employee ID, or student ID).
\vspace{-1.5mm}
\item \emph{Gender.} Human agents are either a man or a woman.
\item A \emph{personality} represented by four personality traits. Each personality trait is a number between -1 and 1.
\item A \emph{set of competences}. A competence integrates knowledge, skills, personal values, and attitudes that enable an agent to act correctly in a job, task or situation \cite{roe2002competences}. Each agent is assumed to possess a set of competences with associated competence levels. This set may vary over time as an agent evolves.
\end{itemize}
\vspace{-1.5mm}
Next, we formalise the above-introduced concepts.
\vspace{-1.5mm}
\begin{mydef}
A \emph{personality profile} is a vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$, where each $sn, \mathit{tf}, ei, pj$ represents one personality trait.
\end{mydef}
We denote by $C = \{c_1, \dots , c_m\}$ the whole set of competences, where each element $c_i \in C$ stands for a competence.
\begin{mydef}
A \emph{human agent} is represented as a tuple $\langle id, g, \emph{{\bf p}}, l \rangle$ such that:
\begin{itemize}
\item $id$ is the agent's identifier;
\item $g \in \{man, {\mathit woman}\}$ stands for their gender;
\item $\emph{\bf{p}}$ is a personality profile vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$;
\item $l: C \to{[0,1]}$ is a function that assigns the probability that the agent will successfully show competence $c$. We will refer to $l(c)$ as the \emph{competence level} of the agent for competence $c$. We assume that when an agent does not have a competence (or we do not know about it), the level of this competence is zero.
\end{itemize}
\end{mydef}
Henceforth, we will denote the set of agents by $A =\{a_1,\ldots, \linebreak a_n\}$. Moreover, we will use super-indexes to refer to agents' components. For instance, given an agent $a \in A$, $id^{a}$ will refer to the $id$ component of agent $a$. We will employ a matrix $L \in [0,1]^{n \times m}$ to represent the competence levels of each agent for each competence.
\vspace{-2mm}
\begin{mydef}[Team] A \emph{team} is any non-empty subset of $A$ with at least two agents. We denote by $\cal{K_A}$ $ = (2^A \setminus \{\emptyset\})\setminus \{\{a_i\}| a_i \in A\}$ the set of all possible teams in $A$.
\end{mydef}
\vspace{-2mm}
We assume that agents in teams coordinate their activities for mutual benefit.
\subsection{The task assignment problem}
\label{ssec:assignment}
In this section we focus on how to assign a team to a task.
A task type determines the competence levels required for the task as well as the importance of each competence with respect to the others. For instance, some tasks may require a high level of creativity because they have never been performed before (so there are no qualified agents in this matter). Others may require a highly skilled team with a high degree of coordination and teamwork (as is the case for rescue teams). Therefore, we define a task type as:
\begin{mydef}
A task type $\tau$ is defined as a tuple \\ $\langle \lambda, \mu, {\{(c_{i},l_{i}, w_{i})\}_{i \in I_{\tau}}} \rangle$ such that:
\begin{itemize}
\item $\lambda \in [0,1]$ importance given to proficiency;
\item $\mu \in [-1,1]$ importance given to congeniality;
\item $c_{i} \in C$ is a competence required to perform the task;
\item $l_{i} \in [0,1]$ is the required competence level for competence $c_i$;
\vspace{-1.5mm}
\item $w_{i} \in [0,1]$ is the importance of competence $c_i$ for the success of task of type $\tau$; and
\vspace{-1.5mm}
\item $\sum_{i \in I_{\tau}} w_i = 1$.
\end{itemize}
\end{mydef}
We will discuss the meaning of $\lambda$ and $\mu$ further ahead when defining synergistic team composition (see subsection \ref{ssec:synergisticProblem}).
Then, we define a task as:
\vspace{-1.5mm}
\begin{mydef}A \emph{task} $t$ is a tuple $\langle \tau, m \rangle$ such that $\tau$ is a task type and $m$ is the required number of agents, where $m\geq 2$.
\end{mydef}
Henceforth, we denote by $T$ the set of tasks and by $\mathcal{T}$ the set of task types. Moreover, we will note as $C_{\tau} =\{c_{i} | i \in I_{\tau}\}$ the set of competences required by task type $\tau$.
Given a team and a task type, we must consider how to assign competences to team members (agents). Our first, weak notion of task assignment only requires that every competence in the task type is assigned to some agent(s) in the team.
\begin{mydef}Given a task type $\tau$ and a team $K \in {\cal K}_A$, an assignment is a function $\eta: K \to 2^{C_{\tau}}$ satisfying that
$C_{\tau} \subseteq \bigcup_{a \in K} \eta(a)$.
\end{mydef}
\subsection{Evaluating team proficiency} \label{ssec:prof}
\label{ssec:proficiency}
Given a task assignment for a team, next we will measure the \emph{degree of competence} of the team as a whole. This measure will combine the degree of undercompetence and the degree of overcompetence, which we formally define below. Before that, we must formally identify the agents assigned to each competence as follows.
\vspace{-1.5mm}
\begin{mydef}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, the set $\delta(c_{i}) = \{a \in K | c_{i} \in \eta(a)\}$ stands for the agents assigned to cover competence $c_{i}$.
\end{mydef}
\vspace{-1.5mm}
Now we are ready to define the degrees of undercompetence and overcompetence.
\vspace{-1.5mm}
\begin{mydef}[Degree of undercompetence]
\vspace{-1.6mm}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of undercompetence of the team for the task as:
\vspace{-2.5mm}
\begin{equation*}
u(\eta)=
\sum_{i \in I_{\tau}} w_{i} \cdot \frac{\sum_{a \in \delta(c_{i})} |\min(l^{a}(c_{i}) - l_{i},0)|}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} < 0\}|}
\end{equation*}
\end{mydef}
\vspace{-2.5mm}
\begin{mydef}[Degree of overcompetence]
\vspace{-1.6mm}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of overcompetence of the team for the task as:
\vspace{-2.5mm}
\begin{equation*}
o(\eta)=
\sum_{i \in I_{\tau}} w_i \cdot \frac{\sum_{a \in \delta(c_{i})} \max(l^{a}(c_{i}) - l_{i},0)}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} > 0\}|}
\end{equation*}
\end{mydef}
\vspace{-1.5mm}
Given a task assignment for a team, we can calculate its competence degree to perform the task by combining its overcompetence and undercompetence as follows.
\vspace{-1.5mm}
\begin{mydef}Given a task type $\tau$, a team $K$ and an assignment $\eta$, the competence degree of the team to perform the task is defined as:
\begin{equation}
\label{eq:uprof}
u_{\mathit{prof}}(\eta) = 1-(\upsilon \cdot u(\eta)+(1-\upsilon) \cdot o(\eta))
\end{equation}
where $\upsilon \in [0,1]$ is the penalty given to the undercompetence of team $K$.
\end{mydef}
\vspace{-1.5mm}
Notice that the larger the value of $\upsilon$, the more heavily the undercompetence of team $K$ is penalised, while the lower the value of $\upsilon$, the more weight is given to its overcompetence. The intuition here is that we may want to penalise undercompetence more, as some tasks strictly require teams to be at least as competent as specified in the task type.
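A minimal Python sketch of this measure, building on the agent encoding given earlier and using $\upsilon=0.6$ as an illustrative default, is the following.
\begin{verbatim}
# Degrees of under- and overcompetence u(eta), o(eta) and the combined
# competence degree u_prof of a team for a task assignment.
# `task` is a list of (competence c_i, required level l_i, weight w_i);
# `eta` maps each agent of the team to the set of competences assigned to it.
def proficiency(task, eta, upsilon=0.6):
    u = o = 0.0
    for c_i, l_i, w_i in task:
        assigned = [a for a, cs in eta.items() if c_i in cs]
        under = [l_i - a.level(c_i) for a in assigned if a.level(c_i) < l_i]
        over = [a.level(c_i) - l_i for a in assigned if a.level(c_i) > l_i]
        if under:            # average shortfall of the under-competent agents
            u += w_i * sum(under) / len(under)
        if over:             # average excess of the over-competent agents
            o += w_i * sum(over) / len(over)
    return 1.0 - (upsilon * u + (1.0 - upsilon) * o)
\end{verbatim}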
\vspace{-1.5mm}
\begin{proposition}
For any $\eta$, $u(\eta) + o(\eta) \in [0,1]$.
\label{prop1}
\end{proposition}
\begin{proof}
Given that (1) $l^{a}(c_{i}) \in [0,1]$ and $l_{i} \in [0,1]$;
(2) If $\min(l^{a}(c_{i}) - l_{i},0)<0$ then $\max(l^{a}(c_{i}) -l_{i},0) = 0$; and
(3) If $\max(l^{a}(c_{i})-l_{i},0) > 0$ then $\min(l^{a}(c_{i}) - l_{i},0)=0$. Thus, from (1--3)
we have
$|\min(l^{a}(c_{i}) - l_{i},0)|$ + $\max(l^{a}(c_{i})-l_{i},0) \in [0,1]$.
Let $n=|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} > 0\}|$, then obviously it holds that
$\frac{n \cdot (|\min(l^{a}(c_{i}) - l_{i},0)| + \max(l^{a}(c_{i})-l_{i},0))}{n} \in [0,1]$ and as $|\delta(c_i)| \leq n$ then
$\frac{\sum_{a \in \delta(c_{i})}(|\min(l^{a}(c_{i}) - l_{i},0)| + \max(l^{a}(c_{i})-l_{i},0))}{n} \in [0,1]$ holds; and
since $\sum_{i \in I_{\tau}} w_i = 1$ then \\
$\sum_{i \in I_{\tau}} w_i \cdot \frac{\sum_{a \in \delta(c_{i})}(|\min(l^{a}(c_{i}) - l_{i},0)| + \max(l^{a}(c_{i})-l_{i},0))}{n} \in [0,1]$;
Finally, distributing, this equation is equivalent to: \\
$\sum_{i \in I_{\tau}} w_i \frac{\sum_{a \in \delta(c_{i})}|\min(l^{a}(c_{i}) - l_{i},0)|}{n} \\
+ \sum_{i \in I_{\tau}} w_i \frac{\sum_{a \in \delta(c_{i})}\max(l^{a}(c_{i})-l_{i},0)}{n} \in [0,1]$, which in turn is equivalent to $ u(\eta) + o(\eta) \in [0,1]$.
\end{proof}
\vspace{-1.5mm}
Function $u_{\mathit{prof}}$ is used to measure how proficient a team is for a given task assignment. However, possessing the competences required to perform a task does not guarantee that the team will succeed at performing it. Therefore, in the next subsection we present an evaluation function to measure \emph{congeniality} within teams. Unlike our measure for proficiency, which is based on a particular task assignment, our congeniality measure relies solely on the personalities and genders of the members of a team.
\subsection{Evaluating team congeniality} \label{ssec:con}
\label{ssec:congeniality}
Inspired by the experiments of Douglass J. Wilde \cite{Wilde2009} we will define the team utility function for congeniality $u_{con}(K)$, such that:
\begin{itemize}
\vspace{-1.5mm}
\item it values more teams whose SN and TF personality dimensions are as diverse as possible;
\vspace{-1.5mm}
\item it prefers teams with at least one agent with positive EI and TF dimensions and negative PJ dimension, namely an extrovert, thinking and judging agent (called ETJ personality);
\vspace{-1.5mm}
\item it values more teams with at least one introvert agent;
\vspace{-2.5mm}
\item it values gender balance in a team.
\end{itemize}
Therefore, the higher the value of function $u_{con}(K)$, the more diverse the team is.
Formally, this team utility function is defined as follows:
\vspace{-1mm}
\begin{equation}
\label{eq:ucon}
\begin{aligned}
u_{con}(K) = & \sigma_{SN}(K) \cdot \sigma_{TF}(K) + \max_{a_i \in K}{((0,\alpha, \alpha, \alpha) \cdot {\bf p_i}, 0)} \\
& + {\max_{a_i \in K}{((0,0,-\beta,0) \cdot {\bf p_i}, 0)}} + \gamma \cdot \sin{(\pi \cdot g(K))}
\end{aligned}
\vspace{-2.5mm}
\end{equation}
where the different parameters are explained next.
\begin{itemize}
\vspace{-1.5mm}
\item $\sigma_{SN}(K)$ and $\sigma_{TF}(K)$: These variances are computed over the SN and TF personality dimensions of the members of team $K$. Since we want to maximise $u_{con}$, we want these variances to be as large as possible. The larger the values of $\sigma_{SN}$ and $\sigma_{TF}$ the larger their product will be, and hence the larger team diversity too.
\vspace{-4mm}
\item $\alpha$: The maximum variance of any distribution over an interval $[a,b]$ is attained by a distribution with the elements evenly placed at the extremes of the interval, and it always satisfies $\sigma^2 \le ((b-a)/2)^2$. In our case, with $b=1$ and $a=-1$, we have $\sigma \le 1$. Then, to make the four factors equally important, and given that the maximum value of ${\bf p_i}$ (the personality profile vector of agent $a_i$) would be $(1, 1, 1, 1)$, a maximum value for $\alpha$ follows from $3 \alpha = ((1-(-1))/2)^2 = 1$ (since the first factor is $\sigma_{SN} \cdot \sigma_{TF}$), so $\alpha \le 0.33(3)$. For values situated in the middle of the interval the variance satisfies $\sigma^2 \le \frac{(b-a)^2}{12}$, hence a reasonable value is $\alpha = \frac{\sqrt{(1-(-1))^2/12}}{3} \approx 0.19$.
\vspace{-1.5mm}
\item $\beta$: A similar reasoning shows that $\beta \le 1$.
\vspace{-1.5mm}
\item $\gamma$ is a parameter to weigh the importance of gender balance, and $g(K) = \frac{w(K)}{w(K) + m(K)}$, where $w(K)$ and $m(K)$ denote the number of women and men in team $K$, respectively. Notice that for a perfectly gender-balanced team, with $w(K) = m(K)$, we have
$\sin{(\pi \cdot g(K))} = 1$. The higher the value of $\gamma$, the more important it is that the team is gender balanced. By a reasoning similar to that for $\alpha$ and $\beta$, we obtain $\gamma \leq 1$. In order to make this factor less important than the others in the equation we experimentally assessed that $\gamma = 0.1$ is a good compromise.
\end{itemize}
\vspace{-1.5mm}
In summary, we will use the utility function $u_{con}$ with $\alpha = \frac{\sigma_{SN}(K) \cdot \sigma_{TF}(K)}{3}$, $\beta = 3 \cdot \alpha$ and $\gamma = 0.1$.
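In Python, and interpreting $\sigma_{SN}$ and $\sigma_{TF}$ as population standard deviations over the personality vectors $(sn,\mathit{tf},ei,pj)$, the congeniality utility can be sketched as follows.
\begin{verbatim}
# Congeniality utility u_con(K) with alpha = sigma_SN*sigma_TF/3,
# beta = 3*alpha and gamma = 0.1, as summarised above.
import math
from statistics import pstdev

def u_con(team, gamma=0.1):
    sn = pstdev(a.p[0] for a in team)
    tf = pstdev(a.p[1] for a in team)
    alpha, beta = sn * tf / 3.0, sn * tf       # beta = 3 * alpha
    # at least one agent with high TF, EI and PJ traits, weighted by alpha
    etj = max(max(alpha * (a.p[1] + a.p[2] + a.p[3]) for a in team), 0.0)
    # at least one introvert agent (negative EI), weighted by beta
    intro = max(max(-beta * a.p[2] for a in team), 0.0)
    women = sum(1 for a in team if a.gender == 'woman')
    g = women / len(team)                      # g(K) = w(K) / (w(K) + m(K))
    return sn * tf + etj + intro + gamma * math.sin(math.pi * g)
\end{verbatim}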
\subsection{Evaluating synergistic teams}
Depending on the task type, congeniality and proficiency should be given different importance. For instance, creative tasks require a high level of communication and exchange of ideas, and hence teams require a certain level of congeniality, while repetitive tasks require good proficiency and less communication. The importance of proficiency ($\lambda$) and congeniality ($\mu$) is therefore a fundamental aspect of the task type. Now, given a team, we can combine its competence value (equation \ref{eq:uprof}) with its congeniality value (equation \ref{eq:ucon}) to measure its \emph{synergistic value}.
\vspace{-1.5mm}
\begin{mydef}
Given a team $K$, a task type $\tau = \linebreak \langle \lambda, \mu, {\{(c_{i},l_{i}, w_{i})\}_{i \in I_{\tau}}} \rangle$ and a task assignment $\eta: K \rightarrow 2^{C_{\tau}}$, the synergistic value of team $K$ is defined as:
\vspace{-1.5mm}
\begin{equation}
s(K,\eta) = \lambda \cdot u_{\mathit{prof}}(\eta) + \mu \cdot u_{con}(K)
\end{equation}
where $\lambda \in [0,1]$ is the grade to which the proficiency of team $K$ is important, and $\mu \in [-1,1]$ is the grade to which the task requires diverse personalities.
\end{mydef}
\begin{figure}
\caption{Values of congeniality and proficiency with respect to the task type.}
\begin{tikzpicture}
\begin{axis}[
axis line style={->},
x label style={at={(axis description cs:0.5,-0.1)},anchor=north},
y label style={at={(axis description cs:-0.1,.5)},anchor=south},
xlabel=Proficiency ($\lambda$),
ylabel=Congeniality ($\mu$),
xmin=0,
xmax=1,
ymin=-1,
ymax=1,
unit vector ratio=6 1,
]
\node[black] at (axis cs:0.25,0.5) {
\begin{tabular}{c}
Creative \\ General tasks
\end{tabular}};
\node[black] at (axis cs:0.25,-0.5) {\begin{tabular}{c}
Structured \\ General tasks
\end{tabular}};
\node[black] at (axis cs:0.75,0.5) {\begin{tabular}{c}
Creative \\ Specialized tasks
\end{tabular}};
\node[black] at (axis cs:0.75,-0.5) {\begin{tabular}{c}
Structured \\ Specialized tasks
\end{tabular}};
\draw [black, thick] (axis cs:0,-1) rectangle (axis cs:0.5,1);
\draw (0,0) -- (1,0);
\end{axis}
\end{tikzpicture}
\label{tbl:parameters}
\vspace{-6mm}
\end{figure}
Figure \ref{tbl:parameters} shows the relation between the parameters $\lambda$ and $\mu$.
In general, the higher the $\lambda$, the more importance is given to the proficiency of a team, and the higher the $\mu$, the more important personality diversity is. Notice that $\mu$ can be lower than zero. With a negative $\mu$, the congeniality value must be as low as possible in order to maximise $s(K,\eta)$, and so team homogeneity is preferred. This situation may happen when performing tasks in unconventional performance environments that have serious consequences associated with failure. In order to quickly resolve issues, a team needs to be proficient and have team-mates who understand one another with minimum communication cost (which is associated with the homogeneity of a team).
\subsection{The synergistic team composition problem}
\label{ssec:synergisticProblem}
In what follows we consider that there are multiple instances of the same task to perform. Given a set of agents $A$, our goal is to split them into teams so that each team, and the whole partition of agents into teams, is balanced in terms of competences, personality and gender.
We shall refer to these balanced teams as \emph{synergistic teams}, meaning that they are both congenial and proficient.
Therefore, we can regard our team composition problem as a particular type of set partition problem. We will refer to any partition of $A$ as a team partition. However, we are interested in a particular type of team partitions, namely those where teams are constrained by size $m$ as follows.
\begin{mydef}
Given a set of agents $A$, we say that a team partition $P_m$ of $A$ is constrained by size $m$ iff: (i) for every team $K_i \in P_m$, $K_i \in {\cal K}_A$ and $\max(m-1, 2) \leq |K_i| \leq m+1$ hold; and (ii) for every pair of teams $K_i, K_j \in P_m$, $||K_i| - |K_j|| \le 1$.
\end{mydef}
As $|A| / m$ is not necessarily a natural number, we may need to allow for some flexibility in team size within a partition. This is why we introduced above the condition $\max(m-1, 2) \leq |K_i| \leq m+1$. In practical terms, in a partition we may have teams differing by one agent. We note by ${\cal P}_m(A)$ the set of all team partitions of $A$ constrained by size $m$. Henceforth, we will focus on team partitions constrained by some size. Since our goal is to find the most competence-balanced and psychologically balanced team partition, we need a way to measure the synergistic value of a team partition, which we define as follows:
\begin{mydef}
Given a task $t = \langle \tau, m \rangle$, a team partition $P_m$ and an assignment $\eta_i$ for each team $K_i \in P_m$, the synergistic value of $P_m$ is computed by:
\vspace{-1.5mm}
\begin{equation}
u(P_m,\bm{\eta}) = \prod_{i =1}^{|P_m|} s(K_i,\eta_i)
\end{equation}
\vspace{-1.5mm}
where $\bm{\eta}$ stands for the vector of task assignments $\eta_1,\ldots, \linebreak \eta_{|P_m|}$.
\end{mydef}
Notice that the use of a Bernoulli-Nash function over the synergistic values of teams will favour team partitions whose synergistic values are balanced.
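Combining the previous sketches, the synergistic value of a team and the Bernoulli--Nash value of a team partition read as follows.
\begin{verbatim}
# s(K, eta) and the partition value u(P_m, eta) as a Bernoulli-Nash product.
from math import prod

def synergistic_value(team, task, eta, lam, mu, upsilon=0.6):
    return lam * proficiency(task, eta, upsilon) + mu * u_con(team)

def partition_value(partition, task, assignments, lam, mu, upsilon=0.6):
    # assignments[i] is the task assignment eta_i of team partition[i]
    return prod(synergistic_value(K, task, eta, lam, mu, upsilon)
                for K, eta in zip(partition, assignments))
\end{verbatim}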
Now we are ready to cast the synergistic team composition problem as the following optimisation problem:
\begin{mydef}
Given a task $t = \langle \tau, m \rangle$ and a set of agents $A$, the \textbf{synergistic team formation problem (STFP)} is the problem of finding a team partition constrained by size $m$, together with a competence assignment for each of its teams, whose synergistic value is maximal. Formally, the STFP is the problem of finding the partition $P_m \in \mathcal{P}_m(A)$ and the task assignments $\bm{\eta}$ for the teams in $P_m$ that maximise $u(P_m,\bm{\eta})$.
\end{mydef}
\vspace{-2mm}
\section{Solving STFP}\label{sec:TeamForm}
In this section we detail an algorithm, called \emph{SynTeam}, which solves the synergistic team formation problem described above. We start by describing how to split agents into a partition (see subsection \ref{ssec:dist}). Next, we move on to the problem of assigning the competences in a task to team members (see subsection \ref{ssec:asg}) so that the synergistic utility is maximal. Finally, we explain \emph{SynTeam}, a greedy algorithm that quickly finds a first, local solution and subsequently improves it, aiming to reach a global optimum.
\subsection{How do we split agents?} \label{ssec:dist}
We note by $n = |A|$ the number of agents in $A$, by $m \in \mathbb{N}$ the target number of agents in each team, and by $b$ the minimum total number of teams, $b = \left\lfloor n/m\right\rfloor$. We define the quantity distribution of agents in teams of a partition, noted $T: \mathbb{N} \times \mathbb{N} \to \mathbb{N} \times \mathbb{N} \cup (\mathbb{N} \times \mathbb{N})^2 $ as:
\vspace{-2mm}
\begin{equation}
\begin{multlined}
T(n,m) = \\
\begin{cases}
\{(b, m)\} & \text{if } n \geq m \textit{ and } n \bmod m = 0
\\
\{(n \bmod m,m + 1), \\(b - (n \bmod m),m)\}
& \text{if } n \geq m \textit{ and } n \bmod m \le b
\\
\{(b, m),(1, n \bmod m)\} & \text{if } n \geq m \textit{ and } n \bmod m > b
\\
\{(0,m)\} & \text{otherwise}
\end{cases}
\end{multlined}
\end{equation}
Note that depending on the cardinality of $A$ and the desired team size, the number of agents in each team may vary by one individual (for instance if there are $n=7$ agents in $A$ and we want to compose duets ($m=2$), we split agents into two duets and one triplet).
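A direct Python transcription of $T(n,m)$, returning the list of (number of teams, team size) pairs, is given below.
\begin{verbatim}
def T(n, m):
    """Quantity distribution of n agents into teams of target size m."""
    b = n // m                        # minimum number of teams
    if n < m:
        return [(0, m)]
    r = n % m
    if r == 0:
        return [(b, m)]
    if r <= b:
        # r teams receive one extra agent, the other b - r teams keep size m
        return [(r, m + 1), (b - r, m)]
    return [(b, m), (1, r)]

# T(7, 2) -> [(1, 3), (2, 2)]: one triplet and two duets, as in the example.
\end{verbatim}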
\subsection{Solving an Assignment} \label{ssec:asg}
There are different methods to build an assignment. We have decided to solve our assignment problem using the minimum cost flow model \cite{ahuja1993network}. This is one of the most fundamental problems in network flow theory and it can be solved efficiently. For instance, in \cite{orlin1993faster} it was proven that the minimum cost flow problem can be solved in $O(m \log n \,(m + n \log n))$ time for a network with $n$ nodes and $m$ arcs.
Our problem is as follows:
there are a number of agents in team $K$ and a number of competence requests in task $t$. Any agent can be assigned to any competence, incurring a cost that depends on the agent's competence level for the assigned competence. We want each competence assigned to at least one agent and each agent assigned to at least one competence, in such a way that the total cost (accounting for both undercompetence and overcompetence) of the assignment is minimal over all such assignments.
Formally, let $G = (N, E)$ be a directed network defined by a set $N$ of $n$ nodes and a set $E$ of $e$ directed arcs. There are four types of nodes: (1) one source node; (2) $|K|$ nodes that represent the agents in team $K$; (3) $|C_{\tau}|$ nodes that represent the competence requests of task type $\tau$; and (4) one sink node. Each arc $(i, j) \in E$ has an associated cost $p_{ij} \in \mathbb{R}^+$ that denotes the cost per unit flow on that arc. We also associate with each arc $(i, j) \in E$ a capacity $u_{ij} \in \mathbb{R}^+$ that denotes the maximum amount that can flow on the arc. In particular, we have three kinds of arcs: (1) Supply arcs. These connect the source to the agent nodes. Each of these arcs has zero cost and a positive capacity $u_{ij}$ which defines how many competences, at most, can be assigned to each agent. (2) Transportation arcs. These are used to ship supplies from agent nodes to competence nodes. Every transportation arc $(i, j) \in E$ is associated with a shipment cost $p_{ij}$ that is equal to:
\begin{equation}
p_{ij} =
\begin{cases}
(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot (1-\upsilon) \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} > 0\\
-(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot \upsilon \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} < 0
\end{cases}
\label{costeq}
\end{equation}
\noindent
where $\upsilon \in [0,1]$ is the penalty given to the undercompetence of team $K$ (see subsection \ref{ssec:prof} for the definition).
(3) Demand arcs. These arcs connect the competence requests nodes to the sink node. These arcs have zero costs and positive capacities $u_{ij}$ which equal the demand for each competence.
Thus, a network is denoted by $(G, p, u, b)$. We associate with each node $i \in N$ an integer $b(i)$ representing its supply: if $b(i) > 0$ then $i$ is a source node, and if $b(i) < 0$ then $i$ is a sink node. In order to solve a task assignment problem, we use the implementation of \cite{goldberg1990finding} provided in or-tools.\footnote{\url{https://github.com/google/or-tools/blob/master/src/graph/min_cost_flow.h}}
\vspace{-2mm}
\begin{figure}
\includegraphics[max size={\textwidth}{10.35cm}]{attach/asg.png}
\caption{An example of an assignment graph $G(N,E)$}\label{asg}
\vspace{-6mm}
\end{figure}
\paragraph{Example} Let us consider a team of three agents $K = \{a_1, a_2, a_3\}$:
\begin{itemize}
\vspace{-1.5mm}
\item $a_1 = \langle id_1, `woman', p_1, [l(c_1) = 0.9, l(c_2) = 0.5]\rangle$
\vspace{-1.5mm}
\item $a_2 = \langle id_2, `man', p_2, [l(c_2) = 0.2, l(c_3) = 0.8]\rangle$
\vspace{-1.5mm}
\item $a_3 = \langle id_3, `man', p_3, [l(c_2) = 0.4, l(c_4) = 0.6]\rangle$
\end{itemize}
and task type $\tau$ containing four competence requests \\ $\{(c_{1},0.8, 0.25), (c_{2}, 0.6, 0.25), (c_{3},0.6, 0.25),(c_{4},0.6, 0.25)\}$. \\ The penalty given to undercompetence is equal to $\upsilon=0.6$.
Our goal is to assign agents to competence requests so that: (1) every agent is responsible for at least one competence, (2) every competence is covered by at least one agent, and (3) the overall ``cost'' is minimal.
As shown in figure \ref{asg}, we build a graph of $n = 9$ nodes, that is: one source node ($N_0$), three agent nodes ($N_1 - N_3$), four competence nodes ($N_4 - N_7$) and a sink node ($N_8$). Next, we add edges: (1) between the source node $N_0$ and all agent nodes $N_1 - N_3$, with cost $p_{si} = 0$ and capacity $u_{si} = 2$ for all $i$, since the maximum number of competences assigned to one agent cannot be larger than two if we want to make sure that all agents are assigned to at least one competence; (2) between agent nodes $N_1 - N_3$ and competence nodes $N_4 - N_7$, where each capacity $u_{ij} = 1$ and costs are calculated according to equation \ref{costeq}. For instance, the cost between $N_1$ and $N_4$ is equal to $(0.9 - 0.8) \cdot (1-0.6) \cdot 0.25 = 0.01$. We multiply all costs by $1000$ to meet the requirements of the solver (edge costs need to be integers), hence the final cost is $p_{14}=10$; (3) edges between competence nodes $N_4 - N_7$ and the sink node $N_8$, with costs $p_{jw} = 0$ and capacities $u_{jw} = 1$, to impose that each competence is assigned.
Once the graph is built, we pass it to the solver to get the assignment, and we get $c_1$ and $c_2$ assigned to $a_1$, $c_3$ assigned to $a_2$ and $c_4$ assigned to $a_3$.
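The worked example can be reproduced with the sketch below. We assume the \texttt{SimpleMinCostFlow} solver of or-tools; the module path shown matches older releases, while newer releases expose the same solver as \texttt{ortools.graph.python.min\_cost\_flow}.
\begin{verbatim}
from ortools.graph import pywrapgraph   # older or-tools; see note above

upsilon = 0.6
levels = [              # l^{a_i}(c_j) for agents a_1..a_3, competences c_1..c_4
    [0.9, 0.5, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.4, 0.0, 0.6],
]
required = [(0.8, 0.25), (0.6, 0.25), (0.6, 0.25), (0.6, 0.25)]  # (l_j, w_j)

def arc_cost(l_a, l_j, w_j):
    diff = l_a - l_j
    cost = diff * (1 - upsilon) * w_j if diff > 0 else -diff * upsilon * w_j
    return int(round(1000 * cost))      # the solver requires integer costs

mcf = pywrapgraph.SimpleMinCostFlow()
source, sink = 0, 8                     # agents: nodes 1-3, competences: 4-7
for i in range(3):
    mcf.AddArcWithCapacityAndUnitCost(source, 1 + i, 2, 0)       # supply arcs
    for j in range(4):                                           # transportation
        mcf.AddArcWithCapacityAndUnitCost(1 + i, 4 + j, 1,
                                          arc_cost(levels[i][j], *required[j]))
for j in range(4):
    mcf.AddArcWithCapacityAndUnitCost(4 + j, sink, 1, 0)         # demand arcs
mcf.SetNodeSupply(source, 4)
mcf.SetNodeSupply(sink, -4)

if mcf.Solve() == mcf.OPTIMAL:
    for arc in range(mcf.NumArcs()):
        if mcf.Flow(arc) > 0 and 1 <= mcf.Tail(arc) <= 3:
            print(f"a_{mcf.Tail(arc)} covers c_{mcf.Head(arc) - 3}")
# -> a_1 covers c_1 and c_2, a_2 covers c_3, a_3 covers c_4, as described above.
\end{verbatim}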
\subsection{SynTeam algorithm} \label{ssec:SynTeam}
Algorithm \ref{alg:teamDistribution} shows the SynTeam pseudocode. It is divided into two parts:
{\bf 1. \textsl{Find a first team partition}}. This part of the algorithm simply builds a partition by randomly assigning agents to teams of the required sizes. It goes as follows. Given a list of agents $A$, we start by shuffling the list so that the order of agents in the list is random (line~1). Next, we determine the quantitative distribution of individuals among teams of size $m$ using function $T(|A|,m)$ as defined in section \ref{ssec:dist} (line~2). We start from the top of the shuffled list of agents (line~3). For each number of teams (line~4), we define a temporary set $team$ to store the current team (line~5). We add to $team$ the next $size$ agents from the shuffled list of agents (line~7). We add the newly created team to the team partition $P_{\mathit{best}}$ that we intend to build (line~10). When reaching line~14, $P_{\mathit{best}}$ contains a first partition of the agents into disjoint teams.
{\bf 2. \textsl{Improve the current best team partition}}. The second part of the algorithm consists in improving the current best team partition. The idea is to obtain a better team partition by performing crossovers of two randomly selected teams to yield two better teams. In this part, we took inspiration from simulated annealing methods, where the algorithm might accept swaps that actually decrease the solution quality with a certain probability. The probability of accepting worse solutions slowly decreases as the algorithm explores the solution space (as the number of iterations increases). The annealing schedule is defined by the $\mathit{cooling\_rate}$ parameter. We have modified this method to store the partition with the highest synergistic evaluation found so far.
In detail, the second part works as follows. First, we select two random teams, $K_1$ and $K_2$, from the current team partition (line~15). Then we compute all team partitions of size $m$ with the agents in $K_1 \cup K_2$ (line~19), and we select the best candidate team partition, named $P_{\mathit{bestCandidate}}$ (lines~19~to~26). If the best candidate synergistic utility is larger than the utility contribution of $K_1$ and $K_2$ to the current best partition $P_{\mathit{best}}$ (line~27), then we replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~28). If the best candidate team partition utility is lower, then we check whether the probability of accepting a worse solution is higher than a value sampled uniformly from $[0,1]$ (line~29). If so, we replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~30). In every iteration we decrease $heat$ by the cooling rate, and this part of the algorithm continues until the value of $heat$ reaches $1$ (line~13). We also store the best partition found so far (line~34) to make sure we do not end up with a worse solution. Finally, we return the best partition found, $P_{\mathit{bestEver}}$, as well as the assignment $\eta$ for each of its teams.
\begin{algorithm}[h]
\small
\caption{\quad SynTeam}
\label{alg:teamDistribution}
\begin{algorithmic}[1]
\Require $A$ \Comment{The list of agents}
\Require $T(|A|,m)$ \Comment{Quantitative team distribution}
\Require $P_{\mathit{best}} = \emptyset$ \Comment{Initialize best partition}
\Require $\mathit{heat=10}$ \Comment{Initial temperature for second step}
\Require $\mathit{Cooling\_rate}$ \Comment{Heating decrease}
\Ensure $(P, \bm{\eta})$ \Comment{Best partition found and best assignments}
\State $\mathit{random.shuffle(A)}$
\If {$T(|A|,m) \ne \{(0,m)\}$}
\State $\mathit{index} = 0$ \Comment{Used to iterate over the agent list}
\ForAll{$(\mathit{numberOfTeams}, \mathit{size)} \in T(|A|,m)$}
\State $team = \emptyset$
\For {$i \in (0,\dots ,\mathit{(size-1))}$}
\State $team = team \cup A[\mathit{index}]$
\State $\mathit{index}=\mathit{index} + 1$
\EndFor
\State $P_{\mathit{best}} = P_{\mathit{best}} \cup \{team\}$
\EndFor
\State $\bm{ \eta_{\mathit{best}}} = \mathit{assign\_agents}(P_{\mathit{best}})$ \Comment{see Subsection \ref{ssec:asg}}
\State $(P_{\mathit{bestEver}}, \mathit{bestValueEver}) = (P_{\mathit{best}},u(P_{\mathit{best}},\bm{ \eta_{\mathit{best}}}))$
\While{$\mathit{heat} > 1$}
\State $(K_1,K_2) = selectRandomTeams(P_{\mathit{best}})$
\State $(\eta_1,\eta_2) = \mathit{assign\_agents}(\{K_1,K_2\})$
\State $\mathit{contrValue} = u(\{K_1,K_2\},(\eta_1,\eta_2))$
\State $(P_{\mathit{bestCandidate}}, \mathit{bestCandidateValue}) = (\emptyset,0)$
\ForAll {$P_{\mathit{candidate}} \in P_m(K_1 \cup K_2) \setminus \{K_1,K_2\}$}
\State $(\eta_1,\eta_2) = assign\_agents(P_{\mathit{candidate}})$
\State $\mathit{candidateValue} = u(P_{\mathit{candidate}},(\eta_1,\eta_2))$
\If{$\mathit{candidateValue} > \mathit{bestCandidateValue}$}
\State $P_{\mathit{bestCandidate}} = P_{\mathit{candidate}}$
\State $\mathit{bestCandidateValue} = \mathit{candidateValue}$
\EndIf
\EndFor
\If{$\mathit{bestCandidateValue} > \mathit{contrValue}$}
\State $P_{\mathit{best}} = replace(\{K_1,K_2\},P_{\mathit{bestCandidate}}, P_{\mathit{best}})$
\ElsIf{$\mathbb{P}(\mathit{bestCandidateValue}, \mathit{contrValue}, heat)$ \StatexIndent[2] $\geq \mathit{random}(0, 1)$}
\State $P_{\mathit{best}} = replace(\{K_1,K_2\},P_{\mathit{bestCandidate}},P_{\mathit{best}})$
\EndIf
\State $\bm{ \eta_{\mathit{best}}} = \mathit{assign\_agents}(P_{\mathit{best}})$
\If {$\mathit{bestValueEver} < u(P_{\mathit{best}},\bm{ \eta_{\mathit{best}}})$}
\State $P_{\mathit{bestEver}} = P_{\mathit{best}}$
\EndIf
\State $heat$ = $heat-\mathit{Cooling\_rate}$
\EndWhile
\State $return(P_{\mathit{bestEver}},\mathit{assign\_agents(P_{\mathit{bestEver}}}))$
\EndIf
\end{algorithmic}
\end{algorithm}
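For illustration, the following condensed Python sketch mirrors the structure of Algorithm~\ref{alg:teamDistribution}. It builds on the earlier sketches ($T$, \texttt{proficiency}, \texttt{u\_con}, \texttt{partition\_value}), replaces the min-cost-flow assignment with a naive stand-in, and assumes a Boltzmann-style acceptance probability $e^{(\mathit{candidate}-\mathit{current})/\mathit{heat}}$, since the exact form of $\mathbb{P}$ is not fixed above.
\begin{verbatim}
import math, random
from itertools import combinations

def assign(team, task):
    """Naive stand-in for the min-cost-flow assignment: each competence
    goes to the agent whose level is closest to the requirement."""
    eta = {a: set() for a in team}
    for c, l_req, _ in task:
        eta[min(team, key=lambda a: abs(a.level(c) - l_req))].add(c)
    return eta

def synteam(agents, task, m, lam, mu, heat=10.0, cooling_rate=0.01):
    def value(P):
        return partition_value(P, task, [assign(K, task) for K in P], lam, mu)
    random.shuffle(agents)
    best, i = [], 0
    for count, size in T(len(agents), m):       # 1. first, random partition
        for _ in range(count):
            best.append(tuple(agents[i:i + size]))
            i += size
    best_ever, best_ever_val = list(best), value(best)
    while heat > 1.0:                           # 2. improve the partition
        K1, K2 = random.sample(best, 2)
        pool, k = list(K1) + list(K2), len(K1)
        current = value([K1, K2])
        cand_val, cand = max(                   # best re-split of K1 u K2
            ((value([c, rest]), (c, rest))
             for c in combinations(pool, k)
             for rest in [tuple(a for a in pool if a not in c)]
             if set(c) not in (set(K1), set(K2))),
            key=lambda x: x[0])
        if cand_val > current or \
           math.exp((cand_val - current) / heat) >= random.random():
            best = [K for K in best if K not in (K1, K2)] + list(cand)
        if value(best) > best_ever_val:
            best_ever, best_ever_val = list(best), value(best)
        heat -= cooling_rate
    return best_ever, [assign(K, task) for K in best_ever]
\end{verbatim}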
\vspace{-4mm}
\section{Experimental Results} \label{sec:results}
\subsection{Experimental Setting}
``Institut Torras i Bages'' is a state school near Barcelona. Collaborative work has been implemented there for the last 5 years in their final assignment (``Treball de S\'{\i}ntesi''), with a steady and significant increase in the scores and quality of the final product that students are asked to deliver. This assignment takes one week and is designed to check whether, and to what extent, students have achieved the objectives set in the various curricular areas. It is a piece of work that encourages teamwork and research, and tests relationships with the environment. Students work in teams and at the end of every activity present their work in front of a panel of teachers that assess the content, the presentation and the cooperation between team members. This is a creative task, although it requires a high level of competences.
\subsection{Data Collection}
In current school practice, teachers group students according to their own manual method, based on their knowledge of the students, their competences, background and social situation. This year we have used our grouping system based only on personality (SynTeam\ with $\lambda = 0$, $\mu = 1$) on two groups of students: `3r ESO A' (24 students) and `3r ESO C' (24 students). Using computers and/or mobile phones, students answered the questionnaire (described in section \ref{pers}), which allowed us to divide them into teams of size three for each class. Tutors evaluated each team in each partition by giving an integer value $v \in \{1,\dots,10\}$ expressing their expectation of the performance of that team.
Each student team was asked to undertake the set of interdisciplinary activities (``Treball de S\'{\i}ntesi'') described above. We have collected each student's final mark for ``Treball de S\'{\i}ntesi'' as well as the final marks obtained in all subjects, that is: Catalan, Spanish, English, Nature, Physics and Chemistry, Social Science, Math, Physical Education, Plastic Arts and Technology. We have used a matrix provided by the tutors to relate each subject to the different kinds of intelligence (which in education are understood as competences) needed for it. There are eight types of human intelligence \cite{gardner1987theory}, each representing different ways of processing information: Naturalist, Interpersonal, Logical/Mathematical, Visual/Spatial, Body/Kinaesthetic, Musical, Intrapersonal and Verbal/Linguistic. The matrix relating each subject to each intelligence is shown in figure \ref{matrix}.
\begin{figure}[h]
\centering
$\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 0 & 1 & 1 & 1
\\0 & 1 & 0 & 0 & 0 & 1 & 1 & 1
\\1 & 1 & 0 & 1 & 1 & 0 & 1 & 1
\\1 & 1 & 1 & 1 & 0 & 0 & 1 & 1
\\1 & 1 & 0 & 0 & 0 & 0 & 1 & 1
\\0 & 1 & 1 & 1 & 0 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 0
\\1 & 1 & 1 & 0 & 1 & 0 & 1 & 1
\end{bmatrix}$
\caption{Matrix matching intelligences with subjects (each row corresponds to a subject, each column to an intelligence)}
\label{matrix}
\end{figure}
\noindent Subjects are represented by the rows and intelligences by the columns of the matrix, in the order given above. Based on this matrix we calculate the value of each intelligence for every student by averaging all the marks obtained by her in the subjects relevant for that intelligence. For instance, for the Body/Kinaesthetic intelligence we calculate the average of the student's marks obtained in Nature, Physical Education, Plastic Arts and Technology. An alternative way to measure students' competence levels is to calculate collective assessments of each competence (as proposed in \cite{andrejczukCompetences}).
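As an illustration of this computation, the Python sketch below derives per-student intelligence values from subject marks and the matrix of figure~\ref{matrix}; marks are assumed to be on a 0--10 scale and are rescaled to $[0,1]$ so that they can serve as competence levels.
\begin{verbatim}
SUBJECTS = ["Catalan", "Spanish", "English", "Nature",
            "Physics and Chemistry", "Social Science", "Math",
            "Physical Education", "Plastic Arts", "Technology"]
INTELLIGENCES = ["Naturalist", "Interpersonal", "Logical/Mathematical",
                 "Visual/Spatial", "Body/Kinaesthetic", "Musical",
                 "Intrapersonal", "Verbal/Linguistic"]
MATRIX = [  # rows: subjects, columns: intelligences (figure above)
    [0, 1, 0, 0, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1, 1], [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0, 0, 1, 1], [1, 1, 0, 0, 0, 0, 1, 1],
    [0, 1, 1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 0, 1, 0], [1, 1, 1, 0, 1, 0, 1, 1]]

def intelligence_levels(marks):
    """marks: dict subject -> mark in [0, 10]; returns intelligence -> [0, 1]."""
    levels = {}
    for j, intel in enumerate(INTELLIGENCES):
        relevant = [marks[s] for i, s in enumerate(SUBJECTS) if MATRIX[i][j]]
        levels[intel] = sum(relevant) / len(relevant) / 10.0
    return levels
\end{verbatim}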
Finally, having the competences (intelligences), personality and actual performance of all students, we are able to calculate the synergistic value of each team. We also calculate the average of the marks obtained by the students in a team to obtain the team's performance value.
\subsection{Results}
\noindent
Given several team composition methods, we are interested in comparing them to know which method better predicts team performance. Hence, we generate several team rankings using the evaluation values obtained through the different methods. First, we generate a ranking based on actual team performance, which will be the baseline against which other rankings are compared. Second, we generate a ranking based on the expert evaluations. Finally, we generate several rankings based on the calculated synergistic values, varying the importance of congeniality and proficiency. Since ``Treball de S\'{\i}ntesi'' is a creative task, we examine the evaluation function with parameters $\mu > 0$ and $\lambda = 1-\mu$. In particular, we want to observe how the rankings change as the importance of competences increases.
Notice that the teacher and actual performance rankings may include ties, since the pool of possible marks is discrete (which is highly improbable in the case of SynTeam\ rankings). Therefore, before generating rankings based on synergistic values, we round them to two decimal places to discretise the evaluation space. An ordering with ties is also known as a \emph{partial ranking}.
Next, we compare the teacher and SynTeam\ rankings with the actual performance ranking using the standardised Kendall Tau distance. For implementation details, refer to the works by Fagin et al. \cite{Fagin:2004:CAR,fagin2006comparing}, which also provide sound mathematical principles to compare partial rankings. The results of the comparison are shown in Figure \ref{fig:kendall}. Notice that the lower the value of Kendall Tau, the more similar the rankings. We observe that the SynTeam\ ranking improves as the importance of competences increases, and it is best at predicting students' performance for $\lambda = 0.8$ and $\mu = 0.2$ (Kendall Tau equal to $0.15$). The standardised Kendall Tau distance for the teacher ranking is equal to $0.28$, which shows that SynTeam\ predicts performance better than the teachers when competences are included ($\lambda > 0.2$). We also calculate the values of Kendall Tau for random ($0.42$) and reversed ($0.9$) rankings to benchmark the teacher and SynTeam\ grouping methods. The results show that both the teachers and SynTeam\ are better at predicting students' performance than the random method.
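For illustration, a simplified sketch of such a comparison is given below; it applies a tie penalty of $0.5$ in the spirit of Fagin et al.\ and is not necessarily the exact normalisation used to produce the numbers above.
\begin{verbatim}
from itertools import combinations

def kendall_tau_distance(scores_a, scores_b, tie_penalty=0.5):
    """Normalised Kendall-Tau-style distance between two (partial) rankings,
    given as dicts mapping each team id to its score in the ranking."""
    teams = list(scores_a)
    total = 0.0
    for t1, t2 in combinations(teams, 2):
        da = scores_a[t1] - scores_a[t2]
        db = scores_b[t1] - scores_b[t2]
        if da * db < 0:                 # the pair is ordered in opposite ways
            total += 1.0
        elif (da == 0) != (db == 0):    # tied in exactly one of the rankings
            total += tie_penalty
    n = len(teams)
    return total / (n * (n - 1) / 2)
\end{verbatim}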
\begin{figure}
\includegraphics[max size={\textwidth}{10.35cm}]{attach/KendallTauComparison.png}
\caption{Comparison of Kendall-Tau distances between different methods.}\vspace{-2mm}
\label{fig:kendall}
\vspace{-2mm}
\end{figure}
\section{Discussion} \label{sec:discuss}
In this paper we introduced SynTeam, an algorithm for partitioning groups of humans into competent, gender and psychologically balanced teams.
To our knowledge, SynTeam\ is the first computational model to build synergistic teams that not only work well together, but are also competent enough to perform an assignment requiring particular expertise.
We have decided to evaluate our algorithm in the context of a classroom. Besides the obvious advantages of observing students work in person, this scenario gave us the opportunity to compare our results with real-life, currently used practice. The results show that SynTeam\ is able to predict team performance better than the experts who know the students, their social background, competences, and cognitive capabilities.
The algorithm is potentially useful for any organisation that faces the need to optimise their problem solving teams (e.g. a classroom, a company, a research unit). The algorithm composes teams in a purely automatic way without consulting experts, which is a huge advantage for environments where there is a lack of experts.
Regarding future work, we would like to investigate how to determine quality guarantees for the algorithm.
Additionally, there is a need to consider richer and more sophisticated models to capture the various factors that influence the team composition process in the real world. We will consider how our problem relates to the constrained coalition formation framework \cite{Rahwan}. This may help add constraints and preferences coming from experts that cannot be established by any algorithm, e.g., that Anna cannot be in the same team as Jos\'e because they used to have a romantic relationship.
\newpage
\bibliographystyle{plain}
\section{Introduction}
For more than three decades, understanding the mechanism of superconductivity observed at high critical temperature (HTC) in
strongly correlated cuprates~\cite{LaCuO2_Bednorz_86} has been the ``holy grail''
of many theoretical and experimental condensed matter researchers.
In this context, the observation of superconductivity in
nickelates $Ln$NiO$_2$, $Ln$=\{La, Nd and Pr\} ~\cite{li_superconductivity_2019,osada_superconducting_2020,osada_nickelate_2021} upon doping with holes is remarkable.
These superconducting nickelates are isostructural as well as isoelectronic to
HTC cuprate superconductors and thus enable the comparison of
the essential physical features that may be playing a crucial role in the mechanism driving superconductivity.
The $Ln$NiO$_2$ family of compounds is synthesized in the so-called infinite-layer structure, where NiO$_2$ and $Ln$ layers are stacked alternately~\cite{li_superconductivity_2019}.
The NiO$_2$ planes are identical to the CuO$_2$ planes in HTC cuprates which host much of the physics leading to superconductivity~\cite{keimer_quantum_2015}.
A simple valence counting of these nickelates reveals a {1+} oxidation state for Ni ({2-} for O and {3+} for $Ln$), with 9 electrons in the $3d$ manifold.
In the cuprates, the Cu$^{2+}$ oxidation state gives rise to the same $3d^9$ electronic configuration.
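For concreteness, the count follows from charge neutrality of $Ln$NiO$_2$, treating $Ln$ as {3+} and O as {2-}:
\begin{equation*}
(+3) + q_{\mathrm{Ni}} + 2\times(-2) = 0 \;\Rightarrow\; q_{\mathrm{Ni}} = +1,
\end{equation*}
so that Ni$^{1+}$, like Cu$^{2+}$, is left with nine electrons in the $3d$ shell.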
Contrary to many nickel oxides where the Ni atom sits in an octahedral cage of oxygens, in the infinite-layered structure, square planar NiO$_4$ plaques are formed without the apical oxygens.
The crystal field due to square-planar oxygen coordination stabilizes the $d_{z^2}$ orbital of the $e_g$ manifold, making its energy close to the $t_{2g}$ orbitals (the $3d$ orbitals split to 3-fold $t_{2g}$ and 2-fold $e_g$ sub-shells in an octahedral environment). With $d^9$ occupation, a half-filled $d_{x^2-y^2}$-orbital system is realized as in cuprates.
In fact, recent resonant inelastic X-ray scattering (RIXS) experiments~\cite{rossi2020orbital} as well as the {\it ab initio} correlated multiplet calculations~\cite{katukuri_electronic_2020} confirm that the Ni$^{1+}$ $d$-$d$ excitations in NdNiO$_2$\ are similar to the Cu$ ^{2+} $ ions in cuprates~\cite{moretti_sala_energy_2011}.
Several electronic structure calculations based on density-functional theory (DFT) have shown that in monovalent nickelates the Ni 3$d_{x^2-y^2}$ states sit at the Fermi energy level ~\cite{lee_infinite-layer_2004,liu_electronic_njpqm_2020,zhang_effective_prl_2020}.
These calculations further show that the nickelates are closer to the Mott--Hubbard insulating limit, with a decreased Ni $3d$--O $2p$ hybridization compared to cuprates.
The latter are considered to be charge-transfer insulators~\cite{zsa_mott_charge_transfer_1985}, where excitations across the electronic band gap involve O $2p$ to Cu $3d$ electron transfer.
Correlated wavefunction-based calculations~\cite{katukuri_electronic_2020} indeed find that the contribution from the O $2p$ hole configuration to the ground state wavefunction in NdNiO$_2$\ is four times smaller than in the cuprate analogue CaCuO$_2$.
X-ray absorption and photoemission spectroscopy experiments~\cite{hepting2020a,goodge-a} confirm the Mott behavior of nickelates.
In the cuprate charge-transfer insulators, the strong hybridization of the Cu 3$d_{x^2-y^2}$\ and O $2p$ orbitals result in O $2p$ dominated bonding and Cu 3$d_{x^2-y^2}$\ -like antibonding orbitals. As a consequence, the doped holes primarily reside on the bonding O $2p$ orbitals, making them singly occupied.
The unpaired electrons on the Cu $d_{x^2-y^2}$\ and the O $2p$ are coupled antiferromagnetically resulting in the famous Zhang-Rice (ZR) spin singlet state~\cite{zhang_effective_1988}.
In the monovalent nickelates, it is unclear where the doped holes reside. Do they form a ZR singlet as in cuprates? If the holes instead reside on the Ni site, do they form a high-spin local triplet, with two singly occupied Ni $3d$ orbitals aligned ferromagnetically, or a low-spin singlet, with either both holes residing in the Ni 3$d_{x^2-y^2}$\ orbital or two singly occupied Ni $3d$ orbitals aligned antiparallel?
While Ni L-edge XAS and RIXS measurements~\cite{rossi2020orbital} conclude that an orbitally polarized singlet state is predominant, where the doped holes reside on the Ni 3$d_{x^2-y^2}$\ orbital, O K-edge electron energy loss spectroscopy~\cite{goodge-a} reveals that some of the holes also reside on the O $2p$ orbitals.
On the other hand, calculations based on multi-band $d$-$p$ Hubbard models show that the fate of the doped holes is determined by a subtle interplay of the Ni onsite ($U_{dd}$) and Ni $3d$--O $2p$ inter-site ($U_{dp}$) Coulomb interactions and the Hund's coupling, along with the charge-transfer gap~\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021}.
However, with the lack of extensive experimental data, it is difficult to identify the appropriate interaction parameters for a model Hamiltonian study, let alone identifying the model that best describes the physics of superconducting nickelates.
Despite the efforts to discern the similarities and differences between the monovalent nickelates and superconducting cuprates, there is no clear understanding of the nature of the doped holes in NdNiO$_2$.
Particularly, there is no reliable parameter-free \textit{ab initio} analysis of the hole-doped situation.
In this work, we investigate the hole-doped ground state in NdNiO$_2$\ and draw parallels to the hole doped ground state of cuprate analogue CaCuO$_2$.
We use fully {\it ab initio} many-body wavefunction-based quantum chemistry methodology
to compute the ground state wavefunctions for the hole doped NdNiO$_2$\ and CaCuO$_2$.
We find that the doped hole in NdNiO$_2$ mainly localizes on the Ni 3$d_{x^2-y^2}$\ orbital to form a closed-shell singlet, and this singlet configuration contributes to $\sim$40\% of the wavefunction.
In contrast, in CaCuO$_2$ the Zhang-Rice singlet configurations contribute to $\sim$65\% of the wavefunction.
The persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects than in CaCuO$_2$,
and consequently the footprint of the additional hole is more three-dimensional in NdNiO$_2$.
Our analysis shows that the three-band Hubbard model most commonly used to describe the doped scenario in cuprates captures $\sim$90\% of the $d^8$ wavefunction for CaCuO$_2$, but it grossly approximates the $d^8$ wavefunction for NdNiO$_2$, as it accounts for only $\sim$60\% of the wavefunction.
In what follows, we first describe the computational methodology we employ in this work where we highlight the novel features of the methods and provide all the computational details.
We then present the results of our calculations and conclude with a discussion.
\section{The wavefunction quantum chemistry method}
{\it Ab initio} configuration interaction (CI) wavefunction-based quantum chemistry methods, particularly
the post Hartree-Fock (HF) complete active space self-consistent field (CASSCF) and the multireference perturbation theory (MRPT), are employed.
These methods not only facilitate the systematic inclusion of electron correlations, but also enable us to quantify different types of correlations, static vs.\ dynamic~\cite{helgaker_molecular_2000}.
These calculations do not use any \textit{ad hoc} parameters to incorporate electron-electron interactions, unlike other many-body methods; instead, all interactions are computed fully {\it ab initio} from the kinetic and Coulomb integrals.
Such \textit{ab initio} calculations provide techniques to systematically analyze electron correlation effects and offer insights into the electronic structure of correlated solids that go substantially beyond standard DFT approaches, e.g., see Ref.~\cite{Munoz_afm_htc_qc_prl_2000,CuO2_dd_hozoi11,book_Liviu_Fulde,Bogdanov_Ti_12,katukuri_electronic_2020} for the $ 3d $ TM oxides and Ref.~\cite{katukuri_PRB_2012,Os227_bogdanov_12,213_rixs_gretarsson_2012,Katukuri_ba214_prx_2014,Katukuri_njp_2014} for $ 5d $ compounds.
\subsection{Embedded cluster approach}
Since strong electronic correlations are short-ranged in nature \cite{fulde_new_book}, a local approach for the calculation of the $N$ and $N\pm1$-electron wavefunctions is a very attractive option for transition metal compounds.
In the embedded cluster approach, a finite set of atoms, we call quantum cluster (QC), is cut out from the infinite solid and many-body quantum chemistry methods are used to calculate the electronic structure of the atoms within the QC.
The cluster is ``embedded'' in a potential that accounts for the part of the crystal that is not treated explicitly.
In this work, we represent the embedding potential with an array of point charges (PCs) at the lattice positions that are fitted to reproduce the Madelung crystal field in the cluster region~\cite{ewald}.
Such procedure enables the use of quantum chemistry calculations for solids involving transition-metal or lanthanide ions, see Refs.~\cite{katukuri_ab_2012,katukuri_electronic_2014,babkevich_magnetic_2016}.
\subsection{Complete active space self-consistent field}
The CASSCF method~\cite{book_QC_00} is a specific type of multi-configurational (MC) self-consistent field technique in which a complete set of Slater determinants or configuration state functions (CSFs) is used in the expansion of the CI wavefunction, defined within a constrained orbital space called the active space.
In the CASSCF(n,m) approach, a subset of $n$ active electrons are
fully correlated among an active set of $m$ orbitals, leading to a highly multi-configurational (CAS) reference wavefunction.
CASSCF method with a properly chosen active space guarantees a qualitatively correct wavefunction for strongly correlated systems where static correlation~\cite{book_QC_00} effects are taken into account.
%
We consider active spaces as large as CAS(24,30) in this work.
Because conventional CASSCF implementations based on deterministic solvers of the CI space (the Hilbert space of all possible configurations within the active space) are limited to active spaces of 18 active electrons in 18 orbitals,
we use the full configuration interaction quantum Monte Carlo (FCIQMC)~\cite{booth_fermion_2009,cleland_survival_2010,guther_neci_2020} and density matrix renormalization group (DMRG) theory~\cite{chan_density_2011,sharma_spin-adapted_2012} algorithms to solve the eigenvalue problem defined within the active space.
\subsection{Multireference perturbation theory}
While the CASSCF calculation provides a qualitatively correct wavefunction, for a quantitative description of a strongly correlated system, dynamic correlations~\cite{book_QC_00} (contributions to the wavefunction from those configurations related to excitations from inactive to active and virtual, and active to virtual orbitals) are also important and must be accounted for.
A natural choice is variational multireference CI (MRCI) approach where the CI wavefunction is extended with excitations involving orbitals that are doubly occupied and empty in the reference CASSCF wavefunction \cite{book_QC_00}.
An alternative and computationally less demanding approach to take into account dynamic correlations is based on perturbation theory in second- and higher-orders.
In multireference perturbation theory (MRPT), an MC zeroth-order wavefunction is employed and excitations to the virtual space are accounted for by means of perturbation theory.
If the initial choice of the MC wavefunction is good enough to capture the large part of the correlation energy, then the perturbation corrections are typically small.
The most common variations of MRPT are the complete active space second-order perturbation theory (CASPT2)~\cite{anderson_caspt2_1992} and the n-electron valence second-order perturbation theory (NEVPT2)~\cite{angeli_nevpt2_2001} which differ in the type of zeroth-order Hamiltonian $H_0$ employed.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.450\textwidth]{fig1.pdf}
\caption{Quantum clusters of five NiO$_4$ (a) and CuO$_4$ (b) plaques considered in our calculations. The point-charge embedding is not shown.
The symmetry-adapted localized 3$d_{x^2-y^2}$\ and oxygen Zhang-Rice-like 2$p$ orbitals, the basis in which the wavefunction in Table~\ref{wfn} is presented, are shown in yellow and green. }
\label{fig1}
\end{center}
\end{figure}
\section{The {\em ab initio} model}
Before we describe the {\em ab initio} model we consider, let us summarize the most widely used model Hamiltonian for studying the nature of the doped hole in HTC cuprates, which has lately also been employed for monovalent nickelates.
It is the three-band Hubbard model~\cite{emery_3b_hubbard_prl_1987} with
three orbital degrees of freedom (bands) which include the $d$ orbital of Cu with $x^2-y^2$ symmetry and the in-plane oxygen $p$ orbitals aligned in the direction of the nearest Cu neighbours.
These belong to the $b_1$ irreducible representation (irrep) of the $D_{4h}$ point group symmetry realized at the Cu site of the CuO$_4$ plaque, the other Cu $d$ orbitals belong to $a_1$ ($d_{z^2}$), $b_2$ ($d_{xy}$) and $e$ ($d_{xz,yz}$) irreps.
The parameters in this Hamiltonian include the most relevant hopping and Coulomb interactions within this set of orbitals.
More recently, the role of the Cu $3d$ multiplet structure on the hole doped ground state is also studied~\cite{jiang_cuprates_prb_2020}.
While this model explains certain experimental observations, there is still a huge debate on what is the minimum model to describe the low-energy physics of doped cuprates.
Nevertheless, this model has also been employed to investigate the character of the doped hole in monovalent nickelates~\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021, Plienbumrung_prb_2021}.
Within the embedded cluster approach described earlier,
we consider a QC of five NiO$_4$ (CuO$_4$) plaques that includes five Ni (Cu) atoms, 16 oxygens and 8 Nd (Ca) atoms. The 10 Ni (Cu) ions neighbouring the cluster are also included in the QC; however, these are described by total ion potentials (TIPs).
The QC is embedded in point charges that reproduce the electrostatic field of the solid environment.
We used the crystal structure parameters for the thin film samples reported in Ref.~\cite{li_superconductivity_2019,hayward_synthesis_2003,kobayashi_compounds_1997,karpinski_single_1994}.
We used all-electron atomic natural orbital (ANO)-L basis sets of triple-$\zeta$ quality with additional polarization functions -- [$7s6p4d2f1g$] for Ni (Cu)~\cite{roos_new_2005}
and [$4s3p2d1f$] for oxygens~\cite{roos_main_2004}.
For the eight Nd (Ca) atoms large core effective potentials~\cite{dolg_energy-adjusted_1989,dolg_combination_1993,kaupp_pseudopotential_1991} and associated [$3s2p2d$] basis functions were used.
In the case of Nd, the $f$-electrons were incorporated in the core.
Cu$^{1+}$ (Zn$^{2+}$) TIPs with [$2s1p$] functions were used for the 10 Ni$^{1+}$ (Cu$^{2+}$) ions~\cite{ingelmann_thesis}~\footnote{Energy-consistent Pseudopotentials of Stuttgart/Cologne group, \url{http://www.tc.uni-koeln.de/cgi-bin/pp.pl?language=en,format=molpro,element=Zn,job=getecp,ecp=ECP28SDF}, [Accessed: 15-Sept-2021]}
neighbouring the QC.
\begin{table}[!t]
\caption{The different active spaces (CAS) considered in this work.
NEL is number of active electrons and NORB is the number of active orbitals.
The numbers in parentheses indicate the orbital numbers in Fig.~\ref{activespace_orb}.
}
\label{activespaces}
\begin{center}
\begin{tabular}{lcc}
\hline
\hline\\
CAS & NEL & NORB \\
\hline\\
CAS-1 & 18 & 24 (1-24) \\
CAS-2 & 24 & 30 (25-30) \\
CAS-3\footnote{The four neighbouring Ni$^{1+}$ (Cu$^{2+}$) ions in the quantum cluster are treated as closed shell Cu$^{1+}$ (Zn$^{2+}$) ions.}
& 12 & 14 (1, 6, 11, 16 and 21-30) \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
To investigate the role of different interactions in the $d^8$ ground state,
two different active spaces were considered.
In the first active space, CAS-1 in Table~\ref{activespaces}, only the orbitals in the $b_1$ and $a_1$ irreps are active.
These are $d_{x^2-y^2}$ and $d_{z^2}$-like orbitals respectively, and the corresponding double-shell $4d$ orbitals of each of the five Ni (Cu) atoms.
CAS-1 also contains the symmetry-adapted ZR-like composite O 2$p$ and the double-shell 3$p$-like orbitals, numbers 1-20 and 21-24 in Fig.~\ref{activespace_orb}.
At the mean-field HF level of theory, there are 16 electrons within this set of orbitals, resulting in CAS(16,22) active space.
In the second active space, CAS-2, orbitals of $b_2$ and the $e$ irreps from the central Ni (Cu) $d$ manifold are also included.
These are the 3$d_{xy}$, 3$d_{xz,yz}$-like orbitals and the corresponding $4d$ orbitals and the six electrons, numbers 25-30 in Fig.~\ref{activespace_orb}, resulting in a CAS(24,30) active space.
The latter active space takes into account the $d^8$ multiplet effects within the $3d$ manifold explicitly.
The two active spaces considered in this work not only describe all the physical effects included in the above mentioned three-band Hubbard model but go beyond.
More importantly, we do not have any \textit{ad hoc} input parameters in the calculation, as
all the physical interactions are implicitly included in the {\it ab initio} Hamiltonian describing the actual scenario in the real materials.
We employed {\sc OpenMolcas}~\cite{fdez_galvan_openmolcas_2019} quantum chemistry package for all the calculations.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.480\textwidth]{cas_orbitals.pdf}
\caption{Active orbital basis used in the CASSCF calculations,
plotted using Jmol~\cite{jmol}.
}
\label{activespace_orb}
\end{center}
\end{figure}
\section{Results}
\subsection{Ground state of the \boldmath${d^8}$ configuration}
Starting from the electronic structure of the parent compounds, where each Ni (Cu) is in the $d^9$ configuration, we compute the electron-removal (in the photoemission terminology) $d^8$ state to investigate the hole-doped quasiparticle state.
Since the parent compounds in the $d^9$ configuration have strong nearest-neighbour antiferromagnetic (AF) correlations~\cite{katukuri_electronic_2020}, the total spin of our QC (with five Ni (Cu) sites) in the undoped AF ground state is $S_{QC}=3/2$.
By introducing an additional hole (or removing an electron) from the central Ni (Cu) in our QC, the $S_{QC}$ values range from 0 to 3.
To simplify the analysis of the distribution of the additional hole, we keep the spins on the four neighbouring Ni (Cu) sites aligned parallel in all our calculations, and from now on we only specify the spin multiplicity of the central Ni (Cu)O$_4$ plaque.
The multiplet structure of the $d^8$ configuration thus consists of only spin singlet and triplet states, spanned by the four irreps of the $3d$ manifold.
The active spaces we consider in this work allow us to compute accurately the excitations only within the $b_1$ and $a_1$ irreps
\footnote{For an accurate quantitative description of the multiplet structure spanned by the other two irreps, $b_2$ and $e$, one would need to extend the active space and include the $3d$ and $4d$ manifolds of the four neighbouring Ni (Cu) atoms as well as the O 2$p$ orbitals of the same symmetry, resulting in a gigantic 68 electrons in 74 orbitals active space.}
and we address the full multiplet structure elsewhere.
When computing the local excitations, a local singlet state on the central Ni (Cu) corresponds to a total spin on the cluster $S_{QC}=2$.
However, a local triplet state, with the central spin aligned parallel to the neighbouring spins, corresponds to $S_{QC}=3$ and does not satisfy the AF correlations.
To avoid the spin coupling between the central $d^8$ Ni (Cu) with the neighbouring $d^9$ Ni (Cu) ions, we replace the latter with closed shell, Cu (Zn) $d^{10}$, ions and freeze them at the mean-field HF level.
Such a simplification is justified, as the local excitation energy we compute is an order of magnitude larger than the exchange interaction~\cite{katukuri_electronic_2020}.
%
In Table \ref{d8-excit}, the relative energies of the lowest local spin singlets $^1\!A_{1g}$, $^1\!B_{1g}$ and spin triplet $^3\!B_{1g}$ states are shown.
These are obtained from CASSCF + CASPT2 calculations with CAS(12,14) active space (CAS-3 in Table~\ref{activespaces}) which includes the 3$d$ and $4d$ orbitals of the central Ni (Cu) ion and the in-plane O 2$p $ and $3p$ orbitals in the $b_1$ irrep.
In the CASPT2 calculation, the remaining doubly occupied O $2p$, the central Ni (Cu) $3s$ and $3p$ orbitals and all the unoccupied virtual orbitals are correlated.
\begin{table}[!t]
\caption{Relative energies (in eV) of the electron removal $d^8$ states in NdNiO$_2$\ and the iso-structural CaCuO$_2$\ obtained from CAS(12,14)SCF and CASSCF+CASPT2 calculations.
}
\label{d8-excit}
\begin{center}
\begin{tabular}{lccccl}
\hline
\hline
State & \multicolumn{2}{c}{NdNiO$ _{2} $} & \multicolumn{2}{c}{CaCuO$ _{2} $} \\
& CASSCF & +CASPT2 & CASSCF & +CASPT2 \\
\hline
$^1\!A_{1g}$ & 0.00 & 0.00 & 0.00 & 0.00 \\
$^3\!B_{1g}$ & 1.35 & 1.88 & 2.26 & 2.50 \\
$^1\!B_{1g}$ & 2.98 & 3.24 & 3.21 & 3.33 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
It can be seen that the ground state is of $^1\!A_{1g}$ symmetry and that the lowest triplet excited state, of $^3\!B_{1g}$ symmetry, lies around 1.88 eV and 2.5 eV above it for NdNiO$_2$\ and CaCuO$_2$, respectively.
The AF magnetic exchange in these two compounds is 76 meV and 208 meV, respectively~\cite{katukuri_electronic_2020}, and thus we expect that our simplification of making the neighbouring $d^9$ ions closed shell does not over- or underestimate the excitation energies.
At the CASSCF level, the $^1\!A_{1g}$-$^3\!B_{1g}$ excitation energy is 1.35 eV in NdNiO$_2$\ while it is 2.26 eV in CaCuO$_2$.
Interestingly, with the inclusion of dynamical correlations via the CASPT2 calculation, the $^1\!A_{1g}$ state in NdNiO$_2$\ is stabilized by 0.53 eV relative to the $^3\!B_{1g}$ state.
However, in CaCuO$_2$, the $^1\!A_{1g}$ state is stabilized by only 0.24 eV.
This indicates that the dynamical correlations are more active in the $^1\!A_{1g}$ state in NdNiO$_2$\ than in CaCuO$_2$.
We note that the hole excitations within the $3d$ orbitals in the irreps $b_2$ and $e$, calculated with this limited active space (CAS-3), result in energies lower than those of the $^3\!B_{1g}$ and $^1\!B_{1g}$ states.
However, an accurate description of those states requires an enlarged active space that includes not only the same-symmetry oxygen 2$p$ and $3p$ orbitals from the central NiO$_4$ plaquette but also the 3$d$ and 4$d$ manifolds of the neighbouring Ni (Cu) ions, making the active space prohibitively large.
Here, we concentrate on the analysis of the $^1\!A_{1g}$ ground state and address the complete $d^8$ multiplet spectrum elsewhere.
\begin{table}[!b]
\caption{
Ni and Cu $3d^8$ $^1\!A_{1g}$ ground state wavefunction: Weights (\%) of the leading configurations
in the wavefunction computed for NdNiO$_2$\ and CaCuO$_2$\ with active spaces CAS-1 and CAS-2 (see Table~\ref{activespaces}).
$d_{b_1}$ and $p_{b_1}$ are the localized Ni (Cu) $3d_{x^2-y^2}$ and the oxygen $2p$ ZR-like orbitals (see Fig.~\ref{fig1}) in the $b_1$ irrep respectively.
Arrows in the superscript indicate the spin of the electrons and a $\square$ indicates two holes.
}
\begin{center}
\begin{tabular}{l llll}
\hline
\hline\\[-0.30cm]
& \multicolumn{2}{c}{NdNiO$ _{2} $} & \multicolumn{2}{c}{CaCuO$ _{2} $} \\
$^1\!A_{1g}$ & CAS-1 & CAS-2 & CAS-1 & CAS-2 \\
\hline
\\[-0.20cm]
$|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$ & 51.87 & 42.40 & 4.20 & 20.25 \\[0.3cm]
$|d_{b_{1}}^{\uparrow}p_{b_{1}}^{\downarrow} \rangle$ & 8.27 & 10.48 & 42.58 & 38.52 \\[0.3cm]
$|d_{b_{1}}^{\downarrow}p_{b_{1}}^{\uparrow} \rangle$ & 6.07 & 7.60 & 25.00 & 25.60 \\[0.3cm]
$|d_{b_{1}}^{\uparrow \downarrow}p_{b_{1}}^\square \rangle$ & 0.09 & 0.23 & 21.56 & 5.14 \\[0.3cm]
\hline
\hline
\end{tabular}
\end{center}
\label{wfn}
\end{table}
\subsection{Wavefunction of the electron-removal \boldmath$d^8$ ground state}
The $^1\!A_{1g}$ ground-state wavefunction in terms of
the weights of its four leading configurations (leading in the case of CaCuO$_2$) is shown in Table~\ref{wfn}.
The wavefunctions corresponding to the CASSCF calculations with the active spaces CAS-1 and CAS-2 are shown.
The basis in which the wavefunctions are represented is constructed in two steps:
1) A set of natural orbitals are generated by diagonalising the CASSCF one-body reduced density matrix.
2) To obtain an atomic-like symmetry-adapted localized orbital basis, we localize the Ni (Cu) $3d$ and O $2p$ orbitals on the central NiO$_4$ (CuO$_4$) plaquette through a unitary transformation.
Such partial localization within the active space keeps the total energy unchanged.
The resulting 3$d_{x^2-y^2}$\ and the ZR-like oxygen 2$p$ orbital basis is shown in Fig.~\ref{fig1}.
An FCIQMC calculation was performed in this partially localized basis to obtain the wavefunction as a linear combination of Slater determinants.
Ten million walkers were used to converge the FCIQMC energy to within 0.1 mHartree.
From Table~\ref{wfn} it can be seen that the electron-removal $d^8$ ground state wavefunction for the two compounds is mostly described by the four configurations spanned by the localized 3$d_{x^2-y^2}$\ ($d_{b_1}$) and the symmetry-adapted ZR-like oxygen 2$p$ ($p_{b_1}$) orbitals that are shown in Fig.~\ref{fig1}.
Let us first discuss the wavefunction obtained from the CAS-1 active space.
For NdNiO$_2$, the dominant configuration involves two holes on 3$d_{x^2-y^2}$, $|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$, and contributes $\sim$52\% of the wavefunction,
while the configurations that make up the ZR singlet, $|d_{b_{1}}^{\uparrow}p_{b_{1}}^{\downarrow} \rangle$ and $|d_{b_{1}}^{\downarrow}p_{b_{1}}^{\uparrow} \rangle$, contribute only $\sim$14\%.
On the other hand, the $d^8$ $^1\!A_{1g}$ state in CaCuO$_2$\ is predominantly the ZR singlet with $\sim$68\% weight.
In the CASSCF calculation with CAS-2 active space, where all the electrons in the 3$d$ manifold are explicitly correlated,
we find that the character of the wavefunction remains unchanged in NdNiO$_2$, but the weight of the dominant configuration is slightly reduced.
On the other hand, in CaCuO$_2$, while the contribution from the ZR singlet is slightly reduced, the contribution from $|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$ configuration is dramatically increased at the expense of the weight on
$|d_{b_{1}}^{\uparrow \downarrow}p_{b_{1}}^\square \rangle$.
This demonstrates that the additional freedom provided by the $d_{xy}$ and $d_{xz/yz}$ orbitals for the electron correlation helps to accommodate the additional hole on the Cu ion.
We note that the four configurations shown in Table~\ref{wfn} encompass almost 90\% of the $d^8$ wavefunction (with CAS-2 active space) in CaCuO$_2$.
Thus, the use of a three-band Hubbard model~\cite{emery_3b_hubbard_prl_1987,jiang_cuprates_prb_2020} to investigate the role of doped holes in CuO$_2$ planes is a reasonable choice.
However, for NdNiO$_2$\ these configurations cover only 60\% of the $d^8$ wavefunction, hence a three-band Hubbard model is too simple to describe the hole-doped monovalent nickelates.
A more intuitive and visual understanding of the distribution of the additional hole can be obtained by plotting the difference of the $d^8$ and the $d^9$ ground state electron densities as shown in Fig.~\ref{fig2}.
Electron density of a multi-configurational state can be computed as a sum of densities arising from the natural orbitals and corresponding (well-defined) occupation numbers.
We used the Multiwfn program~\cite{Multiwfn} to perform this summation.
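In the notation used here (a standard relation for any multi-configurational state, not specific to our implementation), with $n_i$ the natural-orbital occupation numbers and $\phi_i$ the natural orbitals,
\begin{equation*}
\rho(\mathbf{r}) \;=\; \sum_i n_i\,\bigl|\phi_i(\mathbf{r})\bigr|^{2},
\qquad 0\le n_i\le 2,\qquad \sum_i n_i = N_{\rm el},
\end{equation*}
and the density difference discussed below is $\Delta\rho(\mathbf{r}) = \rho(d^8) - \rho(d^9)$.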
The negative values of the heat map of the electron density difference (blue color) and the positive values (in red) represent respectively the extra hole density and additional electron density in $d^8$ state compared to the $d^9$ state.
From Fig.~\ref{fig2}(a)/(c) that show the density difference in the NiO$_2$/CuO$_2$ planes (xy-plane), we conclude the following:
\begin{enumerate}
\item The hole density is concentrated on the Ni site (darker blue) with $b_1$ ($d_{x^2-y^2}$) symmetry in NdNiO$_2$\ whereas
it is distributed evenly on the four oxygen and the central Cu ions with $b_1$ symmetry in CaCuO$_2$, a result consistent with the wavefunction reported in Table~\ref{wfn}.
\item In NdNiO$_2$, the hole density around the Ni ion is spread out over a larger radius, whereas in CaCuO$_2$\ it is more compact around the Cu ion.
This demonstrates that the $3d$ manifold of Cu is much more localized than that of Ni, and therefore the on-site Coulomb repulsion $U$ is comparatively smaller for Ni.
\item The darker red regions around the Ni site in NdNiO$_2$\ indicate stronger $d^8$ multiplet effects that result in rearrangement of electron density compared to $d^9$ configuration.
\item In CaCuO$_2$, we see darker red regions on the oxygen ions instead, which shows that the significant presence of a hole on these ions results in noticeable electron redistribution.
\end{enumerate}
The electron density difference in the xz-plane (which is perpendicular to the NiO$_2$/CuO$_2$ planes) is quite different in the two compounds.
The hole density in NdNiO$_2$\ is spread out up to 2\,\AA\ in the $z$-direction, unlike in CaCuO$_2$, where it is confined to within 1\,\AA .
We attribute this to the strong radial-type correlations in NdNiO$_2$.
With the creation of an additional hole on the 3$d_{x^2-y^2}$\ orbital, the electron density, which is spread out in the $d_{z^2}$\ symmetry via the dynamical correlation between the 3$d_{z^2}$\ and 4$d_{z^2}$\ orbitals~\cite{katukuri_electronic_2020}, becomes more compact in the $d_{z^2}$\ symmetry through reverse breathing.
Thus, we see a strong red region with 3$d_{z^2}$\ profile and a blue region with expanded 4$d_{z^2}$\ profile.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.48\textwidth]{Density_difference_2.pdf}
\caption{Electron density difference of the $d^8$ and $d^9$ ground states ($\rho(d^8) - \rho(d^9)$) for NdNiO$_2$\ in the xy-plane (a) and xz-plane (b), and for CaCuO$_2$\ xy-plane (c) and xz-plane (d).
The coordinates of the central Ni (Cu) $d^8$ ion are set to (0,0). The scale of the heat-bar is logarithmic between $\pm$0.001 to $\pm$1.0 and is linear between 0 and $\pm$0.001.
(e) Electron density difference integrated over a sphere centered on the central Ni (Cu) atom (full curves) as a function of the radius $r$ shown in (a), together with
the result of an additional radial integration (dashed curves) as a function of the upper integration limit.
\label{fig2}
\end{center}
\end{figure}
To obtain a quantitative understanding of the charge density differences for the two compounds, in Fig.~\ref{fig2}(e) we plot the electron density difference integrated over a sphere centered on the central Ni(Cu) atom as a function of the radius $r$ shown in Fig.~\ref{fig2}(a).
Four features, which we marked A-D, clearly demonstrate the contrast in the charge density differences in the two compounds.
From the feature A at $r$ close to Ni (Cu), it is evident that
the extent of hole density around Ni in NdNiO$_2$\ is larger than around Cu in CaCuO$_2$.
The features B and C, which lie on either side of the position of the oxygen ions, show that the hole density is significantly larger on the oxygen atoms in CaCuO$_2$\ than in NdNiO$_2$.
It is interesting to note that we see a jump (feature D) in the electron density above zero at $r$ close to the position of Nd ions in NdNiO$_2$, while in CaCuO$_2$\ the curve is flat in the region of Ca ions.
This shows that there is some electron redistribution happening around the Nd ions.
The hole density within a solid sphere (SS) around the central Ni (Cu) atom obtained by additional integration over the radius $r$ is also shown in Fig.~\ref{fig2}(e) with dashed curves.
It can be seen that the total hole density within the SS of $r\sim$4\,\AA, where the neighboring Ni (Cu) ions are located, is only $\sim$0.5 in both the compounds, with slight differences related to the feature D.
This is due to the screening of the hole by the electron density pulled in from the farther surroundings.
As one would expect, for a SS with $r$ of the size of the cluster, the total hole density is one in both compounds.
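For clarity, the quantities plotted in Fig.~\ref{fig2}(e) can be written (in our notation, as a sketch of the construction rather than the exact post-processing script) as the spherical-shell integral and its radial accumulation,
\begin{equation*}
f(r) \;=\; r^{2}\!\oint \Delta\rho(r,\Omega)\,\mathrm{d}\Omega ,
\qquad
F(R) \;=\; \int_{0}^{R} f(r)\,\mathrm{d}r ,
\end{equation*}
with $\Delta\rho$ as defined above and the origin placed at the central Ni (Cu) site; the full curves correspond to $f(r)$ and the dashed curves to $F(R)$.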
\begin{figure}[!b]
\begin{center}
\includegraphics[width=0.480\textwidth]{entropy_4.pdf}
\caption{Single orbital entanglement entropy, $s(1)_i$, (dots) and mutual orbital entanglement entropy, $I_{i,j}$, (colored lines) of the orbital basis used to expand the $d^8$ wavefunction in Table~\ref{wfn} for NdNiO$_2$\ (a) and CaCuO$_2$\ (b).
Only the entanglement entropies of orbitals centred on the central NiO$_4$/CuO$_4$ plaquette are shown.
The irreps to which the orbitals belong are also shown.
The green and magenta colors represent the two different sets of orbitals, occupied (at the HF level) and the corresponding double-shell (virtual) orbitals, respectively.
The thickness of the black, blue and green lines denotes the strength of $I_{i,j}$, and the size of the dots is proportional to $s(1)_i$.
}
\label{entanglement}
\end{center}
\end{figure}
\subsection{Orbital entanglement entropy }
To analyse the different types of correlations active in the two compounds in the $d^8$ configuration, we compute the entanglement entropy~\cite{boguslawski_entanglement_2012,boguslawski_orbital_2013,boguslawski_orbital_2015}.
While the single orbital entropy, $s(1)_i$, quantifies the correlation between the $i$-th orbital and the remaining set of orbitals,
the mutual information, $I_{i,j}$, is the two-orbital entropy between orbitals $i$ and $j$~\cite{legeza_optimizing_2003,rissler_measuring_2006} and illustrates the correlation of an orbital with another within the embedded environment comprising all other orbitals.
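Following the cited works (standard definitions, reproduced here for completeness), $s(1)_i$ is computed from the eigenvalues $w_{\alpha,i}$ of the one-orbital reduced density matrix of orbital $i$ (its four local states being empty, spin-up, spin-down and doubly occupied), and the mutual information follows from the one- and two-orbital entropies:
\begin{equation*}
s(1)_i \;=\; -\sum_{\alpha=1}^{4} w_{\alpha,i}\,\ln w_{\alpha,i},
\qquad
I_{i,j} \;=\; \tfrac12\bigl[\,s(1)_i + s(1)_j - s(2)_{i,j}\,\bigr]\bigl(1-\delta_{ij}\bigr).
\end{equation*}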
We used {\sc QCMaquis}~\cite{keller_an_2015} embedded in the {\sc OpenMolcas}~\cite{fdez_galvan_openmolcas_2019} package to compute the entropies.
In Figure~\ref{entanglement}, $s(1)_i$ and $I_{i,j}$ extracted from CASSCF calculations with CAS-2 active space for NdNiO$_2$\ and CaCuO$_2$\ are shown.
The orbital basis for which the entropy is computed is the same as the basis in which the wavefunction presented in Table~\ref{wfn} is expanded.
As mentioned previously, this orbital basis is obtained from partial localization of the natural orbitals in a way that only the 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals are localized.
Since a large part of electron correlation is compressed in natural orbitals, we see a tiny $s(1)_i$ for all orbitals except for the localized 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals where it is significant. This is consistent with the wavefunction in Table~\ref{wfn}.
The mutual orbital entanglement between pairs of orbitals shows strong entanglement between the 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals for both NdNiO$_2$\ and CaCuO$_2$, a consequence of the dominant weight of the configurations spanned by these two orbitals in the wavefunction.
The next strongest entanglement is between the Ni/Cu 3$d$ valence orbitals and their double-shell $4d$ counterparts.
Such strong entanglement, also observed for the undoped $d^9$ ground state~\cite{katukuri_electronic_2020}, is a result of dynamical radial correlation~\cite{helgaker_molecular_2000} and orbital breathing effects~\cite{gunnarsson_density-functional_1989,bogdanov_natphys_2021}.
Interestingly, the entanglement entropy in the range 0.001-0.01 (green lines) is quite similar in the two compounds, although one sees more entanglement connections in NdNiO$_2$.
A comparison of the entropy information between NdNiO$_2$\ and CaCuO$_2$\ reveals that the Ni 3$d$ and 4$d$-like orbitals contribute rather significantly (thicker blue lines) to the total entropy, in contrast to the Cu 3$d$ and 4$d$-like orbitals, something that is also seen in the undoped compounds~\cite{katukuri_electronic_2020}.
\section{Conclusions and discussion}
In conclusion,
our {\it ab initio} many-body quantum chemistry calculations for the electron removal ($d^8$) states find a low-spin closed-shell singlet ground state in NdNiO$_2$\ and that the additional hole is mainly localized on the Ni 3$d_{x^2-y^2}$\ orbital, unlike in CaCuO$_2$, where a Zhang-Rice singlet is predominant.
We emphasise that the $d^8$ wavefunction is highly multi-configurational, with the dominant closed-shell singlet configuration contributing only $\sim$42\%.
This result is consistent with the experimental evidence~\cite{rossi2020orbital,goodge-a} of an orbitally polarized singlet state as well as the presence of holes on the O $2p$ orbitals.
Importantly, the persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects in NdNiO$_2$, and consequently the footprint of the additional hole is more three-dimensional.
In CaCuO$_2$, we find that the electron correlations within the $d_{xy}$ and $d_{xz/yz}$ orbitals change the hole-doped wavefunction significantly. Specifically, the double hole occupation of Cu $d_{x^2-y^2}$\ is significantly increased, and this can influence the transport properties.
It was recently proposed that nickelates could be a legitimate realization of the single-band Hubbard model~\cite{kitatani_nickelate_2020}.
However, our analysis shows that even the three-band Hubbard model~\cite{eskes1991a}, which successfully describes the hole-doped scenario in cuprates, falls short of describing hole-doped nickelates, and additional orbital degrees of freedom are indeed necessary to capture the strong multiplet effects we find.
Much has been discussed about the importance of rare-earth atoms for the electronic structure of superconducting nickelates; see, e.g.,~\cite{nomura2021superconductivity}.
The three-dimensional nature of the hole density we find in NdNiO$_2$\ might also be hinting at the importance of out-of-plane Nd ions.
It would be interesting to compare the hole density of NdNiO$_2$\ with other iso-structural nickelates such as LaNiO$_2$\ where La $5d$ states are far from the Fermi energy.
Since the infinite-layered monovalent nickelates are thin films and often grown on substrates, one could ask the question of how the electronic structure of the undoped and doped compounds changes with varying Ni-O bond length. Would this influence the role of electronic correlations in $d^9$ nickelates? We will address these in the near future.
\section*{Conflict of Interest Statement}
The authors declare no conflict of interest.
\section*{Author Contributions}
VMK and AA designed the project. VMK and NAB performed the calculations. All the authors analysed the data. VMK wrote the paper with inputs from NAB and AA.
\section*{Funding}
We gratefully acknowledge the Max Planck Society for financial support.
\section*{Acknowledgments}
VMK would like to acknowledge Giovanni Li Manni and Oskar Weser for fruitful discussions.
\section{Introduction}
For all terms related to digraphs which are not defined below, see Bang-Jensen and Gutin \cite{Bang_Jensen_Gutin}.
In this paper,
by a {\it directed graph} (or simply {\it digraph)}
$D$ we mean a pair $(V,A)$, where
$V=V(D)$ is the set of vertices and $A=A(D)\subseteq V\times V$ is the set of arcs.
For an arc $(u,v)$, the first vertex $u$ is called its {\it tail} and the second
vertex $v$ is called its {\it head}; we also denote such an arc by $u\to v$.
If $(u,v)$ is an arc, we call $v$ an {\it out-neighbor} of $u$, and $u$ an {\it in-neighbor} of $v$.
The number of out-neighbors of $u$ is called the {\it out-degree} of $u$, and the number of in-neighbors of $u$ --- the {\it in-degree} of $u$.
For an integer $k\ge 2$, a {\it walk} $W$ {\it from} $x_1$ {\it to} $x_k$ in $D$ is an alternating sequence
$W = x_1 a_1 x_2 a_2 x_3\dots x_{k-1}a_{k-1}x_k$ of vertices $x_i\in V$ and arcs $a_j\in A$
such that the tail of $a_i$ is $x_i$ and the head of $a_i$ is $x_{i+1}$ for every
$i$, $1\le i\le k-1$.
Whenever the labels of the arcs of a walk are not important, we use the notation
$x_1\to x_2 \to \dotsb \to x_k$ for the walk, and say that we have an $x_1x_k$-walk.
In a digraph $D$, a vertex $y$ is {\it reachable} from a vertex $x$ if $D$ has a walk from $x$ to $y$. In
particular, a vertex is reachable from itself. A digraph $D$ is {\it strongly connected}
(or, just {\it strong}) if, for every pair $x,y$ of distinct vertices in $D$,
$y$ is reachable from $x$ and $x$ is reachable from $y$.
A {\it strong component} of a digraph $D$ is a maximal induced subdigraph of $D$ that is strong.
If $x$ and $y$ are vertices of a digraph $D$, then the
{\it distance from x to y} in $D$, denoted $\dist(x,y)$, is the minimum length of
an $xy$-walk, if $y$ is reachable from $x$, and otherwise $\dist(x,y) = \infty$.
The {\it distance from a set $X$ to a set $Y$} of vertices in $D$ is
\[
\dist(X,Y) = \max
\{
\dist(x,y) \colon x\in X,y\in Y
\}.
\]
The {\it diameter} of $D$ is $\diam(D) = \dist(V,V)$.
Let $p$ be a prime, $e$ a positive integer, and $q = p^e$. Let
$\fq$ denote the finite field of $q$ elements, and $\fq^*=\fq\setminus\{0\}$.
Let $\fq^2$
denote the Cartesian product $\fq \times \fq$, and let
$f\colon\fq^2\to\fq$ be an arbitrary function. We define a digraph $D = D(q;f)$ as follows:
$V(D)=\fq^{2}$, and
there is an arc from a vertex ${\bf x} = (x_1,x_2)$ to a vertex
${\bf y} = (y_1,y_{2})$ if and only if
\[
x_2 + y_2 = f(x_1,y_1).
\]
If $({\bf x},{\bf y})$ is an arc in $D$, then ${\bf y}$ is uniquely determined by ${\bf x}$ and $y_1$, and ${\bf x}$ is uniquely determined by ${\bf y}$ and $x_1$.
Hence, each vertex of $D$ has both its in-degree and out-degree equal to $q$.
By Lagrange's interpolation,
$f$ can be uniquely represented by
a bivariate polynomial of degree at most $q-1$ in each of the variables. If ${f}(x,y) = x^m y^n$, $1\le m,n\le q-1$, we call $D$ a {\it monomial} digraph, and denote it also by $D(q;m,n)$. The digraph $D(3; 1,2)$ is depicted in Fig.\ $1.1$. It is clear that ${\bf x}\to {\bf y}$ in $D(q;m,n)$ if and only if ${\bf y}\to {\bf x}$ in $D(q;n,m)$. Hence, one digraph is obtained from the other by reversing the direction of every arc. In general, these digraphs are not isomorphic, but if one of them is strong then so is the other, and their diameters are equal. As this paper is concerned only with the diameter of $D(q;m,n)$, it is sufficient to assume that $1\le m\le n\le q-1$.
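For readers who wish to experiment with small cases, the following brute-force sketch (not part of our proofs, and assuming $q=p$ is prime so that arithmetic modulo $q$ realizes $\fq$) builds $D(q;m,n)$ and computes its diameter by breadth-first search from every vertex. It reproduces, for instance, $\diam(D(3;1,2))=3$ for the digraph of Fig.\ $1.1$, and $\diam(D(5;4,4))=2\cdot 5-1=9$, in agreement with Theorem~\ref{thm_diam_p}, part (\ref{diam_bound_p}).
\begin{verbatim}
from itertools import product
from collections import deque

def diameter_monomial_digraph(q, m, n):
    # Brute-force diameter of D(q; m, n); valid for prime q only,
    # since arithmetic is done modulo q.  Returns None if not strong.
    vertices = list(product(range(q), repeat=2))
    # arc (x1,x2) -> (y1,y2)  iff  x2 + y2 = x1^m * y1^n  (mod q)
    out = {(x1, x2): [(y1, (pow(x1, m, q)*pow(y1, n, q) - x2) % q)
                      for y1 in range(q)]
           for (x1, x2) in vertices}
    diam = 0
    for s in vertices:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in out[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        if len(dist) < len(vertices):
            return None              # some vertex is unreachable
        diam = max(diam, max(dist.values()))
    return diam

print(diameter_monomial_digraph(3, 1, 2))   # 3
print(diameter_monomial_digraph(5, 4, 4))   # 9 = 2*5 - 1
\end{verbatim}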
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\sffamily\scriptsize\bfseries}
\tikzset{edge/.style = {->,> = triangle 45}}
\node[vertex] (a) at (0,0) {$(0,2)$};
\node[vertex] (b) at (4,0) {$(1,1)$};
\node[vertex] (c) at (8,0) {$(1,0)$};
\node[vertex] (d) at (0,-4) {$(0,1)$};
\node[vertex] (e) at (4,-4) {$(2,2)$};
\node[vertex] (f) at (8,-4) {$(2,0)$};
\node[vertex] (g) at (4,-1.5) {$(2,1)$};
\node[vertex] (h) at (4,-2.5) {$(1,2)$};
\node[vertex] (i) at (8,-2) {$(0,0)$};
\draw[edge] (a) to (b);
\draw[edge] (b) to (a);
\draw[edge] (a) to (d);
\draw[edge] (d) to (a);
\draw[edge] (b) to (c);
\draw[edge] (c) to (b);
\draw[edge] (g) to (b);
\draw[edge] (h) to (e);
\draw[edge] (c) to (b);
\draw[edge] (d) to (e);
\draw[edge] (e) to (d);
\draw[edge] (e) to (f);
\draw[edge] (f) to (e);
\draw[edge] (c) to (i);
\draw[edge] (i) to (c);
\draw[edge] (f) to (i);
\draw[edge] (i) to (f);
\draw[edge] (g) to (a);
\draw[edge] (a) to (g);
\draw[edge] (c) to (g);
\draw[edge] (d) to (h);
\draw[edge] (h) to (d);
\draw[edge] (f) to (h);
\path
(g) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=330,in=300,looseness=8] node {} (g);
\path
(h) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=160,in=130,looseness=8] node {} (h);
\path
(i) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=210,in=170,looseness=8] node {} (i);
\end{tikzpicture}
\caption{The digraph $D(3;1,2)$: $x_2+y_2 = x_1y_1^2$.}
\end{center}
\end{figure}
The digraphs $D(q; {f})$
and $D(q;m,n)$ are directed analogues
of
some algebraically defined graphs, which have been studied extensively
and have many applications. See
Lazebnik and Woldar \cite{LazWol01} and references therein; for some
subsequent work see Viglione \cite{Viglione_thesis},
Lazebnik and Mubayi \cite{Lazebnik_Mubayi},
Lazebnik and Viglione \cite{Lazebnik_Viglione},
Lazebnik and Verstra\"ete \cite{Lazebnik_Verstraete},
Lazebnik and Thomason \cite{Lazebnik_Thomason},
Dmytrenko, Lazebnik and Viglione \cite{DLV05},
Dmytrenko, Lazebnik and Williford \cite{DLW07},
Ustimenko \cite{Ust07}, Viglione \cite{VigDiam08},
Terlep and Williford \cite{TerWil12}, Kronenthal \cite{Kron12},
Cioab\u{a}, Lazebnik and Li \cite{CLL14},
Kodess \cite{Kod14},
and Kodess and Lazebnik \cite{Kod_Laz_15}.
The questions of strong connectivity of digraphs $D(q;{f})$ and $D(q; m,n)$ and descriptions of their components were completely answered in
\cite{Kod_Laz_15}. Determining the diameter of a component of $D(q;{f})$ for an arbitrary prime power $q$ and an arbitrary $f$ seems to be out of reach, and most of our results below are concerned with some instances of this problem for strong monomial digraphs. The following theorems are the main results of this paper.
\begin{theorem}
\label{main}
Let $p$ be a prime, $e,m,n$ be positive integers, $q=p^e$, $1\le m\le n\le q-1$, and $D_q= D(q;m,n)$. Then the following statements hold.
\begin{enumerate}
\item\label{gen_lower_bound} If $D_q$ is strong, then $\diam (D_q)\ge 3$.
\item\label{gen_upper_bound}
If $D_q$ is strong, then
\begin{itemize}
\item for $e = 2$, $\diam(D_q)\le 96\sqrt{n+1}+1$;
\item for $e \ge 3$, $\diam(D_q)\le 60\sqrt{n+1}+1$.
\end{itemize}
\item\label{diam_le_4} If $\gcd(m,q-1)=1$ or $\gcd(n,q-1)=1$, then $\diam(D_q)\le 4$.
If $\gcd(m,q-1) = \gcd(n,q-1) = 1$, then $\diam(D_q) = 3$.
\item \label{main3} If $p$ does not divide $n$, and $q > (n^2-n+1)^2$,
then $\diam(D(q;1,n)) = 3$.
\item If $D_q$ is strong, then:
\begin{enumerate}
\item[(a)\label{bound_q_le25}]
If $q > n^2$, then $\diam(D_q) \le 49$.
\item[(b)\label{bound_q_m4n4}]
If $q > (m-1)^4$, then $\diam(D_q)\le 13$.
\item[(c)]\label{bound_q_le6} If $q > (n-1)^4$, then $\diam(D(q;n,n))\le 9$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
The converse to either of the statements in part (\ref{diam_le_4}) of Theorem \ref{main} is not true. Consider, for instance,
$D(9;2,2)$ of diameter $4$, or $D(29;7,12)$ of diameter $3$.
\end{remark}
\begin{remark}
The result of part \ref{bound_q_le25}a can hold for some $q\le m^2$.
\end{remark}
For prime $q$, some of the results of Theorem \ref{main} can be strengthened.
\begin{theorem}
\label{thm_diam_p}
Let $p$ be a prime, $1\le m \le n\le p-1$, and $D_p= D(p;m,n)$. Then $D_p$ is strong and the following statements hold.
\begin{enumerate}
\item\label{diam_bound_p}
$\diam (D_p) \le 2p-1$ with equality if
and only if
$m=n=p-1$.
\item\label{bound_p_sqrt60}
If $(m,n)\not\in\{((p-1)/2,(p-1)/2),((p-1)/2,p-1), (p-1,p-1)\}$,
then $\diam(D_p)\le 120\sqrt{m}+1$.
\item\label{bound_p_le10}
If $p > (m-1)^3$,
then $\diam(D_p) \le 19$.
\end{enumerate}
\end{theorem}
The paper is organized as follows. In section \ref{preres} we present all results which are needed for our proofs of Theorems \ref{main} and \ref{thm_diam_p} in sections \ref{proofs1} and \ref{proofs2}, respectively. Section \ref{open} contains concluding remarks and open problems.
\section{Preliminary results.}\label{preres}
We begin with a general result that gives necessary and sufficient conditions for a digraph $D(q;m,n)$ to be strong.
\begin{theorem} {\rm [\cite{Kod_Laz_15}, Theorem 2]}
\label{thm_conn}
$D(q;m,n)$ is strong if and only if $\gcd(q-1,m,n)$ is not divisible by any
$q_d = (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.
In particular, $D(p;m,n)$ is strong for any $m,n$.
\end{theorem}
Every walk of length $k$ in $D = D(q; m,n)$ originating at $(a,{b})$ is of the form
\begin{align}
(a, b) &\to (x_1,- b + a^m x_1^n)\nonumber\\
&\to (x_2, b - a^m x_1^n + x_1^m x_2^n)\nonumber\\
&\to \cdots \nonumber\\
&\to(x_k, x_{k-1}^m x_k^n- x_{k-2}^m x_{k-1}^n+\cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b)\nonumber.
\end{align}
Therefore, in order to prove that $\diam(D)\le k$, one can show that for any choice of $a,b,u,v\in\fq$, there exists $(x_1,\dotso,x_k)\in\fq^k$ so that
\begin{equation}
\label{eqn:walk_length_k}
(u,v) = (x_k, x_{k-1}^m x_k^n- \cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b).
\end{equation}
In order to show that $\diam(D)\ge l$, one can show that there exist $a,b,u,v\in\fq$ such that
(\ref{eqn:walk_length_k}) has no solution in $\fq^k$ for any $k < l$.
\bigskip
\subsection{
Waring's Problem
}
In order to obtain an upper bound on $\diam(D(q; m,n))$ we will use some results concerning Waring's problem over finite fields.
Waring's number $\gamma(r,q)$ over $\fq$ is defined as the smallest positive integer $s$ (should it exist) such that the equation
\[
x_1^r + x_2^r + \dotsb + x_s^r = a
\]
has a solution $(x_1,\dotso,x_s)\in\fq^s$ for any $a\in\fq$.
Similarly, $\delta(r,q)$ is defined as the smallest positive integer $s$ (should it exist) such that
for any $a\in\fq$, there exists $(\epsilon_1,\dotso,\epsilon_s)$,
each $\epsilon_i\in\{-1,1\}\subseteq\mathbb{F}_q$,
for which the equation
\[
\epsilon_1 x_1^r + \epsilon_2 x_2^r + \dotsb + \epsilon_s x_s^r = a
\]
has a solution $(x_1,\dotso,x_s)\in\fq^s$.
It is easy to argue that $\delta(r,q)$ exists if and only if
$\gamma(r,q)$ exists, and in this case $\delta(r,q)\le \gamma(r,q)$.
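As an illustration only (not used in any of the proofs below), for a prime field $\mathbb{F}_p$ one can compute $\gamma(r,p)$ by brute force; for example, the sketch below returns $\gamma(4,5)=4$, since the fourth powers in $\mathbb{F}_5$ are just $\{0,1\}$.
\begin{verbatim}
def waring_gamma(r, p):
    # Smallest s such that every a in F_p is a sum of s r-th powers
    # (zeros allowed), i.e. gamma(r, p) for a prime p.  Brute force.
    powers = {pow(x, r, p) for x in range(p)}
    sums, s = {0}, 0
    while len(sums) < p:
        sums = {(a + b) % p for a in sums for b in powers}
        s += 1
    return s

print(waring_gamma(4, 5))   # 4: fourth powers in F_5 are {0, 1}
print(waring_gamma(2, 7))   # 2: every element of F_7 is a sum of two squares
\end{verbatim}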
A criterion on the existence of $\gamma(r,q)$ is the following theorem by Bhashkaran \cite{Bhashkaran_1966}.
\begin{theorem} {\rm [\cite{Bhashkaran_1966}, Theorem G]}
\label{thm:waring_exist}
Waring's number $\gamma(r,q)$ exists if and only if $r$ is not divisible by any $q_d
= (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.
\end{theorem}
The study of various bounds on $\gamma(r,q)$ has drawn considerable attention. We will use the following two upper bounds on Waring's number due to J.~Cipra \cite{Cipra_2009}.
\begin{theorem}{\rm [\cite{Cipra_2009}, Theorem 4]}
\label{thm:waring_bound}
If $e = 2$ and $\gamma(r,q)$ exists,
then $\gamma(r,q)\le 16\sqrt{r+1}$. Also, if
$e \ge 3$ and $\gamma(r,q)$ exists,
then $\gamma(r,q)\le 10\sqrt{r+1}$.
\end{theorem}
\begin{cor} {\rm [\cite{Cipra_2009}, Corollary 7]}
\label{thm:diam_le_8}
If $\gamma(r,q)$ exists and $r < \sqrt{q}$, then $\gamma(r,q)\le 8$.
\end{cor}
For the case $q = p$, the following bound will be of interest.
\begin{theorem}{\rm [Cochrane, Pinner \cite{Cochrane_Pinner_2008}, Corollary 10.3]}
\label{thm:Cochrane_Pinner}
If $|\{x^k\colon x\in\mathbb{F}_p^\ast\}|>2$, then $\delta(k,p)\le 20\sqrt{k}$.
\end{theorem}
The next two statements concerning very strong bounds on Waring's number in large fields follow from the work of Weil \cite{Weil}, and Hua and Vandiver \cite{Hua_Vandiver}.
\begin{theorem}{\rm [Small \cite{Small_1977}]}
\label{thm:waring_Small_estimates}
If $q > (k-1)^4$, then $\gamma(k,q) \le 2$.
\end{theorem}
\begin{theorem} {\rm [Cipra \cite{Cipra_thesis}, p.~4]}
\label{thm:waring_small_estimates}
If $ p > (k-1)^3$, then $\gamma(k,p)\le 3$.
\end{theorem}
For a survey on Waring's number over finite fields, see Castro and Rubio (Section 7.3.4, p.~211),
and Ostafe and Winterhof (Section 6.3.2.3, p.~175)
in Mullen and Panario \cite{Handbook2013}. See also Cipra \cite{Cipra_thesis}.
We will need the following technical lemma.
\begin{lemma}
\label{lemma:alt}
Let $\delta = \delta(r,q)$ exist, and $k \ge 2\delta$.
Then for every $a\in\fq$ the equation
\begin{equation}
\label{eqn:lemma_alt}
x_1^r - x_2^r + x_3^r - \dotsb + (-1)^{k+1} x_k^r = a
\end{equation}
has a solution $(x_1,\dotso,x_k)\in\fq^k$.
\end{lemma}
\begin{proof}
Let $a\in\fq$ be arbitrary. There exist $\varepsilon_1,\dotso,\varepsilon_\delta$, each
$\varepsilon_i\in\{-1,1\}\subseteq \fq$, such that
the equation
$\sum_{i=1}^{\delta} \varepsilon_i y_i^r = a$ has a solution
$(y_1,\dotso,y_{\delta})\in\fq^{\delta}$.
As $k \ge 2\delta$, the alternating sequence
$1,-1,1,\dotso,(-1)^k$ with $k$ terms contains the sequence
$\varepsilon_1,\dotso,\varepsilon_\delta$ as a subsequence.
Let the indices of this subsequence be
$j_1,j_2,\dotso,j_{\delta}$.
For each $l$, $1\le l\le k$, let
$x_l = 0$ if $l\neq j_i$ for any $i$, and
$x_l = y_i$ for $l = j_i$. Then $(x_1,\dotso,x_k)$ is a solution of
(\ref{eqn:lemma_alt}).
\end{proof}
\subsection{The Hasse-Weil bound}
In the next section we will use
the Hasse-Weil bound,
which provides
a bound on the number of $\fq$-points on a plane non-singular absolutely irreducible projective curve over a finite field $\fq$.
If the number of points on the curve $C$ of genus $g$ over the
finite field $\fq$ is $|C(\fq)|$, then
\begin{equation}
\label{hasse_weil_bound}
||C(\fq)| - q -1|
\le
2g\sqrt{q}.
\end{equation}
It is also known that for a non-singular curve
defined by a homogeneous polynomial of degree $k$, $g= (k-1)(k-2)/2$. Discussion of all related notions and a proof of this result can be found in
Hirschfeld, Korchm\'{a}ros, Torres \cite{Hirschfeld} (Theorem 9.18, p.~343) or in Sz\H{o}nyi \cite{Szonyi1997} (p.~197).
\section{Proof of Theorem \ref{main}} \label{proofs1}
\noindent {\bf (\ref{gen_lower_bound}).}
As there is a loop at $(0,0)$, and there are arcs between $(0,0)$ and $(x,0)$ in either direction, for every $x\in \fq^*$, the number of vertices in $D_q$ which are at distance at most 2 from $(0,0)$ is
at most $1+ (q-1)+(q-1)^2 < q^2$. Thus, there are vertices in $D_q$ which are at distance
at least 3 from $(0,0)$, and so $\diam(D_q)\ge 3$.
\bigskip
\noindent {\bf (\ref{gen_upper_bound}).}
As $D_q$ is strong, by Theorem \ref{thm_conn},
for any positive divisor $d$ of $e$, $d<e$,
$q_d\centernot\mid\gcd (p^e-1, m,n)$. As, clearly, $q_d\,|\,(p^e-1)$, either $q_d\centernot\mid m$ or $q_d\centernot\mid n$. This implies by Theorem \ref{thm:waring_exist} that either $\gamma(m,q)$ or $\gamma(n,q)$ exists.
Let $(a,b)$ and $(u,v)$ be arbitrary vertices of $D_q$. By (\ref{eqn:walk_length_k}), there exists a walk of length at most $k$ from $(a,b)$ to $(u,v)$ if the equation
\begin{equation}
\label{eqn:main}
v = x_{k-1}^m u^n- x_{k-2}^m x_{k-1}^n+\cdots +(-1)^{k-1} a^m
x_1^n+(-1)^k b
\end{equation}
has a solution $(x_1,\ldots, x_k)\in \fq^k$.
Assume first that $\gamma_m = \gamma(m,q)$ exists.
Taking $k=6\gamma_m + 1$,
and $x_i = 0$ for $i\equiv 1 \mod 3$, and $x_i = 1$ for $i\equiv 0\mod 3$, we have that (\ref{eqn:main}) is equivalent to
\[
-x_{k-2}^m + x_{k-5}^m -\cdots +(-1)^k x_5^m + (-1)^{k-1}x_2^m = v-(-1)^k b-u^n.
\]
As the number of terms on the left is $(k-1)/3 = 2 \gamma_m$, this equation has a solution in $\fq^{2\gamma_m}$ by Lemma \ref{lemma:alt}.
Hence, (\ref{eqn:main}) has a solution in $\fq^{k}$.
If $\gamma_n = \gamma(n,q)$ exists, then the argument is similar: take $k = 6\gamma_n+1$, $x_i = 0$ for $i\equiv 0 \mod 3$, and $x_i = 1$ for $i\equiv 1\mod 3$.
The result now follows from the bounds on $\gamma(r,q)$ in Theorem \ref{thm:waring_bound}.
\begin{remark}
As $m\le n$, if $\gamma(m,q)$ exists, the upper bounds in Theorem~\ref{main},
part {\bf (\ref{gen_upper_bound})}, can be improved by replacing $n$ by $m$. Also, if a better upper bound on $\delta(m,q)$ than $\gamma(m,q)$ (respectively, on $\delta(n,q)$ than $\gamma(n,q)$) is known,
the upper bounds in Theorem~\ref{main}, {\bf (\ref{gen_upper_bound})},
can be further improved: use $k = 6\delta(m,q)+1$ (respectively, $k = 6\delta(n,q)+1$) in the proof. Similar comments apply to other parts
of Theorem \ref{main} as well as Theorem \ref{thm_diam_p}.
\end{remark}
\bigskip
\noindent {\bf (\ref{diam_le_4}).}
Recall the basic fact $\gcd(r,q-1)=1 \Leftrightarrow \{x^r\colon x \in\fq\} = \fq$.
Let $k=4$. If $\gcd(m,q-1) = 1$, a solution to (\ref{eqn:walk_length_k}) of the form $(0,x_2,1,u)$ is seen to exist for any choice of $a,b,u,v\in\fq$. If $\gcd(n,q-1) = 1$, there exists a solution of the form $(1,x_2,0,u)$. Hence, $\diam (D_q) \le 4$.
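For the reader's convenience, here is the first claim spelled out: for $k=4$ and $(x_1,x_2,x_3,x_4)=(0,x_2,1,u)$, equation (\ref{eqn:walk_length_k}) reduces to
\[
v \;=\; x_3^m x_4^n - x_2^m x_3^n + x_1^m x_2^n - a^m x_1^n + b \;=\; u^n - x_2^m + b,
\]
so it suffices to solve $x_2^m = u^n + b - v$, which is possible for every right-hand side because $x\mapsto x^m$ is a bijection of $\fq$ when $\gcd(m,q-1)=1$; the case $\gcd(n,q-1)=1$ is analogous with $(1,x_2,0,u)$.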
Let $k=3$, and $\gcd(m,q-1) = \gcd(n,q-1) = 1$. If $a=0$, then a solution to (\ref{eqn:walk_length_k}) of the form $(x_1,1,u)$ exists. If $a\neq 0$, a solution of the form $(x_1,0,u)$ exists. Hence, $D_q$ is strong and $\diam (D_q) \le 3$. Using the lower bound from part {\bf (\ref{gen_lower_bound})}, we conclude that $\diam (D_q) = 3$.
\bigskip
\noindent {\bf (\ref{main3}).} As was shown in part \ref{diam_le_4}, for any $n$,
$\diam(D(q; 1,n))\le 4$. If, additionally, $\gcd(n,q-1) = 1$, then $\diam(D(q; 1,n)) = 3$.
It turns out that if $p$ does not divide $n$, then only for finitely many $q$ is the diameter of $D(q;1,n)$ actually 4.
For $k=3$, (\ref{eqn:walk_length_k}) is equivalent to
\begin{equation}
\label{eqn:proof_hasse}
(u,v) = (x_3,x_2 x_3^n-x_1 x_2^n + a x_1^n-b),
\end{equation}
which has solution $(x_1,x_2,x_3) = (0,u^{-n}(b+v),u)$, provided $u\neq 0$.
Suppose now that $u = 0$. Aside from the trivial case $a = 0$, the question of the existence of a solution to (\ref{eqn:proof_hasse}) shall be resolved if we prove that the equation
\begin{equation}
\label{eqn:surj}
a x^n - x y^n + c = 0
\end{equation}
has a solution for any $a, c\in\fq^*$ (for $c=0$, (\ref{eqn:surj}) has solutions).
The projective curve corresponding to this equation is the zero locus of the homogeneous polynomial
\[
F(X,Y,Z) = aX^n Z - X Y^n + c Z^{n+1}.
\]
It is easy to see that, provided $p$ does not divide $n$,
\[
F=F_X=F_Y=F_Z =0 \;\; \Leftrightarrow \;\; X=Y=Z=0,
\]
and thus the curve has no singularities and is absolutely irreducible.
Counting the two points $[1:0:0]$ and $[0:1:0]$ on the line at infinity $Z = 0$, we obtain from (\ref{hasse_weil_bound}), the inequality
$N\ge q-1-2g\sqrt{q}$, where $N=N(c)$ is the number of solutions of (\ref{eqn:surj}). As $g= n(n-1)/2$,
solving the inequality $q-1-n(n-1)\sqrt{q}>0$ for $q$, we obtain a lower bound on $q$ for which $N \ge 1$.
\bigskip
\noindent{\bf (\ref{bound_q_le25}a).}
The result follows from Corollary \ref{thm:diam_le_8} by an argument similar to that of the proof of part {\bf (\ref{gen_upper_bound})}.
\bigskip
\noindent {\bf (\ref{bound_q_m4n4}b).}
For $k=13$, (\ref{eqn:walk_length_k}) is equivalent to
\[
(u,v)
=
(x_{13},
-b + a^m x_1^n -x_1^m x_2^n + x_2^m x_3^n -\dotsb - x_{11}^m x_{12}^n + x_{12}^m x_{13}^n).
\]
If $q > (m-1)^4$, set $x_1 = x_4 = x_7 = x_{10} = 0$,
$x_3 = x_6 = x_9 = x_{12} = 1$. Then
$v - u^n + b = -x_{11}^m + x_8^m - x_5^m + x_2^m$, which has a solution $(x_2,x_5,x_8,x_{11})\in\fq^4$ by Theorem \ref{thm:waring_Small_estimates} and Lemma \ref{lemma:alt}.
\bigskip
\noindent {\bf (\ref{bound_q_le6}c).}
For $k=9$, (\ref{eqn:walk_length_k}) is equivalent to
\[
(u,v)
=
(x_9,
-b + a^n x_1^n -x_1^n x_2^n + x_2^n x_3^n -\dotsb - x_7^m x_8^n + x_8^n x_9^n).
\]
If $q > (n-1)^4$, set $x_1 = x_4 = x_5 = x_8 = 0$,
$x_3 = x_7 = 1$. Then
$v + b = x_2^n + x_6^n$, which has a solution $(x_2,x_6)\in\fq^2$ by Theorem \ref{thm:waring_Small_estimates}.
\bigskip
\section{Proofs of Theorem \ref{thm_diam_p}} \label{proofs2}
\begin{lemma}\label{AutoLemma}
Let $D=D(q;m,n)$. Then, for any $\lambda\in\mathbb{F}_q^*$, the function $\phi:V(D) \rightarrow V(D)$ given by $\phi((a,b)) = (\lambda a, \lambda^{m+n} b)$ is
a digraph automorphism of $D$.
\end{lemma}
The proof of the lemma is straightforward. It amounts to showing that $\phi$ is a bijection and that it preserves adjacency: ${\bf x} \to {\bf y}$ if and only if $\phi({\bf x}) \to \phi({\bf y})$. We omit the details. Due to Lemma \ref{AutoLemma}, any walk in $D$ initiated at a vertex $(a,b)$ corresponds to a walk initiated at a vertex $(0,b)$ if $a=0$, or at a vertex $(1,b')$, where $b'= a^{-m-n} b$, if $a\neq 0$. This implies that if we wish to show that $\diam (D_p) \le 2p-1$, it is sufficient to show that the distance from any vertex $(0,b)$ to any other vertex is at most $2p-1$, and that the distance from any vertex $(1,b)$ to any other vertex is at most $2p-1$.
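As a quick check, the adjacency part amounts to the one-line identity
\[
b+v=a^{m}u^{n}
\quad\Longleftrightarrow\quad
\lambda^{m+n}b+\lambda^{m+n}v=(\lambda a)^{m}(\lambda u)^{n},
\]
valid for every $\lambda\in\mathbb{F}_q^*$, so $(a,b)\to(u,v)$ if and only if $\phi((a,b))\to\phi((u,v))$.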
First we note that by Theorem \ref{thm_conn}, $D_p = D(p;m,n)$ is strong for any choice of $m,n$.
For $a\in\mathbb{F}_p$, let integer $\overline{a}$, $0\le \overline{a} \le p-1$, be the representative of the residue class $a$.
It is easy to check that $\diam (D(2; 1,1)) = 3$.
Therefore, for the remainder of the proof, we may assume that $p$ is odd.
\bigskip
\noindent{\bf (\ref{diam_bound_p}).}
In order to show that diam$(D_p) \le 2p-1$, we use (\ref{eqn:walk_length_k}) with $k= 2p-1$, and prove that for any two vertices $(a,b)$ and $(u,v)$ of $D_p$ there
is always a solution $(x_1, \ldots, x_{2p-1})\in \fq^{2p-1}$ of
$$(u,v) = (x_{2p-1}, -b + a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots -
x_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mx_{2p-1}^n),
$$
or, equivalently, a solution ${\bf x} = (x_1, \ldots, x_{2p-2})\in \fq^{2p-2}$ of
\begin{equation} \label{eq:1}
a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots -
x_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mu^n = b+v.
\end{equation}
As the upper bound $2p-1$ on the diameter is exact and holds for all $p$, we need a more subtle argument than the ones we used before. The only way we can do this is (unfortunately) by performing a case analysis on $\overline{b+v}$ with a nested case structure. In most of the cases we just exhibit a solution ${\bf x}$ of (\ref{eq:1}) by describing its components $x_i$.
It is always a straightforward verification that ${\bf x}$ satisfies (\ref{eq:1}), and we will suppress our comments as the cases proceed.
Our first observation is that if $\overline{b+v} = 0$, then ${\bf x} = (0,\dots, 0)$ is a solution to (\ref{eq:1}).
We may assume now that $\overline{b+v}\ne 0$.\\
\noindent\underline{Case 1.1}: $\overline{b+v}\ge \frac{p-1}{2} + 2$
\noindent
We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(p-(\overline{b+v}))$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$;
if $4(p-(\overline{b+v}))< i \le 2p-2$, then $x_i=0$.
Note that $x_i^mx_{i+1}^n = 0$ unless $i\equiv 3 \mod 4$,
in which case $x_i^mx_{i+1}^n = 1$. If we group the terms
in groups of four so that each group is of the form
\[
-x_i^mx_{i+1}^n+x_{i+1}^mx_{i+2}^n-x_{i+2}^mx_{i+3}^n+x_{i+3}^mx_{i+4}^n,
\]
where $i\equiv 1 \mod 4$, then, assuming $i$, $i+1$, $i+2$, $i+3$, and $i+4$ are within the range
$1\le i<i+4 \le 4(p-(\overline{b+v}))$, it is easily seen that each group contributes
$-1$ to
\[
a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots - x_{2p-3}^mx_{2p-2}^n
+ x_{2p-2}^mx_{2p-1}^n.
\]
There are $\frac{4(p-(\overline{b+v}))}{4} = p-(\overline{b+v})$ such
groups, and so the solution provided adds $-1$ exactly
$p-(\overline{b+v})$ times.
Hence, ${\bf x}$ is a solution to (\ref{eq:1}).
\medskip
For the remainder of the proof, solutions to (\ref{eq:1}) will
be given without justification, as the justification is similar
to what has been done above.
\vspace{5mm}
\noindent\underline{Case 1.2}: $\overline{b+v}\le \frac{p-1}{2}$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(\overline{b+v})-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(\overline{b+v})-1< i \le 2p-2 $, then $x_i=0$.
\vspace{5mm}
\noindent\underline{Case 1.3}: $\overline{b+v}= \frac{p-1}{2}+1$
This case requires several nested subcases.
\vspace{3mm}
\underline{Case 1.3.1}: $u=x_{2p-1}=0$
Here, there is no need to restrict $x_{2p-2}$ to be
$0$. The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as:
if $1\le i \le 2p-2$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$.
\vspace{3mm}
\underline{Case 1.3.2}: $a=0$
Here, there is no need to restrict $x_1$ to be 0. Therefore, the components of a solution ${\bf x}$ of (\ref{eq:1})
are defined as:
if $1\le i\le 2p-2$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$.
\vspace{5mm}
\underline{Case 1.3.3}: $u\ne 0$ and $a\ne 0$
Because of Lemma \ref{AutoLemma}, we may assume without loss of generality that $a=1$.
Let $x_{2p-2} = 1$, so that $x_{2p-2}^mu^n=u^n\ne 0$ and let $t=\overline{b+v-u^n}$. Note that
$t\ne\frac{p-1}{2}+1$.
\vspace{3mm}
\underline{Case 1.3.3.1}: $t=0$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i < 2p-2 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 1.3.3.2}: $0< t\le \frac{p-1}{2}$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i\le 4(t-1)+1$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0,1 \mod{4}$;
if $4(t-1)+1< i < 2p-2 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 1.3.3.3}: $t\ge \frac{p-1}{2}+2$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i\le 4(p-t)$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$;
if $4(p-t)< i < 2p-2 $, then $x_i=0$.\\
The whole range of possible values of $\overline{b+v}$ has been checked. Hence, $\diam(D)\le 2p-1$.
\bigskip
We now show that if $\diam(D)=2p-1$, then $m=n=p-1$. To do so, we assume
that $m\ne p-1$ or $n\ne p-1$ and prove the contrapositive. Specifically, we show that $\diam(D)\le 2p-2<2p-1$ by
again using (\ref{eqn:walk_length_k}) but with $k= 2p-2$. We prove that for any two vertices $(a,b)$ and $(u,v)$ of $D_p$ there
is always a solution $(x_1, \ldots, x_{2p-2})\in \fq^{2p-2}$ of
$$(u,v) = (x_{2p-2}, b - a^mx_1^n + x_1^mx_2^n - \dots -
x_{2p-4}^mx_{2p-3}^n + x_{2p-3}^mx_{2p-2}^n),
$$
or, equivalently, a solution ${\bf x} = (x_1, \ldots, x_{2p-3})\in \fq^{2p-3}$ of
\begin{equation} \label{eq:2}
-a^mx_1^n + x_1^mx_2^n - x_2^mx_3^n + \dots -
x_{2p-4}^mx_{2p-3}^n + x_{2p-3}^mu^n = -b+v.
\end{equation}
We perform a case analysis on $\overline{-b+v}$.
\vspace{5mm}
Our first observation is that if $\overline{-b+v} = 0$, then ${\bf x} = (0,\dots, 0)$ is a solution to (\ref{eq:2}). We may
assume for the remainder of the proof that $\overline{-b+v}\ne 0$.
\vspace{3mm}
\noindent\underline{Case 2.1}: $\overline{-b+v}\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(\overline{-b+v})$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4(\overline{-b+v})< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.2}: $\overline{-b+v}\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(p-(\overline{-b+v}))-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-(\overline{-b+v}))-1< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.3}: $\overline{-b+v}= \frac{p-1}{2}$
\underline{Case 2.3.1}: $a=0$
We define the components of ${\bf x}$ as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$.
\vspace{3mm}
\underline{Case 2.3.2}: $a\ne 0$
Here, we may assume without loss of generality that $a=1$ by Lemma (\ref{AutoLemma}).
\vspace{3mm}
\underline{Case 2.3.2.1}: $n\ne p-1$
If $n\ne p-1$, then there exists $\beta\in\mathbb{F}_p^*$ such that $\beta^n\not\in\{0,1\}$. For such a $\beta$,
let $x_1=\beta$ and consider $t=\overline{-b+v+a^mx_1^n}=\overline{-b+v+\beta^n}\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.3.2.1.1}: $t=0$
\noindent\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.1.2}: $t\le \frac{p-1}{2}-1$
\noindent\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.1.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i\le 4(p-t)+1$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0, 1 \mod{4}$;
if $4(p-t)+1< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2}: $n=p-1$
\vspace{3mm}
\underline{Case 2.3.2.2.1}: $u\in\mathbb{F}_p^*$
Here, we have that $u^n=1$, so that the components of a solution ${\bf x}$ of (\ref{eq:2}) are defined as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$.
\vspace{3mm}
\underline{Case 2.3.2.2.2}: $u=0$
Since $n=p-1$, it must be the case that $m\ne p-1$, so that there exists $\alpha\in\mathbb{F}_p^*$ such that $\alpha^m\not\in\{0,1\}$.
For such an $\alpha$, let $x_2=\alpha, x_3=1$ and consider $t=\overline{-b+v+x_2^mx_3^n}=\overline{-b+v+\alpha^m}
\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4 \le i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4\le i\le 4(p-t)+3$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)+3< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.4}: $\overline{-b+v}= \frac{p-1}{2}+1$
\vspace{3mm}
\underline{Case 2.4.1}: $u=0$
We define the components of ${\bf x}$ as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$.
\vspace{3mm}
\underline{Case 2.4.2}: $u\ne 0$
Here, we may assume without loss of generality that $u=1$ by Lemma (\ref{AutoLemma}).
\vspace{3mm}
\underline{Case 2.4.2.1}: $m\ne p-1$
If $m\ne p-1$, then there exists $\alpha\in\mathbb{F}_p^*$ such that $\alpha^m\not\in\{0,1\}$. For such an $\alpha$,
let $x_{2p-3}=\alpha$ and consider $t=\overline{-b+v-x_{2p-3}^mu^n}=\overline{-b+v-\alpha^m}\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.4.2.1.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1 \le i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.1.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.1.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1\le i\le 4(p-t)-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)-1< i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2}: $m=p-1$
\vspace{3mm}
\underline{Case 2.4.2.2.1}: $a\in\mathbb{F}_p^*$
Here, we have that $a^m=1$, so that the components of a solution ${\bf x}$ of (\ref{eq:2}) are defined as:
if $1\le i\le 2p-5$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0, 1 \mod{4}$.
\vspace{3mm}
\underline{Case 2.4.2.2.2}: $a=0$
Since $m=p-1$, it must be the case that $n\ne p-1$, so that there exists $\beta\in\mathbb{F}_p^*$ such that $\beta^n\not\in\{0,1\}$.
For such a $\beta$, let $x_{2p-5}=1, x_{2p-4}=\beta$ and consider $t=\overline{-b+v-x_{2p-5}^mx_{2p-4}^n}=\overline{-b+v-\beta^n}
\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i \le 2p-6 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i\le 4t-2$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$;
if $4t-2< i \le 2p-6 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i\le 4(p-t)-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)-1< i \le 2p-6 $, then $x_i=0$.\\
All cases have been checked, so if $m\ne p-1$ or $n\ne p-1$, then $\diam(D) < 2p-1$.
\vspace{5mm}
We now prove that if $m=n=p-1$, then $d:= \diam (D(p;m,n))=2p-1$.
In order to do this, we explicitly describe the structure of the digraph $D(p;p-1,p-1)$,
from which the diameter becomes clear. In this description, we
look at sets of vertices of a given distance from the vertex $(0,0)$, and show that some of them are at distance $2p-1$.
We recall the following important general properties of our digraphs that will be used in the proof.
\begin{itemize}
\item Every out-neighbor $(u,v)$ of a vertex $(a,b)$ of $D(q;m,n)$ is completely determined by its first component $u$.
\item Every vertex of $D(q;m,n)$ has its out-degree and in-degree equal to $q$.
\item In $D(q; m,m)$, ${\bf x}\to {\bf y}$ if and only if
${\bf y}\to {\bf x}$.
\end{itemize}
In $D(p;p-1,p-1)$, we have that $(x_1, y_1)\to
(x_2, y_2)$ if and only if
\[
y_1 + y_2 = x_1^{p-1}x_2^{p-1} = \begin{cases}
0 & \textrm{ if $x_1=0$ or $x_2=0$}, \\
1 & \textrm{ if $x_1$ and $x_2$ are non-zero}. \\
\end{cases}
\]
For notational convenience, we set
\[
(*, a) = \{(x, a): x\in\mathbb{F}_p^*\}
\]
and, for $1\le k\le d$, let
\[
N_k = \{v\in V(D(p;m,n)): \text{dist}((0,0), v) = k \}.
\]
By convention, $N_0=\{(0,0)\}$.
It is clear from this definition that these $d+1$ sets $N_k$ partition the vertex set of $D(p;p-1,p-1)$; for every $k$, $1\le k\le d-1$, every out-neighbor of a vertex from $N_k$ belongs to $N_{k-1}\cup N_k\cup N_{k+1}$, and $N_{k+1}$ is the set of all out-neighbors of all vertices from
$N_k$ which are not in $N_{k-1}\cup N_k$.
Thus we have $N_0=\{(0,0)\}$, $N_1= (*,0)$, $N_2=(*,1)$, $N_3=\{(0,-1)\}$. If $p>2$, $N_4=\{(0,1)\}$, $N_5=(*,-1)$. As there exist two (opposite) arcs between each vertex of $(*,x)$ and each vertex $(*,-x+1)$, these subsets of vertices induce the complete bipartite subdigraph $\overrightarrow{K}_{p-1,p-1}$ if $x\ne -x+1$, and the complete subdigraph $\overrightarrow{K}_{p-1}$ if $x =-x+1$. Note that our $\overrightarrow{K}_{p-1,p-1}$ has no loops, but $\overrightarrow{K}_{p-1}$ has a loop on every vertex.
The digraph $D(5;4,4)$ is depicted in Fig.\ $1.2$.
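The layer structure just described can be checked by brute force for small primes; the following sketch (again assuming $p$ prime, and not part of the proof) prints the BFS layers $N_k$ from $(0,0)$ in $D(p;p-1,p-1)$, and for $p=5$ it reproduces the sets $N_0,\dots,N_5$ listed above together with the remaining layers up to distance $2p-1=9$.
\begin{verbatim}
from collections import deque

def layers_from_origin(p):
    # BFS layers N_k of D(p; p-1, p-1) from (0,0), for a prime p.
    def out_neighbours(v):
        x1, x2 = v
        return [(y1, (pow(x1, p-1, p)*pow(y1, p-1, p) - x2) % p)
                for y1 in range(p)]
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        v = queue.popleft()
        for w in out_neighbours(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    layers = {}
    for v, d in dist.items():
        layers.setdefault(d, set()).add(v)
    return layers

for k, layer in sorted(layers_from_origin(5).items()):
    print(k, sorted(layer))   # N_0, N_1, ..., N_9 for D(5;4,4)
\end{verbatim}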
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\sffamily\scriptsize\bfseries}
\tikzset{edge/.style = {->,> = stealth'},shorten >=1pt}
\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,0)$}] (a) at (0,0) {};
\node[vertex] (b1) at (1,1.5) {};
\node[vertex] (b2) at (1,.5) {};
\node[vertex] (b3) at (1,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,0)$}] (b4) at (1,-1.5) {};
\node[vertex] (c1) at (2,1.5) {};
\node[vertex] (c2) at (2,.5) {};
\node[vertex] (c3) at (2,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,1)$}] (c4) at (2,-1.5) {};
\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-1)$}] (d) at (3,0) {};
\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,1)$}] (e) at (4,0) {};
\node[vertex] (f1) at (5,1.5) {};
\node[vertex] (f2) at (5,.5) {};
\node[vertex] (f3) at (5,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,-1)$}] (f4) at (5,-1.5) {};
\node[vertex] (g1) at (6,1.5) {};
\node[vertex] (g2) at (6,.5) {};
\node[vertex] (g3) at (6,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,2)$}] (g4) at (6,-1.5) {};
\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-2)$}] (h) at (7,0) {};
\node[vertex,label={[xshift=-0.3cm, yshift=0.00cm]$(0,2)$}] (i) at (8,0) {};
\node[vertex] (j1) at (9,1.5) {};
\node[vertex] (j2) at (9,.5) {};
\node[vertex] (j3) at (9,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,-2)$}] (j4) at (9,-1.5) {};
\path
(a) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=240,in=270, looseness = 50] node {} (a);
\foreach \x in {b1,b2,b3,b4}
{
\draw [edge] (a) to (\x);
\draw [edge] (\x) to (a);
}
\foreach \x in {b1,b2,b3,b4}
{
\foreach \y in {c1,c2,c3,c4}
{
\draw [edge] (\x) to (\y);
\draw [edge] (\y) to (\x);
}
}
\foreach \x in {c1,c2,c3,c4}
{
\draw [edge] (d) to (\x);
\draw [edge] (\x) to (d);
}
\draw [edge] (d) to (e);
\draw [edge] (e) to (d);
\foreach \x in {f1,f2,f3,f4}
{
\draw [edge] (e) to (\x);
\draw [edge] (\x) to (e);
}
\foreach \x in {f1,f2,f3,f4}
{
\foreach \y in {g1,g2,g3,g4}
{
\draw [edge] (\x) to (\y);
\draw [edge] (\y) to (\x);
}
}
\foreach \x in {g1,g2,g3,g4}
{
\draw [edge] (h) to (\x);
\draw [edge] (\x) to (h);
}
\draw [edge] (h) to (i);
\draw [edge] (i) to (h);
\foreach \x in {j1,j2,j3,j4}
{
\draw [edge] (i) to (\x);
\draw [edge] (\x) to (i);
}
\path
(j1) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j1);
\path
(j2) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j2);
\path
(j3) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j3);
\path
(j4) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j4);
\path
(j1) edge[bend right,<->,>=stealth'] node [left] {} (j2);
\path
(j1) edge[bend right = 60,<->,>=stealth'] node [left] {} (j3);
\path
(j1) edge[bend right = 320,<->,>=stealth'] node [left] {} (j4);
\path
(j2) edge[bend right,<->,>=stealth'] node [left] {} (j3);
\path
(j2) edge[bend right = 60,<->,>=stealth'] node [left] {} (j4);
\path
(j3) edge[bend right,<->,>=stealth'] node [left] {} (j4);
\end{tikzpicture}
\caption{The digraph $D(5;4,4)$: $x_2+y_2 = x_1^4y_1^4$.}
\end{center}
\end{figure}
The structure of $D(p;p-1,p-1)$ for any other prime $p$ is similar. We can describe it as follows: for each $t\in \{0,1, \ldots, (p-1)/2\}$, let
$$
N_{4{\overline t}} = \{(0, t)\}, \;\;
N_{4{\overline t}+1} = (*, -t),
$$
and for each $t\in \{0,1, \ldots, (p-3)/2\}$, let
$$
N_{4{\overline t}+2} = (*, t+1), \;
N_{4{\overline t}+3} = \{(0, -t-1)\}.
$$
Note that for $0\le {\overline t}<(p-1)/2$, $N_{4{\overline t}+1}\neq N_{4{\overline t}+2}$, and for ${\overline t}=(p-1)/2$, $N_{2p-1}=(*,(p+1)/2)$. Therefore, for $p\ge 3$, $D(p;p-1,p-1)$ contains $(p-1)/2$ induced copies of
$\overrightarrow{K}_{p-1,p-1}$ with partitions $N_{4{\overline t}+1}$ and $N_{4{\overline t}+2}$, and a copy of $\overrightarrow{K}_{p-1}$ induced by $N_{2p-1}$. The proof is a trivial induction on $\overline{t}$. Hence, $\diam (D(p;p-1,p-1)) = 2p-1$. This ends the proof of Theorem~\ref{thm_diam_p}~(\ref{diam_bound_p}).
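Such small cases can also be checked numerically. The following Python sketch (an illustration only, assuming the arc rule $(x_1,x_2)\rightarrow(y_1,y_2)$ if and only if $x_2+y_2=x_1^m y_1^n$ over $\mathbb{F}_p$, as in the caption of Fig.~1.2) constructs $D(p;m,n)$ and computes its diameter by breadth-first search; for $p=5$, $m=n=4$ it should report $2p-1=9$.
\begin{verbatim}
# Brute-force check of diam(D(p; m, n)) for small p (illustrative sketch).
from itertools import product
from collections import deque

p, m, n = 5, 4, 4
V = list(product(range(p), repeat=2))
# arc (x1, x2) -> (y1, y2)  iff  x2 + y2 = x1^m * y1^n  (mod p)
succ = {v: [w for w in V
            if (v[1] + w[1]) % p == (pow(v[0], m, p) * pow(w[0], n, p)) % p]
        for v in V}

def ecc(s):                      # directed eccentricity of vertex s
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in succ[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values()) if len(dist) == len(V) else float("inf")

print(max(ecc(v) for v in V))    # expected: 2*p - 1 = 9 for D(5; 4, 4)
\end{verbatim}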
\bigskip
\noindent{\bf (\ref{bound_p_sqrt60}).}
We follow the argument of the proof of Theorem \ref{main}, part {\bf (\ref{gen_upper_bound})} and use Lemma \ref{lemma:alt}, with $k = 6\delta(m,p)+1$. We note, additionally, that if $m\not\in\{p-1,(p-1)/2\}$, then $\gcd(m,p-1) < (p-1)/2$, which implies $|\{ x^m \colon x\in\mathbb{F}_p^\ast \} | > 2$. The result then follows from Theorem \ref{thm:Cochrane_Pinner}.
\bigskip
\noindent{\bf (\ref{bound_p_le10}).}
We follow the argument of the proof of Theorem \ref{main}, part {\bf (\ref{bound_q_m4n4}b)} and use Lemma \ref{lemma:alt} and Theorem \ref{thm:waring_small_estimates}.
\medskip
This ends the proof of Theorem~\ref{thm_diam_p}.
\bigskip
\section{Concluding remarks.}\label{open}
Many results in this paper follow the same pattern: if Waring's number $\delta(r,q)$ exists and is bounded above by $\delta$, then one can show that $\diam(D(q;m,n))\le 6\delta + 1$. Determining the exact value of $\delta(r,q)$ is an open problem, and it is likely to be very hard. Moreover, the upper bound $6\delta +1$ is not exact in general. Out of all partial results concerning $\delta(r,q)$, we used only those that helped us deal with the cases of the diameter of $D(q; m,n)$ that we considered, especially where the diameter was small. We left out applications of all asymptotic bounds on $\delta(r,q)$. Our computer work demonstrates that some upper bounds on the diameter mentioned in this paper are still far from being tight. Here we wish to mention only a few strong patterns that we observed but have not been able to prove so far. We state them as problems.
\bigskip
\noindent{\bf Problem 1.}
Let $p$ be prime, $q=p^e$, $e \ge 2$, and suppose $D(q;m,n)$ is strong. Let
$r$ be the largest divisor of $q-1$
not divisible by any
$q_d = (p^e-1)/(p^d-1)$
where $d$ is a positive divisor of $e$ smaller than $e$. Is it true that
\[
\max_{1\le m\le n\le q-1}
\{
\diam(D(q;m,n))
\}
=
\diam(D(q;r,r))?
\]
Find an upper bound on $\diam(D(q;r,r))$ better than the one of
Theorem \ref{main}, part {\bf (\ref{bound_q_le6}c)}.
\bigskip
\noindent{\bf Problem 2.} Is it true that for every prime $p$ and $1\le m \le n$,
$(m,n)\neq (p-1,p-1)$, $\diam (D(p;m,n)) \le (p+3)/2$ with the equality if and only if $(m,n)=((p-1)/2, (p-1)/2)$ or $(m,n)=((p-1)/2, p-1)$?
\bigskip
\noindent{\bf Problem 3.} Is it true that for every prime $p$, $\diam (D(p;m,n))$ takes only one of two consecutive values which are completely determined by $\gcd(p-1, m, n)$?
\section{Acknowledgement}
The authors are thankful to the anonymous referee whose careful reading and thoughtful comments led to a number of significant improvements in the paper.
\section{Introduction}
In the last decade Machine Learning (ML) has been evolving rapidly, driven by the profound performance improvements that Deep Learning (DL) has ushered in. Deep Learning has outperformed previous state-of-the-art methods in many fields of Machine Learning, such as Natural Language Processing (NLP)~\cite{deng2018feature}, image processing~\cite{larsson2018robust} and speech generation~\cite{van2016wavenet}. As the number of methods incorporating Deep Learning grows, the proposed solutions begin to span disciplines where Machine Learning was previously used only in a limited capacity. One such example is the quantitative analysis of stock markets, where Machine Learning is used to predict price movements, to forecast the volatility of future prices, or to detect anomalous events in the markets.
In the field of quantitative analysis, the mathematical modelling of the markets has been the de facto approach to model stock price dynamics for trading, market making, hedging, and risk management. By utilizing a time series of values, such as the price fluctuations of financial products traded in the markets, one can construct statistical models which assist in extracting useful information about the current state of the market, together with a set of probabilities for possible future states, such as price or volatility changes. Many models, such as the Black-Scholes-Merton model~\cite{black1973pricing}, attempt to mathematically deduce the price of options and can provide useful indications of future price movements.
However, as more market participants start using the same model, the behaviour of the price changes to the point that the model can no longer be exploited. Newer models, such as the stochastic modelling of limit order book dynamics \cite{cont2010stochastic}, the jump-diffusion processes for stock dynamics \cite{bandi2016price} and volatility estimation under market microstructure noise \cite{ait2009estimating}, are attempts to predict multiple aspects of the financial markets. However, such models are designed to be tractable, even at the cost of reliability and accuracy, and thus they do not necessarily fit empirical data very well.
The aforementioned properties put handcrafted models at a disadvantage, since the financial markets very frequently exhibit irrational behaviour, mainly due to the large influence of human activity, which often causes these models to fail. Combining Machine Learning models with handcrafted features usually improves the forecasting abilities of such models, by overcoming some of the aforementioned limitations and improving predictions about various aspects of financial markets. This has led many organizations that participate in the financial markets, such as hedge funds and investment firms, to increasingly use ML models, along with the conventional mathematical models, to make crucial decisions.
Furthermore, the introduction of electronic trading, which also led to the automation of trading operations, has magnified the volume of exchanges, producing a wealth of data. Deep Learning models are well suited for analyzing such amounts of data, since they perform significantly better than conventional Machine Learning methodologies when a large amount of data is available. This is one of the reasons that Deep Learning is starting to play a role in analyzing the data coming from financial exchanges \cite{kercheval2015modelling, tsantekidis2017using}.
The most detailed type of data that financial exchanges gather is the comprehensive log of every submitted order and event happening within their internal matching engine. This log can be used to reconstruct the Limit Order Book (LOB), which is explained further in Section \ref{data-section}. A basic task that arises from this data is the prediction of future price movements of an asset by examining the current and past supply and demand of Limit Orders. Such comprehensive logs kept by the exchanges are extremely large, and traditional Machine Learning techniques, such as Support Vector Machines (SVMs) \cite{vapnik1995support}, usually cannot be applied to them out-of-the-box.
Utilizing this kind of data directly with existing Deep Learning methods is also not possible due to their non-stationary nature. Prices fluctuate and suffer from stochastic drift, so in order for them to be effectively utilized by DL methods a preprocessing step is required to generate stationary features from them.
The main contribution of this work is the proposal of a set of stationary features that can be readily extracted from the Limit Order Book. The proposed features are thoroughly evaluated for predicting future mid price movements from large-scale high-frequency Limit Order data using several different Deep Learning models, ranging from simple Multilayer Perceptrons (MLPs) and CNNs to Recurrent Neural Networks (RNNs). We also propose a novel Deep Learning model that combines the feature extraction ability of Convolutional Neural Networks (CNNs) with the power of Long Short Term Memory (LSTM) networks to analyze time series.
In Section~2 related work which employs ML models on financial data is briefly presented. Then, the dataset used is described in detail in Section~3. In Section~4 the proposed stationary feature extraction methodology is presented in detail, while in Section~5 the proposed Deep Learning methods are described. In Section~6 the experimental evaluation and comparisons are provided. Finally, conclusions are drawn and future work is discussed in Section~7.
\section{Related Work}
The task of regressing the future movements of financial assets has been the subject of many recent works such as \cite{kazem2013support, hsieh2011forecasting, lei2018wavelet}. Proven models such as GARCH are improved and augmented with machine learning components such as Artificial Neural Networks \cite{michell2018stock}. New hybrid models are employed along with Neural Networks to improve upon previous performance \cite{huang2012hybrid}.
One of the most volatile financial markets is FOREX, the currency market. In \cite{galeshchuk2016neural}, neural networks are used to predict the future exchange rate of major FOREX pairs such as USD/EUR. The model is tested with different prediction steps ranging from daily to yearly, leading to the conclusion that shorter-term predictions tend to be more accurate. Other financial metrics, such as cash flow prediction, are very closely correlated to price prediction.
In \cite{heaton2016deep}, the authors propose ``Deep Portfolio Theory'', which applies autoencoders in order to produce optimal portfolios. This approach outperforms several established benchmarks, such as the Biotechnology IBB Index. Likewise, in \cite{takeuchi2013applying} another type of autoencoder, known as the Restricted Boltzmann Machine (RBM), is applied to encode the end-of-month prices of stocks. Then, the model is fine-tuned to predict whether the price will move more than the median change and the direction of such movement. This strategy is able to outperform a benchmark momentum strategy in terms of annualized returns.
Another approach is to include data sources outside the financial time series, e.g., \cite{xiong2015deep}, where phrases related to finance, such as ``mortgage'' and ``bankruptcy'' were monitored on the Google trends platform and included as an input to a recurrent neural network along with the daily S\&P 500 market fund prices. The training target is the prediction of the future volatility of the market fund's price. This approach can greatly outperform many benchmark methods, such as the autoregressive GARCH and Lasso techniques.
The surge of DL methods has dramatically improved the performance over many conventional machine learning methods on tasks, such as speech recognition \cite{graves2013speech}, image captioning\cite{xu2015show, mao2014deep}, and question answering \cite{zhu2016visual7w}. The most important building blocks of DL are the Convolutional Neural Networks (CNN) \cite{lecun1995convolutional}, and the Recurrent Neural Networks (RNNs). Also worth mentioning is the improvement of RNNs with the introduction of Long Short-Term Memory Units (LSTMs) \cite{hochreiter1997long}, which has made the analysis of time series using DL easier and more performant.
Unfortunately, DL methods are prone to overfitting, especially in tasks such as price regression, and many works exist that try to prevent such overfitting \cite{niu2012short, xi2014new}. Some might attribute this overfitting to the lack of the huge amounts of data that other tasks, such as image and speech processing, have available to them. A very rich data source for financial forecasting is the Limit Order Book. One of the few applications of ML to high-frequency Limit Order Book data is \cite{kercheval2015modelling}, where several handcrafted features are created, including price deltas, bid-ask spreads and price and volume derivatives. An SVM is then trained to predict the direction of future mid price movements using all the handcrafted features. In \cite{tran2017temporal} a neural network architecture incorporating the idea of bilinear projection, augmented with a temporal attention mechanism, is used to predict the LOB mid price.
Similarly, the works in \cite{ntakaris2018mid, tran2017tensor} utilize Limit Order Book data along with ML methods, such as multilinear methods and smart feature selection, to predict future price movements. In our previous work~\cite{tsantekidis2017forecasting, tsantekidis2017using, passalis2017time} we introduced a large-scale high-frequency Limit Order Book dataset, which is also used in this paper, and we employed three simple DL models, the Convolutional Neural Network (CNN), the Long Short-Term Memory Recurrent Neural Network (LSTM RNN) and the Neural Bag-of-Features (N-BoF) model, to tackle the problem of forecasting the mid price movements. However, these approaches directly used the non-stationary raw Order Book data, making them vulnerable to distribution shifts and harming their ability to generalize on unseen data, as we also experimentally demonstrate in this paper.
To the best of our knowledge this is the first work that proposes a structured approach for extracting stationary price features from the Limit Order Book that can be effectively combined with Deep Learning models. We also provide an extensive evaluation of the proposed methods on a large-scale dataset with more than 4 million events. Also, a powerful model, that combines the CNN feature extraction properties with the LSTM's time series modelling capabilities, is proposed in order to improve the accuracy of predicting the price movement of stocks. The proposed combined model is also compared with the previously introduced methods using the proposed stationary price features.
\section{Limit Order Book Data}
\label{data-section}
In an order-driven financial market, a market participant can place two types of buy/sell orders. By posting a {\em limit order}, a trader promises to buy (sell) a certain amount of an asset at a specified price or less (more). The limit order book comprises all valid limit orders that have not yet been executed or cancelled.
This Limit Order Book (LOB) contains all existing buy and sell orders that have been submitted and are awaiting execution. A limit order is placed in the queue at a given price level, where, in the case of standard limit orders, the execution priority at a given price level is dictated by the arrival time (first in, first out). A {\em market order} is an order to immediately buy/sell a certain quantity of the asset at the best available price in the limit order book. If the requested price of a limit order is far from the best prices, it may take a long time for the limit order to be executed, in which case the order may eventually be cancelled by the trader. The orders are split between two sides, the bid (buy) and the ask (sell) side. Each side contains the orders sorted by their price, in descending order for the bid side and ascending order for the ask side.
Following the notation used in \cite{cont2010stochastic}, a price grid is defined as $\{\rho^{(1)}(t),\dots,\rho^{(n)}(t)\}$, where $\rho^{(j)}(t) > \rho^{(i)}(t)$ for all $j>i$. The price grid contains all possible prices and each consecutive price level is incremented by a single tick from the previous price level. The state of the order book is a continuous-time process $v(t) \equiv \left(v^{(1)}(t), v^{(2)}(t), \dots, v^{(n)}(t) \right)_{t \geq 0}$, where $|v^{(i)}(t)|$ is the number of outstanding limit orders at price $\rho^{(i)}(t)$, $1 \leq i \leq n$. If $v^{(i)}(t) < 0$, then there are $-v^{(i)}(t)$ bid orders at price $\rho^{(i)}(t)$; if $v^{(i)}(t)>0$, then there are $v^{(i)}(t)$ ask orders at price $\rho^{(i)}(t)$. That is, $v^{(i)}(t) > 0$ refers to ask orders and $v^{(i)}(t) < 0$ bid orders.
The location of the best ask price in the price grid is defined by:
\[
i_a^{(1)}(t) = \inf\{i = 1, \dots, n\ ;\ v^{(i)}(t)>0 \},
\]
and, correspondingly, the location of the best bid price is defined by:
\[
i_b^{(1)}(t) = \sup\{i = 1, \dots, n\ ;\ v^{(i)}(t)<0 \}.
\]
For simplicity, we denote the best ask and bid prices as $p_a^{(1)}(t) \equiv \rho^{\left(i_a^{(1)}(t) \right)}(t)$ and $p_b^{(1)}(t) \equiv \rho^{\left(i_b^{(1)} (t)\right)}(t)$, respectively. Notice that if there are no ask (bid) orders in the book, the ask (bid) price is not defined.
More generally, given that the $k$th best ask and bid prices exist, their locations are denoted as $i_a^{(k)}(t) \equiv i_a^{(1)}(t) + k-1$ and $i_b^{(k)}(t) \equiv i_b^{(1)}(t) - k+1$. The $k$th best ask and bid prices are correspondingly denoted by $p_a^{(k)}(t) \equiv \rho^{\left(i_a^{(k)}(t) \right)}(t)$ and $p_b^{(k)}(t) \equiv \rho^{\left(i_b^{(k)}(t) \right)}(t)$, respectively. Correspondingly, we denote the number of outstanding limit orders at the $k$th best ask and bid levels by $\upnu_a^{(k)}(t) \equiv v^{\left(i_a^{(k)}(t)\right)}(t)$ and $\upnu_b^{(k)}(t) \equiv v^{\left(i_b^{(k)}(t)\right)}(t)$, respectively.
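As a concrete illustration of this notation, the following minimal Python sketch (our own toy example with hypothetical values, not part of the dataset pipeline) recovers the best ask and bid locations from a signed state vector $v(t)$ on a small price grid.
\begin{verbatim}
# Illustrative sketch: best ask/bid locations from a signed LOB state v(t).
import numpy as np

rho = np.array([99.97, 99.98, 99.99, 100.00, 100.01, 100.02])  # toy price grid
v   = np.array([  -50,   -20,   -10,      0,     15,     30])  # <0 bids, >0 asks

i_a = np.flatnonzero(v > 0).min()   # location of the best ask price
i_b = np.flatnonzero(v < 0).max()   # location of the best bid price

print(rho[i_a], rho[i_b])           # best ask 100.01, best bid 99.99
\end{verbatim}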
Limit Order Book data can be used for a variety of tasks, such as the estimation of the future price trend or the regression of useful metrics, like the price volatility. Other possible tasks may include the early prediction of anomalous events, like extreme changes in price which may indicate manipulation in the markets. These examples are a few of multiple applications which can aid investors to protect their capital when unfavourable conditions exist in the markets or, in other cases, take advantage of them to profit.
Most modern methods that utilize financial time series data employ subsampling techniques, such as the well-known OHLC (Open-High-Low-Close) candles \cite{yang2000drift}, in order to reduce the number of features of each time interval. Although OHLC candles preserve useful information, such as the market trend and movement ranges within the specified intervals, they remove possibly important microstructure information. Since the LOB is constantly receiving new orders at irregular intervals, it is not possible to subsample time-interval features from it in a way that preserves all the information it contains. This problem can be addressed, to some extent, using recurrent neural network architectures, such as LSTMs, which are capable of natively handling inputs of varying size. This allows the data to be utilized directly and fully, without time interval-based subsampling.
The LOB data used in this work is provided by Nasdaq Nordic and consists of 10 days' worth of LOB events for 5 different Finnish company stocks, namely Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki and Wärtsilä Oyj \cite{ntakaris2017benchmark,siikanen2016limit}. The gathered data span the period from the 1st of June 2010 to the 14th of June 2010. Also, note that trading only happens during business days.
The data consists of consecutive snapshots of the LOB state, taken after each state-altering event. Such an event might be an order insertion, execution or cancellation; after it interacts with the LOB and changes its state, a snapshot of the new state is recorded. The LOB depth of the data used is $10$ for each side of the Order Book, i.e., 10 active orders (each consisting of a price and a volume) per side, adding up to a total of $40$ values for each LOB snapshot. In total, this amounts to $4.5$ million snapshots that can be used to train and evaluate the proposed models.
In this work the task we aim to accomplish is the prediction of price movements based on current and past changes occurring in the LOB. This problem is formally defined as follows: Let $\mathbf{x}(t) \in \mathbb{R}^q$ denote the feature vector that describes the condition of the LOB at time $t$ for a specific stock, where $q$ is the dimensionality of the corresponding feature vector. The direction of the mid-price of that stock is defined as $l_k(t) \in \{-1, 0, 1\}$, depending on whether the mid price decreased (-1), remained stationary (0) or increased (1) after $k$ LOB events occurred.
The number of orders $k$ is also called the \textit{prediction horizon}. We aim to learn a model $f_k(\mathbf{x}(t))$, where $f_k: \mathbb{R}^{q} \rightarrow \{-1, 0, 1\} $, that predicts the direction $l_{k}(t)$ of the mid-price after $k$ orders.
In the following Section the aforementioned features and labels, as well as the procedure to calculate them are explained in depth.
\section{Stationary Feature and Label Extraction}
The raw LOB data cannot be directly used for any ML task without some kind of preprocessing. The order volume values can be gathered for all stocks' LOBs and normalized together, since they are expected to follow the same distribution. However, this is not true for price values, since the value of a stock or asset may fluctuate and increase with time to never before seen levels. This means that the statistics of the price values can change significantly with time, rendering the price time series non-stationary.
Simply normalizing all the price values will not resolve the non-stationarity, since there will always be unseen data that may shift the distribution of values to ranges not present in the current data. We present two solutions for this problem: one used in past work, where normalization is continually applied using the statistics of past available data, and a new approach that completely converts the price data to stationary values.
\subsection{Input Normalization}
\label{sec:input-normalization}
The most common normalization scheme is standardization (z-score):
\begin{equation}
x_{\text{norm}} = \dfrac{{x} - \bar{x}}{\sigma_{\bar{x}}}
\label{zscore-eq},
\end{equation}
where ${x}$ is a feature to be normalized, $\bar{x}$ is the mean and $\sigma_{\bar{x}}$ is the standard deviation across all samples. Such normalization is separately applied to the order size values and the price values. However, using this kind of ``global'' normalization preserves the different scales between the prices of different stocks, which is exactly what we are trying to avoid. The solution presented in \cite{tsantekidis2017forecasting,tsantekidis2017using} is to use the z-score to normalize each stock-day worth of data, with the means and standard deviations calculated on the previous day's data of the same stock. This avoids a major problem, namely the distribution shift in stock prices, which can be caused by events such as stock splits or by the large shifts in price that can happen over longer periods of time.
Unfortunately this introduces another important issue for learning. The differences between the price values at different LOB levels are almost always minuscule. Since all the price levels are normalized using the z-score with the same statistics, extracting features at that scale is hard. In this work we propose a novel approach to remedy this problem. Instead of normalizing the raw values of the LOB depth, we express the price values as their percentage difference from the current mid price of the Order Book. This removes the non-stationarity from the price values, makes the feature extraction process easier and significantly improves the performance of ML models, as experimentally demonstrated in Section~\ref{sec:experiments}. To compensate for the removal of the price value itself, we add an extra value to each LOB depth sample, namely the percentage change of the mid price since the previous event.
The mid-price is defined as the mid-point between the best bid and the best ask prices at time $t$ by
\begin{equation}
p_m^{(1)} (t) = \dfrac{p_a^{(1)}(t) + p_b^{(1)}(t)}{2}
\label{mid-price-def}.
\end{equation}
Let
\begin{align}
{p'}_a^{(i)}(t) =& \dfrac{p_a^{(i)}(t)}{p_m(t)} - 1, \label{stationary-price-a} \\
{p'}_b^{(i)}(t) =& \dfrac{p_b^{(i)}(t)}{p_m(t)} - 1, \label{stationary-price-b}
\end{align}
and
\begin{equation}
{p'}_m(t) = \dfrac{p_m(t)}{p_m(t-1)} - 1. \label{mid-price-change-def}
\end{equation}
Equations (\ref{stationary-price-a}) and (\ref{stationary-price-b}) serve as static features that represent the proportional difference between the $i$th price and the mid-price at time $t$. Equation (\ref{mid-price-change-def}), on the other hand, serves as a dynamic feature that captures the proportional mid-price movement over the time period (that is, it represents the asset's return in terms of mid-prices).
We also use the cumulative sum of the sizes of the price levels as a feature, also known as the Total Depth:
\begin{align}
\upnu'^{(k)}_a(t) =& \sum_{i=1}^k{\upnu_a^{(i)}(t)}
\vspace{0.1cm} \label{size-cumsum-a}\\
\upnu'^{(k)}_b(t) =& \sum_{i=1}^k{\upnu_b^{(i)}(t)}
\label{size-cumsum-b}
\end{align}
where $\upnu^{(i)}_a(t)$ is the number of outstanding limit orders at the $i$th best ask price level and $\upnu^{(i)}_b(t)$ is the number of outstanding limit orders at the $i$th best bid price level.
The proposed stationary features are briefly summarized in Table \ref{features-table}. After constructing these three types of stationary features, each of them is separately normalized using standardization (z-score), as described in (\ref{zscore-eq}), and concatenated into a single feature vector $\myvec{x}_t$, where $t$ denotes the time step.
The input used for the time-aware models, such as the CNN, LSTM and CNN-LSTM, is the sequence of vectors $\myvec{X} = \{\myvec{x}_0, \myvec{x}_1, \dots , \myvec{x}_w\}$, where $w$ is the total number of events, each one represented by a different time-step input. For the models that need all the input in a single vector, such as the SVM and MLP models, the matrix $\myvec{X}$ is flattened into a single dimension so it can be used as input for these models.
\begin{table}[t]
\caption{Brief description of each proposed stationary feature}
\label{features-table}
\begin{center}
\begin{tabular}{ | c | c|}
\hline
\textbf{Feature} & \textbf{Description} \\
\hline\hline
Price level difference & \parbox[c]{10cm}{\vspace{0.2em}The difference of each price level to the current mid price, see Eq. (\ref{stationary-price-a}),(\ref{stationary-price-b})
\[{p'}^{(i)}(t) = \dfrac{p^{(i)}(t)}{p_m(t)} - 1 \]
} \\
\hline
Mid price change & \parbox[c]{10cm}{\vspace{0.2em} The change of the current mid price to the mid price of the previous time step, see Eq. (\ref{mid-price-change-def}) \\
\[
{p'}_m(t) = \dfrac{p_m(t)}{p_m(t-1)} - 1
\]
} \\
\hline
Depth size cumsum & \parbox[c]{10cm}{ \vspace{0.2em} Total depth at each price level, see Eq. (\ref{size-cumsum-a}), (\ref{size-cumsum-b})
\[
\upnu'^{(k)}(t) = \sum_{i=1}^k{\upnu^{(i)}(t)}
\]
} \\
\hline
\end{tabular}
\end{center}
\end{table}
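To make the construction concrete, the following Python sketch (a minimal illustration of the equations above, not the full preprocessing pipeline; the depth, prices and sizes are placeholder values) computes the three types of stationary features for a single LOB snapshot.
\begin{verbatim}
# Minimal sketch of the proposed stationary features for one LOB snapshot.
import numpy as np

def stationary_features(ask_p, ask_v, bid_p, bid_v, prev_mid):
    """ask_p/bid_p: k best ask/bid prices; ask_v/bid_v: their sizes."""
    mid = (ask_p[0] + bid_p[0]) / 2.0
    price_diff = np.concatenate([ask_p / mid - 1.0,     # Eq. (stationary-price-a)
                                 bid_p / mid - 1.0])    # Eq. (stationary-price-b)
    mid_change = mid / prev_mid - 1.0                   # Eq. (mid-price-change-def)
    depth_cum  = np.concatenate([np.cumsum(ask_v),      # Eq. (size-cumsum-a)
                                 np.cumsum(bid_v)])     # Eq. (size-cumsum-b)
    return price_diff, mid_change, depth_cum

# toy example with depth 3 (each feature group is z-scored afterwards)
x = stationary_features(np.array([100.01, 100.02, 100.03]), np.array([10, 5, 8]),
                        np.array([ 99.99,  99.98,  99.97]), np.array([ 7, 9, 4]),
                        prev_mid=99.995)
\end{verbatim}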
\subsection{Labels}
\label{sec:labels}
The proposed models aim to predict the future movements of the mid price. Therefore, the ground truth labels must be appropriately generated to reflect these movements. Note that the mid price is a ``virtual'' value and no order placed at that exact price is guaranteed to be immediately executed. However, being able to predict its upward or downward movement provides a good estimate of the price of future orders. A set of discrete choices must be constructed from our data to use as targets for our classification models. The labels describing the movement are denoted by $l_t \in \{-1, 0, 1\}$, where $t$ denotes the time step.
Simply using $p_m(t + k) > p_m(t)$ to determine the upward direction of the mid price would introduce an unmanageable amount of noise, since even the smallest change would be registered as an upward or downward movement. To remedy this, in our previous work \cite{tsantekidis2017forecasting, tsantekidis2017using} the noisy changes of the mid price were filtered by employing two averaging filters: one averaging filter was applied on a window of size $k$ of the past values of the mid price, and another one on a future window of size $k$:
\begin{align}
m_b(t) =& \dfrac{1}{k+1} \sum_{i=0}^k p_m(t-i) \label{m-b} \\
m_a(t) =& \dfrac{1}{k} \sum_{i=1}^k p_m(t+i) \label{m-a}
\end{align}
where $p_m(t)$ is the mid price as defined in Equation~(\ref{mid-price-def}).
The label $l_t$, which expresses the direction of price movement at time $t$, is extracted by comparing the previously defined quantities ($m_b$ and $m_a$). However, using the $m_b$ values to create labels for the samples, as in \cite{tsantekidis2017forecasting, tsantekidis2017using}, makes the problem significantly easier and more predictable, due to the slower adaptation of the mean filter values to sudden changes in price. Therefore, in this work we remedy this issue by replacing $m_b$ with the mid price itself, and the labels are redefined as:
\begin{equation}
l_t =
\begin{cases}
\ \ 1, & \text{if } \dfrac{m_a(t)}{p_m(t)} > 1 + \alpha
\vspace{0.2cm}\\
-1, & \text{if } \dfrac{m_a(t)}{p_m(t)} < 1 - \alpha
\vspace{0.2cm}\\
\ \ 0, & \text{otherwise}
\end{cases}
\label{direction-eq}
\end{equation}
where $\alpha$ is the threshold that determines how significant a change of the mean future mid price $m_a(t)$ must be in order to label the movement as upward or downward. Samples that do not satisfy either inequality are considered insignificant and are labeled as having no price movement, or in other words as being ``stationary''. The resulting labels represent the trend to be predicted. This process is applied across all time steps of the dataset to produce labels for all the depth samples.
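A compact Python sketch of this labelling rule (illustrative only; the random-walk mid-price series and the chosen $k$ and $\alpha$ below are placeholders) is the following.
\begin{verbatim}
# Illustrative sketch of the labelling rule in Eq. (direction-eq).
import numpy as np

def make_labels(mid, k, alpha):
    """mid: 1-D array of mid prices; returns labels in {-1, 0, 1}."""
    T = len(mid)
    labels = np.zeros(T, dtype=int)
    for t in range(T - k):
        m_a = mid[t + 1 : t + k + 1].mean()     # future mean, Eq. (m-a)
        if   m_a / mid[t] > 1 + alpha:  labels[t] =  1
        elif m_a / mid[t] < 1 - alpha:  labels[t] = -1
    return labels[: T - k]                      # last k steps have no label

labels = make_labels(np.cumsum(np.random.randn(1000)) + 500.0, k=100, alpha=3e-4)
\end{verbatim}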
\section{Machine Learning Models}
In this section we explain the particular inner workings of the CNN and LSTM models that are used and present how they are combined to form the proposed CNN-LSTM model. The technical details of each model are explained along with the employed optimization procedure.
\begin{figure}
\centering
\includegraphics[scale=0.4]{CNN}
\caption{A visual representation of the evaluated CNN model. Each layer includes the filter input size and the number of filters used.}
\label{fig:cnn-model}
\end{figure}
\subsection{Convolutional Neural Networks}
\label{sec:conv-nets}
Convolutional Neural Networks (CNNs) consist of the sequential application of convolutional and pooling layers, usually followed by some fully connected layers, as shown in Figure~\ref{fig:cnn-model}. Each convolutional layer $i$ is equipped with a set of filters $\mathbf{W}_i \in \mathbb{R} ^{S \times D \times N}$ that is convolved with an input tensor, where $S$ is the number of used filters, $D$ is the {filter size}, and $N$ is the number of input channels. The input tensor $\mathbf{X} \in \mathbb{R}^{(B \times T \times F)}$ consists of the temporally ordered features described in Section \ref{sec:input-normalization}, where $B$ is the batch size, $T$ is the number of time steps and $F$ is the number of features per time step.
In this work we leverage the causal padding introduced in \cite{van2016wavenet} to avoid using future information to produce features for the current time step. Using a series of convolutional layers allows for capturing the fine temporal dynamics of the time series as well as correlating temporally distant features. After the last convolutional/pooling layer a set of fully connected layers are used to classify the input time series. The network's output expresses the categorical distribution for the three direction labels (upward, downward and stationary), as described in (\ref{direction-eq}), for each time-step.
We also employ a temporal batching technique, similar to the one used in LSTMs, to increase the computational efficiency and reduce the memory requirements of our experiments when training CNNs. Given the above-described input tensor $\myvec{X}$ and convolution filters $\myvec{W}_i$, the last convolution produces a tensor with dimensions $ (B,T,S,N) $, which in most use cases is flattened to a tensor of size $(B, T \times S \times N)$ before being fed to a fully connected layer. Instead, we retain the temporal ordering by only reducing the tensor to dimension $(B, T, S \times N) $. An identical fully connected network with a softmax output is applied to each of the $S \times N$ vectors, leading to $T$ different predictions.
Since we are using causal convolutions with ``full'' padding, all the convolutional layers produce outputs with the same number of time steps $T$, hence we do not need to worry about aligning labels to the correct time step. The causal convolutions also ensure that no information from the future leaks into the filters of past time steps. This technique reduces the receptive field of the employed CNN, but this can be easily remedied by using a greater number of convolutional layers and/or a larger filter size $D$.
\subsection{Long Short Term Memory Recurrent Neural Networks}
One of the most appropriate Neural Network architectures to apply to time series is the Recurrent Neural Network (RNN). Although powerful in theory, this type of network suffers from the vanishing gradient problem, which makes gradient propagation through a large number of steps impossible. An architecture that was introduced to solve this problem is the Long Short Term Memory (LSTM) network~\cite{hochreiter1997long}. This architecture protects its hidden activation from the decay of unrelated inputs and gradients by using gated functions between its ``transaction'' points. The protected hidden activation is the ``cell state'', which is regulated by said gates in the following manner:
\begin{align}
\myvec{f}_t &= \sigma(\myvec{W}_{xf} \cdot \myvec{x}_t + \myvec{W}_{hf} \cdot \myvec{h}_{t-1} + \myvec{b}_f) \\
\myvec{i}_t &= \sigma(\myvec{W}_{xi} \cdot \myvec{x}_t + \myvec{W}_{hi} \cdot \myvec{h}_{t-1} + \myvec{b}_i) \\
\myvec{c}'_t &= \tanh(\myvec{W}_{hc} \cdot \myvec{h}_{t-1} + \myvec{W}_{xc} \cdot \myvec{x}_t + \myvec{b}_c) \\
\myvec{c}_t &= \myvec{f}_t \cdot \myvec{c}_{t-1} + \myvec{i}_t \cdot \myvec{c}'_t \\
\myvec{o}_t &= \sigma(\myvec{W}_{oc} \cdot \myvec{c}_t + \myvec{W}_{oh} \cdot \myvec{h}_{t-1} + \myvec{b}_o) \\
\myvec{h}_t &= \myvec{o}_t \cdot \tanh(\myvec{c}_t)
\end{align}
where $\myvec{f}_t$, $\myvec{i}_t$ and $\myvec{o}_t$ are the activations of the forget, input and output gates at time step $t$, which control how much of the input and the previous state will be considered and how much of the cell state will be included in the hidden activation of the network. The protected cell activation at time step $t$ is denoted by $\myvec{c}_t$, whereas $\myvec{h}_t$ is the activation that will be passed to other components of the model. The matrices $\myvec{W}_{xf}, \myvec{W}_{hf}, \myvec{W}_{xi}, \myvec{W}_{hi}, \myvec{W}_{hc}, \myvec{W}_{xc}, \myvec{W}_{oc}, \myvec{W}_{oh}$ denote the weights connecting each of the activations with the current time-step inputs and the previous time-step activations.
\subsection{Combination of models (CNN-LSTM)}
We also introduce a powerful combination of the two previously described models. The CNN model is applied exactly as described in Section \ref{sec:conv-nets}, using causal convolutions and temporal batching to produce a set of features for each time step. In essence, the CNN acts as the feature extractor of the LOB depth time series and produces a new time series of features with the same length as the original one, with the time steps of the two series corresponding to one another.
An LSTM layer is then applied on the time series produced by the CNN and, in turn, produces a label for each time step. This works in a very similar way to the fully connected layer described in Section~\ref{sec:conv-nets} for temporal batching, but, in contrast to the fully connected layer, the LSTM allows the model to incorporate features from past steps. The model architecture is visualized in Figure~\ref{fig:cnnlstm}.
\subsection{Optimization}
\label{sec:optimization}
The parameters of the models are learned by minimizing the categorical cross entropy loss defined as:
\begin{equation}
\mathcal{L}(\myvec{W}) = -\sum_{i=1}^{L} y_i \cdot \log \hat{y}_i,
\end{equation}
where $L$ is the number of different labels and the notation $\myvec{W}$ is used to refer to the parameters of the models. The ground truth vector is denoted by $\mathbf{y}$, while $\hat{\mathbf{y}}$ is the predicted label distribution. The loss is summed over all samples in each batch. Due to the unavoidable class imbalance of this type of dataset, a weighted loss is employed to improve the mean recall and precision across all classes:
\begin{equation}
\label{eq:loss}
\mathcal{L}(\myvec{W}) = -\sum_{i=1}^{L} c_{y_i} \cdot y_i \cdot \log \hat{y}_i,
\end{equation}
where $c_{y_i}$ is the assigned weight for the class of $y_i$. The individual weight $c_i$ assigned to each class $i$ is calculated as:
\begin{equation}
c_i = \dfrac{|\mathcal{D}|}{n \cdot |\mathcal{D}_i|},
\end{equation}
where $ |\mathcal{D}| $ is the total number of samples in our dataset $\mathcal{D}$, $n$ is the total number of classes (which in our case is 3) and $\mathcal{D}_i$ is the set of samples of our dataset that have been labeled as belonging to class $i$.
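As an illustration, such class weights can be computed from the label array as in the following short Python sketch (the label proportions used here are placeholders).
\begin{verbatim}
# Illustrative computation of the per-class loss weights c_i.
import numpy as np

labels = np.random.choice([-1, 0, 1], size=100000, p=[0.2, 0.6, 0.2])
classes, counts = np.unique(labels, return_counts=True)
weights = len(labels) / (len(classes) * counts)   # c_i = |D| / (n * |D_i|)
class_weight = dict(zip(classes.tolist(), weights.tolist()))
print(class_weight)   # rarer classes receive proportionally larger weights
\end{verbatim}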
The most commonly used method to minimize the loss function defined in (\ref{eq:loss}) and learn the parameters $\myvec{W}$ of the model is gradient descent \cite{werbos1990backpropagation}:
\begin{equation}
\myvec{W}' = \myvec{W} - \eta \cdot \dfrac{\partial \mathcal{L}}{\partial \myvec{W}}
\end{equation}
where $\myvec{W}'$ are the parameters of the model after each gradient descent step and $\eta$ is the learning rate. In this work we utilize the RMSProp optimizer \cite{tieleman2012lecture}, which is an adaptive learning rate method and has been shown to improve the training time and performance of DL models.
\begin{figure}
\centering
\includegraphics[scale=0.3]{CNNLSTM}
\caption{CNN-LSTM model}
\label{fig:cnnlstm}
\end{figure}
The LSTM, CNN and CNN-LSTM models along with all the training algorithms were developed using Keras \cite{chollet2015keras}, which is a framework built on top of the Tensorflow library \cite{tensorflow2015-whitepaper}.
\section{Experimental Evaluation}
\label{sec:experiments}
All the models were tested for step sizes $k = 10, 50, 100,$ and $200$ in (\ref{m-a}), where the $\alpha$ value for each was set to $2 \times 10^{-5},\ 9 \times 10^{-5},\ 3 \times 10^{-4}$ and $ 3.5 \times 10^{-4} $ respectively. The parameter $\alpha$ was chosen in conjunction with the prediction horizon, with the aim of having a relatively balanced distribution of labels across classes. In a real trading scenario it is not possible to have a profitable strategy that creates as many trade signals as ``no-trade'' signals, because it would accumulate enormous commission costs. For that reason $\alpha$ is selected with the aim of obtaining a reasonable ratio of about 20\% long, 20\% short and 60\% stationary labels. The effect of varying the parameter $\alpha$ on the class distribution of labels is shown in Table \ref{alpha-table}. Note that increasing $\alpha$ reduces the number of trade signals; in practice, it should be adjusted depending on the actual commission and slippage costs that are expected to occur.
\begin{table}
\caption{Example of sample distribution across classes depending on $\alpha$ for prediction horizon $k =100$}
\label{alpha-table}
\begin{center}
\begin{tabular}{ | c |c| c| c|}
\hline
\hspace{2em}$\alpha$\hspace{2em} & \hspace{1em}Down\hspace{1em} & Stationary & \hspace{1.5em}Up\hspace{1.5em} \\
\hline\hline
$1.0 \times 10^{-5}$ & $0.39$&$0.17$&$0.45$ \\ \hline
$2.0 \times 10^{-5}$ & $0.38$&$0.19$&$0.43$ \\ \hline
$5.0 \times 10^{-5}$ & $0.35$&$0.25$&$0.41$ \\ \hline
$1.0 \times 10^{-4}$ & $0.30$&$0.33$&$0.36$ \\ \hline
$2.0 \times 10^{-4}$ & $0.23$&$0.49$&$0.28$ \\ \hline
$3.0 \times 10^{-4}$ & $0.18$&$0.60$&$0.22$ \\ \hline
$3.5 \times 10^{-4}$ & $0.15$&$0.66$&$0.19$ \\ \hline
\end{tabular}
\end{center}
\end{table}
We tested the CNN and LSTM models using the raw features and the proposed stationary features separately and compared the results. The architectures of the three models that were tested are described below.
The proposed CNN model consists of the following sequential layers:
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{enumerate}
\item 1D Convolution with 16 filters of size $(10,42)$
\item 1D Convolution with 16 filters of size $(10,)$
\item 1D Convolution with 32 filters of size $(8,)$
\item 1D Convolution with 32 filters of size $(6,)$
\item 1D Convolution with 32 filters of size $(4,)$
\item Fully connected layer with 32 neurons
\item Fully connected layer with 3 neurons
\end{enumerate}
\end{minipage}
\end{center}
The activation function used for all the convolutional and fully connected layers of the CNN is the Parametric Rectified Linear Unit (PRELU) \cite{he2015delving}. The last layer uses the softmax function for the prediction of the probability distribution over the different classes. Each convolutional layer is followed by a Batch Normalization (BN) layer.
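A minimal Keras sketch of a CNN along these lines is shown below; it follows the layer sizes listed above, while the window length $T$, the input dimensionality (inferred from the first-layer filter size) and the remaining hyperparameters are assumptions rather than the exact original implementation.
\begin{verbatim}
# Minimal Keras sketch of the CNN described above (illustrative only).
from tensorflow.keras import layers, models

T, F = 300, 42            # assumed window length; 42 features per time step
inp = layers.Input(shape=(T, F))
x = inp
for filters, width in [(16, 10), (16, 10), (32, 8), (32, 6), (32, 4)]:
    x = layers.Conv1D(filters, width, padding="causal")(x)  # causal convolutions
    x = layers.BatchNormalization()(x)
    x = layers.PReLU()(x)
x = layers.Dense(32)(x)          # applied independently at every time step
x = layers.PReLU()(x)
out = layers.Dense(3, activation="softmax")(x)   # per-step class distribution
model = models.Model(inp, out)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
\end{verbatim}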
The LSTM network uses 32 hidden neurons followed by a feed-forward layer with 64 neurons, using Dropout and PRELU as the activation function. Experimentally we found that the hidden layer of the LSTM should contain 64 or fewer hidden neurons to avoid over-fitting the model. Experimenting with a higher number of hidden neurons would be feasible if the dataset were even larger.
Finally, the CNN-LSTM model applies the convolutional feature extraction layers on the input and then feeds the extracted features, in the correct temporal order, to an LSTM model. The CNN component is composed of the following layers:
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{enumerate}
\item 1D Convolution with 16 filters of size $(5,42)$
\item 1D Convolution with 16 filters of size $(5,)$
\item 1D Convolution with 32 filters of size $(5,)$
\item 1D Convolution with 32 filters of size $(5,)$
\end{enumerate}
\end{minipage}
\end{center}
Note that the receptive field of each convolutional filter in the CNN module is smaller than in the standalone CNN, since the LSTM can capture most of the information from past time steps. The LSTM module has exactly the same architecture as the standalone LSTM. A visual representation of this CNN-LSTM model is shown in Figure~\ref{fig:cnnlstm}. Likewise, PRELU is the activation function used for the CNN and the fully connected layers, while the softmax function is used for the output layer of the network to predict the probability distribution over the classes.
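A corresponding minimal Keras sketch of the CNN-LSTM (again an illustration under the same assumptions as the CNN sketch above; the dropout rate is also an assumption) is the following.
\begin{verbatim}
# Minimal Keras sketch of the CNN-LSTM model (illustrative only).
from tensorflow.keras import layers, models

T, F = 300, 42            # assumed window length and feature dimensionality
inp = layers.Input(shape=(T, F))
x = inp
for filters, width in [(16, 5), (16, 5), (32, 5), (32, 5)]:
    x = layers.Conv1D(filters, width, padding="causal")(x)
    x = layers.BatchNormalization()(x)
    x = layers.PReLU()(x)
x = layers.LSTM(32, return_sequences=True)(x)    # one hidden state per time step
x = layers.Dense(64)(x)
x = layers.PReLU()(x)
x = layers.Dropout(0.5)(x)                       # dropout rate is an assumption
out = layers.Dense(3, activation="softmax")(x)
model = models.Model(inp, out)
\end{verbatim}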
\begin{figure}
\centering
\includegraphics[scale=0.70]{lstm_cost_per_step}
\caption{Mean cost per recurrent step of the LSTM network}
\label{bad-step-score}
\end{figure}
\begin{table*}
\caption{Experimental results for different prediction horizons $k$. The values that are reported are the mean of each metric for the last 20 training epochs.}
\label{results-table}
\begin{center}
\footnotesize
\bgroup
\def\arraystretch{0.8}
\begin{tabular}{ |c|c|c|c|c|c|}
\hline
\multirow{1}{*}{\textbf{Feature Type}} &
\multicolumn{1}{c|}{\textbf{Model}} &
\multicolumn{1}{c|}{\textbf{Mean Recall}} &
\multicolumn{1}{c|}{\textbf{Mean Precision}} &
\multicolumn{1}{c|}{\textbf{Mean F1}} & \multicolumn{1}{c|}{\textbf{Cohen's} $\kappa$} \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=10$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.43 $ & $0.33 $ & $0.04 $ \\ \cline{2-6}
& MLP & ${ 0.34 }$ & ${ 0.34 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\ \cline{2-6}
& CNN & ${ 0.51 }$ & ${ 0.42 }$ & ${ 0.38 }$ & ${ 0.14 }$ \\ \cline{2-6}
& LSTM & ${ 0.49 }$ & ${ 0.41 }$ & ${ 0.35 }$ & ${ 0.12 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.33 $ & $\mathbf{0.46 }$ & $0.30 $ & $0.011 $ \\ \cline{2-6}
& MLP & ${ 0.34 }$ & ${ 0.35 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\ \cline{2-6}
& CNN & ${ 0.54 }$ & ${ 0.44 }$ & ${ 0.43 }$ & ${ 0.19 }$ \\ \cline{2-6}
& LSTM & ${ 0.55 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.18 }$ \\ \cline{2-6}
& CNNLSTM & $\mathbf{ 0.56 }$ & ${ 0.45 }$ & $\mathbf{ 0.44 }$ & $\mathbf{ 0.21 }$ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=50$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.41 $ & $0.32 $ & $0.03 $ \\ \cline{2-6}
& MLP & ${ 0.41 }$ & ${ 0.38 }$ & ${ 0.21 }$ & ${ 0.04 }$ \\ \cline{2-6}
& CNN & ${ 0.50 }$ & ${ 0.42 }$ & ${ 0.37 }$ & ${ 0.13 }$ \\ \cline{2-6}
& LSTM & ${ 0.46 }$ & ${ 0.40 }$ & ${ 0.34 }$ & ${ 0.10 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.39 $ & $0.41 $ & $0.38 $ & $0.09 $ \\ \cline{2-6}
& MLP & $0.49 $ & $0.43 $ & $0.38 $ & $0.14 $ \\ \cline{2-6}
& CNN & $0.55 $ & $0.45 $ & $0.43 $ & $0.20 $ \\ \cline{2-6}
&LSTM & $\mathbf{0.56 } $ & $0.46 $ & $0.44 $ & $0.21 $ \\ \cline{2-6}
& CNNLSTM & $\mathbf{0.56 }$ & $\mathbf{0.47 }$ & $\mathbf{0.47 }$ & $\mathbf{0.24 } $ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=100$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.46 $ & $0.33 $ & $0.05 $ \\ \cline{2-6}
& MLP & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.26 }$ & ${ 0.06 }$ \\ \cline{2-6}
& CNN & ${ 0.49 }$ & ${ 0.42 }$ & ${ 0.37 }$ & ${ 0.12 }$ \\ \cline{2-6}
& LSTM & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.34 }$ & ${ 0.09 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.36 $ & $0.46 $ & $0.35 $ & $0.07 $ \\ \cline{2-6}
& MLP & ${ 0.50 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\ \cline{2-6}
& CNN & ${ 0.54 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.21 }$ \\ \cline{2-6}
& LSTM & $\mathbf{ 0.56 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.20 }$ \\ \cline{2-6}
& CNNLSTM & ${ 0.55 }$ & $\mathbf{ 0.47 }$ & $\mathbf{ 0.48 }$ & $\mathbf{ 0.24 }$ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=200$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.44 $ & $0.31 $ & $0.04 $ \\ \cline{2-6}
& MLP & ${ 0.44 }$ & ${ 0.40 }$ & ${ 0.32 }$ & ${ 0.08 }$ \\ \cline{2-6}
& CNN & ${ 0.47 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\ \cline{2-6}
& LSTM & ${ 0.42 }$ & ${ 0.39 }$ & ${ 0.36 }$ & ${ 0.08 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.38 $ & $0.46 $ & $0.36 $ & $0.10 $ \\ \cline{2-6}
& MLP & ${ 0.49 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.17 }$ \\ \cline{2-6}
& CNN & ${ 0.51 }$ & ${ 0.47 }$ & ${ 0.45 }$ & ${ 0.20 }$ \\ \cline{2-6}
& LSTM & ${ 0.52 }$ & ${ 0.47 }$ & ${ 0.46 }$ & ${ 0.22 }$ \\ \cline{2-6}
& CNNLSTM & $\mathbf{ 0.53 }$ & $\mathbf{ 0.48 }$ & $\mathbf{ 0.49 }$ & $\mathbf{ 0.25 }$ \\ \cline{1-6}
\end{tabular}
\egroup
\end{center}
\end{table*}
One recurring effect we observe when training LSTM networks on LOB data is that for the first steps of each observation window the predictions yield a larger cross-entropy cost, meaning worse performance on our metrics. We ran a set of experiments in which the LSTM was trained on all the steps of the input windows of length $T$. The resulting mean cost per time step can be observed in Figure \ref{bad-step-score}. Consequently, trying to predict the price movement using insufficient past information is not feasible and should be avoided, since it leads to noisy gradients. To avoid this, a ``burn-in'' segment of the input is initially used, so that the model can build its perception of the current market state before its predictions are taken into account. In essence, the first ``burn-in'' steps of the input are skipped, by not allowing any gradient to alter our model until after the 100th time step. We apply the same method to the CNN-LSTM model.
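One simple way to implement this, sketched below in Python (our own illustration, with assumed batch size and window length), is to pass a per-time-step sample weight that zeroes the loss of the first 100 steps of every window.
\begin{verbatim}
# Illustrative "burn-in" masking: zero the loss for the first 100 time steps.
import numpy as np

B, T, burn_in = 64, 300, 100          # assumed batch size and window length
step_weights = np.ones((B, T))
step_weights[:, :burn_in] = 0.0       # no gradient from the burn-in steps

# e.g. for a model with per-time-step outputs (temporal sample weighting):
# model.fit(X, Y, sample_weight=step_weights, ...)
\end{verbatim}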
\begin{figure*}
\centering
\includegraphics[width=1.02\linewidth]{all_training}
\caption{F1 and Cohen's $\kappa$ metrics during training for prediction horizon $k=100$. Plots are smoothed with a mean filter with window=3 to reduce fluctuations.
}
\label{fig:f1-kappa-training}
\end{figure*}
For training the models, the dataset is split as follows. The first 7 days of each stock are used to train the models, while the final 3 days are used as test data. The experiments were conducted for 4 different prediction horizons $k$, as defined in (\ref{m-a}) and (\ref{direction-eq}).
Performance is measured using Cohen's kappa \cite{cohen1960coefficient}, which evaluates the concordance between two sets of given answers while taking into consideration the possibility of random agreement. The mean recall, mean precision and mean F1 score over all 3 classes are also reported. Recall is the number of true positive samples divided by the sum of true positives and false negatives, while precision is the number of true positives divided by the sum of true positives and false positives. The F1 score is the harmonic mean of precision and recall.
The results of the experiments are shown in Table \ref{results-table}. The models trained on the raw price features are compared with the ones trained using the extracted stationary features. The results confirm that extracting stationary features from the data significantly improves the performance of Deep Learning models such as CNNs and LSTMs.
We also trained a Linear SVM model and a simple MLP model and compared them to the DL models. The SVM model was trained using Stochastic Gradient Descent, since the size of the dataset is too large for a regular Quadratic Programming solver. The SVM implementation is provided by the sklearn library \cite{pedregosa2011scikit}. The MLP model consists of three fully connected layers with sizes 128, 64 and 32, with PRELU as the activation for each layer. Dropout is also used to avoid overfitting and the softmax activation function is used in the last layer.
Since both the SVM and the MLP models cannot iterate over time steps to gain the same amount of information as the CNN and LSTM-based models, a window of 50 depth events is flattened into a single sample. This process is applied in a rolling fashion over the whole dataset to generate a dataset upon which the two models can be trained. One important observation is the training fluctuations seen in Figure \ref{fig:f1-kappa-training}, which are caused by the large class imbalance. Similar issues were observed in initial experiments with the CNN and LSTM models, but with the weighted loss described in Section~\ref{sec:optimization} the fluctuations subsided.
The proposed stationary price features significantly outperform the raw price features for all the tested models. This can be attributed to a great extent to the stationary nature of the proposed features. The employed price differences provide an intrinsically stationary and normalized price measure that can be used directly. This is in contrast to the raw price values, which require careful normalization to ensure that they remain within a reasonable range and which suffer from significant non-stationarity issues when the price moves to levels not seen before. By converting the actual prices to their relative difference from the mid price and normalizing that quantity, this fine-grained information is preserved instead of being suppressed by the much larger price movements that occur over time. The proposed combined CNN-LSTM model also outperforms its individual component models, as shown in Figure \ref{fig:f1-kappa-training} and Table \ref{results-table}, indicating that it can better exploit the microstructure existing within the LOB data to produce more accurate predictions.
\section{Conclusion}
In this paper we proposed a novel method for extracting stationary features from raw LOB data, suitable for use with different DL models. Using different ML models, i.e., SVMs, MLPs, CNNs and LSTMs, it was experimentally demonstrated that the proposed features significantly outperform the raw price features. The proposed stationary features achieve this by using the difference of each price in the LOB depth from the mid price as the main quantity, instead of the price itself, which fluctuates much more through time than the price levels within the LOB. A novel combined CNN-LSTM model was also proposed for time series predictions; it was demonstrated that it exhibits more stable behaviour and leads to better results than the CNN and LSTM models.
There are several interesting future research directions. As with all DL applications, more data would enable the use of bigger models without the risk of overfitting that was observed in this work. An RNN-type network could also be used to perform a form of ``intelligent'' resampling, extracting useful features from a specific and limited time interval of depth events; this would avoid losing information and would allow subsequent models to produce predictions for a certain time period rather than for a number of following events. Another important addition would be an attention mechanism \cite{xu2015show}, \cite{cho2015describing}, which would allow the network to better observe the features, ignoring noisy parts of the data and using only the relevant information.
\section*{Acknowledgment}
The research leading to these results has received funding from the H2020 Project BigDataFinance MSCA-ITN-ETN 675044 (http://bigdatafinance.eu), Training for Big Data in Financial Research and Risk Management.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Given a data set and a model with some unknown parameters, the inverse problem aims to find the values of the model parameters that best fit the data.
In this work, in which we focus on systems of interacting elements,
the inverse problem concerns the statistical inference
of the underlying interaction network and of its coupling coefficients from observed data on the dynamics of the system.
Versions of this problem are encountered in physics, biology (e.g., \cite{Balakrishnan11,Ekeberg13,Christoph14}), social sciences and finance (e.g.,\cite{Mastromatteo12,yamanaka_15}), neuroscience (e.g., \cite{Schneidman06,Roudi09a,tyrcha_13}), just to cite a few, and are becoming more and more important due to the increase in the amount of data available from these fields.\\
\indent
A standard approach used in statistical inference is to predict the interaction couplings by maximizing the likelihood function.
This technique, however, requires the evaluation of the
partition function that, in the most general case, concerns a number of computations scaling exponentially with the system size.
Boltzmann machine learning uses Monte Carlo sampling to compute the gradients of the log-likelihood, looking for stationary points \cite{Murphy12}, but this method is computationally manageable only for small systems. A series of faster approximations, such as naive mean-field, the independent-pair approximation \cite{Roudi09a, Roudi09b}, the inversion of TAP equations \cite{Kappen98,Tanaka98}, small-correlation expansions \cite{Sessak09}, adaptive TAP \cite{Opper01}, adaptive cluster expansion \cite{Cocco12} or Bethe approximations \cite{Ricci-Tersenghi12, Nguyen12} have, then, been developed. These techniques take as input the means and correlations of the observed variables, and most of them assume a fully connected graph as the underlying connectivity network, or expand around it by perturbative dilution. In most cases, network reconstruction turns out not to be accurate for small data sizes and/or when couplings are strong or, else, if the original interaction network is sparse.\\
\indent
A further method, substantially improving performance for small data sets, is the so-called Pseudo-Likelihood Method (PLM) \cite{Ravikumar10}. In Ref. \cite{Aurell12} Aurell and Ekeberg performed a comparison between the PLM and some of the just mentioned mean-field-based algorithms on the pairwise interacting Ising-spin ($\sigma = \pm 1$) model, showing that the PLM performs noticeably better, especially on sparse graphs and in the high-coupling limit, i.e., at low temperature.
In this work, we aim at performing statistical inference on a model whose interacting variables are continuous $XY$ spins, i.e., $\sigma \equiv \left(\cos \phi,\sin \phi\right)$ with $\phi \in [0, 2\pi )$. The developed tools can, actually, also be straightforwardly applied to the $p$-clock model \cite{Potts52}, where the phase $\phi$ takes $p$ discrete equispaced values in the $2 \pi$ interval, $\phi_a = a 2 \pi/p$, with $a= 0,1,\dots,p-1$. The $p$-clock model, also called the vector Potts model, gives a hierarchy of discretizations of the $XY$ model as $p$ increases. For $p=2$, one recovers the Ising model, for $p=4$ the Ashkin-Teller model \cite{Ashkin43}, for $p=6$ the ice-type model \cite{Pauling35,Baxter82} and for $p=8$ the eight-vertex model \cite{Sutherland70,Fan70,Baxter71}.
It turns out to be very useful also for numerical implementations of the continuous $XY$ model.
Recent analysis on the multi-body $XY$ model has shown that for a limited number of discrete phase values ($p\sim 16, 32$) the thermodynamic critical properties of the $p\to\infty$ $XY$ limit are promptly recovered \cite{Marruzzo15, Marruzzo16}.
Our main motivation to study statistical inference is that these kinds of models have recently turned out to be rather useful in describing the behavior of optical systems,
including standard mode-locking lasers \cite{Gordon02,Gat04,Angelani07,Marruzzo15} and random lasers \cite{Angelani06a,Leuzzi09a,Antenucci15a,Antenucci15b,Marruzzo16}.
In particular, the inverse problem on the pairwise XY model analyzed here might be of help in recovering images from light propagated through random media.
This paper is organized as follows: in Sec. \ref{sec:model} we introduce the general model and we discuss its derivation also as a model for light transmission through random scattering media.
In Sec. \ref{sec:plm} we introduce the PLM with $l_2$ regularization and with decimation, two variants of the PLM respectively introduced in Ref. \cite{Wainwright06} and \cite{Aurell12} for the inverse Ising problem.
Here, we analyze these techniques for continuous $XY$ spins and we test them on thermalized data generated by Exchange Monte Carlo numerical simulations of the original model dynamics. In Sec. \ref{sec:res_reg} we present the results related to the PLM-$l_2$. In Sec. \ref{sec:res_dec} the results related to the PLM with decimation are reported and its performances are compared to the PLM-$l_2$ and to a variational mean-field method analyzed in Ref. \cite{Tyagi15}. In Sec. \ref{sec:conc}, we outline conclusive remarks and perspectives.
\section{The leading $XY$ model}
\label{sec:model}
The leading model we are considering is defined, for a system of $N$ angular $XY$ variables, by the Hamiltonian
\begin{equation}
\mathcal{H} = - \sum_{ik}^{1,N} J_{ik} \cos{\left(\phi_i-\phi_k\right)}
\label{eq:HXY}
\end{equation}
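For concreteness, a minimal numerical sketch of this Hamiltonian (our illustration, with the convention that each pair is counted once) can be written as follows:
\begin{verbatim}
import numpy as np

def xy_energy(phi, J):
    """Energy of Eq. (eq:HXY): H = -sum_{i<k} J_ik cos(phi_i - phi_k).
    phi: (N,) array of angles in [0, 2*pi); J: (N, N) symmetric real couplings.
    Each pair is counted once (upper triangle); adapt the prefactor if the
    double sum over ordered pairs is meant instead."""
    diff = phi[:, None] - phi[None, :]
    return -np.sum(np.triu(J * np.cos(diff), k=1))
\end{verbatim}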
The $XY$ model is well known in statistical mechanics, displaying important physical
insights, starting from the Berezinskii-Kosterlitz-Thouless
transition in two dimensions\cite{Berezinskii70,Berezinskii71,Kosterlitz72} and moving to, e.g., the
transition of liquid helium to its superfluid state \cite{Brezin82}, and the roughening transition of the interface of a crystal in equilibrium with its vapor \cite{Cardy96}. In the presence of disorder and frustration \cite{Villain77,Fradkin78} the model has been adopted to describe synchronization problems, such as the Kuramoto model \cite{Kuramoto75}, and has been employed in the theoretical modeling of Josephson junction arrays \cite{Teitel83a,Teitel83b} and arrays of coupled lasers \cite{Nixon13}.
Besides several derivations and implementations of the model in quantum and classical physics, for equilibrium or out-of-equilibrium, ordered or fully frustrated systems, Eq. (\ref{eq:HXY}), in its generic form,
has found applications also in other fields, a rather fascinating example being the behavior of starling flocks \cite{Reynolds87,Deneubourg89,Huth90,Vicsek95, Cavagna13}.
Our interest in the $XY$ model lies, though, in optics. Phasor and phase models with pairwise and multi-body interaction terms can, indeed, describe the behavior of electromagnetic modes in both linear and nonlinear optical systems in the analysis of problems such as light propagation and lasing \cite{Gordon02, Antenucci15c, Antenucci15d}. As couplings are strongly frustrated, these models turn out to be especially useful for the study of optical properties of random media \cite{Antenucci15a,Antenucci15b}, as in the noticeable case of random lasers \cite{Wiersma08,Andreasen11,Antenucci15e}, and they might as well be applied to linear scattering problems, e.g., the propagation of waves through opaque systems or disordered fibers.
\subsection{A propagating wave model}
We briefly mention a derivation of the model as a proxy for the propagation of light through random linear media.
Scattering of light is responsible for obstructing our view and making objects opaque. Light rays, once they enter the material, only exit after being scattered multiple times within it. In such a disordered medium, both the direction and the phase of the propagating waves are random. Transmitted light
yields a disordered interference pattern typically having low intensity, random phase and almost no resolution, called a speckle. Nevertheless, in recent years it has been realized that disorder is rather a blessing in disguise \cite{Vellekoop07,Vellekoop08a,Vellekoop08b}. Several experiments have made it possible to control the behavior of light and other optical processes in a given random disordered medium,
by exploiting, e.g., the tools developed for wavefront shaping to control the propagation of light and to engineer the confinement of light \cite{Yilmaz13,Riboli14}.
\\
\indent
In a linear dielectric medium, light propagation can be described through a part of the scattering matrix, the transmission matrix $\mathbb{T}$, linking the outgoing to the incoming fields.
Consider the case in which there are $N_I$ incoming channels and $N_O$ outgoing ones; we can indicate with $E^{\rm in,out}_k$ the input/output electromagnetic field phasors of channel $k$. In the most general case, i.e., without making any particular assumptions on the field polarizations, each light mode and its polarization state can be represented by means of the $4$-dimensional Stokes vector. Each $ t_{ki}$ element of $\mathbb{T}$, thus, is a $4 \times 4$ M{\"u}ller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \cite{Goodman85,Popoff10a,Akbulut11}:
\begin{eqnarray}
E^{\rm out}_k = \sum_{i=1}^{N_I} t_{ki} E^{\rm in}_i \qquad \forall~ k=1,\ldots,N_O
\label{eq:transm}
\end{eqnarray}
We recall that the elements of the transmission matrix are random complex coefficients\cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. \eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\
In the following, for simplicity, we will consider Eq. (\ref{eq:transm}) as our starting point,
where $E^{\rm out}_k$, $E^{\rm in}_i$ and $t_{ki}$ are all complex scalars.
If Eq. \eqref{eq:transm} holds for any $k$, we can write:
\begin{eqnarray}
\int \prod_{k=1}^{N_O} dE^{\rm out}_k \prod_{k=1}^{N_O}\delta\left(E^{\rm out}_k - \sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j \right) = 1
\nonumber
\\
\label{eq:deltas}
\end{eqnarray}
Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way,
rather than looking at the precise solutions of the exact equations (whose parameters are unknown).
To this aim we can introduce Gaussian distributions whose limit for zero variance are the Dirac deltas in Eq. (\ref{eq:deltas}).
Moreover, we consider the ensemble of all possible solutions of Eq. (\ref{eq:transm}) at given $\mathbb{T}$, looking at all configurations of input fields. We, thus, define the function:
\begin{eqnarray}
Z &\equiv &\int_{{\cal S}_{\rm in}} \prod_{j=1}^{N_I} dE^{\rm in}_j \int_{{\cal S}_{\rm out}}\prod_{k=1}^{N_O} dE^{\rm out}_k
\label{def:Z}
\\
\times
&&\prod_{k=1}^{N_O}
\frac{1}{\sqrt{2\pi \Delta^2}} \exp\left\{-\frac{1}{2 \Delta^2}\left|
E^{\rm out}_k -\sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j\right|^2
\right\}
\nonumber
\end{eqnarray}
We stress that the integral of Eq. \eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account.
The space of solutions is delimited by the total power ${\cal P}$ received by the system, i.e.,
${\cal S}_{\rm in}: \{E^{\rm in} |\sum_k I^{\rm in}_k = \mathcal{P}\}$, also implying a constraint on the total amount of energy that is transmitted through the medium, i.e.,
${\cal S}_{\rm out}:\{E^{\rm out} |\sum_k I^{\rm out}_k=c\mathcal{P}\}$, where the attenuation factor $c<1$ accounts for total losses.
As we will see in more detail in the following, since we are interested in inferring the transmission matrix through the PLM, we can omit to explicitly include these terms in Eq. \eqref{eq:H_J}: they do not depend on $\mathbb{T}$ and, hence, do not add any information to the gradients with respect to the elements of $\mathbb{T}$.
Taking the same number of incoming and outgoing channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function
\begin{eqnarray}
\label{eq:z}
&& Z =\int_{\mathcal S} \prod_{j=1}^{N} dE_j \left( \frac{1}{\sqrt{2\pi \Delta^2}} \right)^{N/2}
\hspace*{-.4cm} \exp\left\{
-\frac{ {\cal H} [\{E\};\mathbb{T}] }{2\Delta^2}
\right\}
\\
&&{\cal H} [\{E\};\mathbb{T}] =
- \sum_{k=1}^{N/2}\sum_{j=N/2+1}^{N} \left[E^*_j t_{jk} E_k + E_j t^*_{kj} E_k^*
\right]
\nonumber
\\
&&\qquad\qquad \qquad + \sum_{j=N/2+1}^{N} |E_j|^2+ \sum_{k,l}^{1,N/2}E_k
U_{kl} E_l^*
\nonumber
\\
\label{eq:H_J}
&&\hspace*{1.88cm } = - \sum_{nm}^{1,N} E_n J_{nm} E_m^*
\end{eqnarray}
where ${\cal H}$ is a real-valued function by construction, we have introduced the effective input-input coupling matrix
\begin{equation}
U_{kl} \equiv \sum_{j=N/2+1}^{N}t^*_{lj} t_{jk}
\label{def:U}
\end{equation}
and the whole interaction matrix reads (here $\mathbb{T} \equiv \{ t_{jk} \}$)
\begin{equation}
\label{def:J}
\mathbb J\equiv \left(\begin{array}{ccc|ccc}
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\phantom{()}&-\mathbb{U} \phantom{()}&\phantom{()}&\phantom{()}&{\mathbb{T}}&\phantom{()}\\
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\hline
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\phantom{()}& \mathbb T^\dagger&\phantom{()}&\phantom{()}& - \mathbb{I} &\phantom{()}\\
\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}\\
\end{array}\right)
\end{equation}
Determining the electromagnetic complex amplitude configurations that minimize the {\em cost function} ${\cal H}$, Eq. (\ref{eq:H_J}), amounts to maximizing the overall distribution peaked around the solutions of the transmission Eqs. (\ref{eq:transm}). As the variance $\Delta^2\to 0$, eventually, the initial set of Eqs. (\ref{eq:transm}) is recovered. The ${\cal H}$ function, thus, plays the role of a Hamiltonian and $\Delta^2$ the role of a noise-inducing temperature. The exact numerical problem corresponds to the zero-temperature limit of the statistical mechanical problem. Working with real data, though, which are noisy, a finite ``temperature''
allows for a better representation of the ensemble of solutions to the sets of equations of continuous variables.
Now, we can express every phasor in Eq. \eqref{eq:z} as $E_k = A_k e^{\imath \phi_k}$. As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or as \textit{quenched} with respect to phases.
The first condition occurs, for instance, to the input intensities $|E^{\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \cite{Popoff11}.
With \textit{quenched} here we mean, instead, that the intensity of each mode is the same for every solution of Eq. \eqref{eq:transm} at fixed $\mathbb T$.
We stress that including intensities in the model does not preclude the inference analysis, but it is outside the focus of the present work and will be considered elsewhere.
If all intensities are uniform in input and in output, this amounts to a constant rescaling of each one of the four sectors of the matrix $\mathbb J$ in Eq. (\ref{def:J}), which does not change the properties of the matrices.
For instance, if the original transmission matrix is unitary, so will be the rescaled one, and the matrix $\mathbb U$ will be diagonal.
Otherwise, if intensities are \textit{quenched}, i.e., they can be considered as constants in Eq. (\ref{eq:transm}),
they are inhomogeneous with respect to phases. The generic Hamiltonian element will, therefore, rescale as
\begin{eqnarray}
E^*_n J_{nm} E_m = J_{nm} A_n A_m e^{\imath (\phi_n-\phi_m)} \to J_{nm} e^{\imath (\phi_n-\phi_m)}
\nonumber
\end{eqnarray}
and the properties of the original $J_{nm}$ components are not conserved in the rescaled one. In particular, we have no argument, anymore, to possibly set the rescaled $U_{nm}\propto \delta_{nm}$.
Eventually, we end up with the complex couplings $XY$ model, whose real-valued Hamiltonian is written as
\begin{eqnarray}
\mathcal{H}& = & - \frac{1}{2} \sum_{nm} J_{nm} e^{-\imath (\phi_n - \phi_m)} + \mbox{c.c.}
\label{eq:h_im}
\\ &=& - \frac{1}{2} \sum_{nm} \left[J^R_{nm} \cos(\phi_n - \phi_m)+
J^I_{nm}\sin (\phi_n - \phi_m)\right]
\nonumber
\end{eqnarray}
where $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Since $\mathbb J$ is Hermitian, $J^R_{nm}=J^R_{mn}$ is symmetric and $J_{nm}^I=-J_{mn}^I$ is skew-symmetric.
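As an illustration (ours, not part of the original derivation), the Hamiltonian of Eq. \eqref{eq:h_im} can be evaluated numerically as:
\begin{verbatim}
import numpy as np

def xy_energy_complex(phi, JR, JI):
    """Hamiltonian of Eq. (eq:h_im): JR symmetric, JI skew-symmetric,
    phi an (N,) array of phases in [0, 2*pi)."""
    d = phi[:, None] - phi[None, :]
    return -0.5 * np.sum(JR * np.cos(d) + JI * np.sin(d))
\end{verbatim}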
\section{Pseudolikelihood Maximization}
\label{sec:plm}
The inverse problem consists in the reconstruction of the parameters $J_{nm}$ of the Hamiltonian, Eq. (\ref{eq:h_im}).
Given a set of $M$ data configurations of $N$ spins
$\bm\sigma = \{ \cos \phi_i^{(\mu)},\sin \phi_i^{(\mu)} \}$, $i = 1,\dots,N$ and $\mu=1,\dots,M$, we want to \emph{infer} the couplings:
\begin{eqnarray}
\bm \sigma \rightarrow \mathbb{J}
\nonumber
\end{eqnarray}
With this purpose in mind,
in the rest of this section we implement the working equations for the techniques used.
In order to test our methods, we generate the input data, i.e., the configurations, by Monte-Carlo simulations of the model.
The joint probability distribution of the $N$ variables $\bm{\phi}\equiv\{\phi_1,\dots,\phi_N\}$, follows the Gibbs-Boltzmann distribution:
\begin{equation}\label{eq:p_xy}
P(\bm{\phi}) = \frac{1}{Z} e^{-\beta \mathcal{H\left(\bm{\phi}\right)}} \quad \mbox{ where } \quad Z = \int \prod_{k=1}^N d\phi_k e^{-\beta \mathcal{H\left(\bm{\phi}\right)}}
\end{equation}
and where we denote $\beta=\left( 2\Delta^2 \right)^{-1}$ with respect to the formalism of Eq. (\ref{def:Z}).
In order to stick to usual statistical inference notation, in the following we will rescale the couplings by a factor $\beta / 2$: $\beta J_{ij}/2 \rightarrow J_{ij}$.
The main idea of the PLM is to work with the conditional probability distribution of one variable $\phi_i$ given all other variables,
$\bm{\phi}_{\backslash i}$:
\begin{eqnarray}
\nonumber
P(\phi_i | \bm{\phi}_{\backslash i}) &=& \frac{1}{Z_i} \exp \left \{ {H_i^x (\bm{\phi}_{\backslash i})
\cos \phi_i + H_i^y (\bm{\phi}_{\backslash i}) \sin \phi_i } \right \}
\\
\label{eq:marginal_xy}
&=&\frac{e^{H_i(\bm{\phi}_{\backslash i}) \cos{\left(\phi_i-\alpha_i(\bm{\phi}_{\backslash i})\right)}}}{2 \pi I_0(H_i)}
\end{eqnarray}
where $H_i^x$ and $H_i^y$ are defined as
\begin{eqnarray}
H_i^x (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \cos \phi_j - \sum_{j (\neq i) } J_{ij}^{I} \sin \phi_j \phantom{+ h^R_i} \label{eq:26} \\
H_i^y (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \sin \phi_j + \sum_{j (\neq i) } J_{ij}^{I} \cos \phi_j \phantom{ + h_i^{I} }\label{eq:27}
\end{eqnarray}
and $H_i= \sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\alpha_i = \arctan H_i^y/H_i^x$ and we introduced the modified Bessel function of the first kind:
\begin{equation}
\nonumber
I_k(x) = \frac{1}{2 \pi}\int_{0}^{2 \pi} d \phi e^{x \cos{ \phi}}\cos{k \phi}
\end{equation}
Given $M$ observation samples $\bm{\phi}^{(\mu)}=\{\phi^\mu_1,\ldots,\phi^\mu_N\}$, $\mu = 1,\dots, M$, the
pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq. (\ref{eq:marginal_xy}),
\begin{eqnarray}
\label{eq:L_i}
L_i &=& \frac{1}{M} \sum_{\mu = 1}^M \ln P(\phi_i^{(\mu)}|\bm{\phi}^{(\mu)}_{\backslash i})
\\
\nonumber
& =& \frac{1}{M} \sum_{\mu = 1}^M \left[ H_i^{(\mu)} \cos( \phi_i^{(\mu)} - \alpha_i^{(\mu)}) - \ln 2 \pi I_0\left(H_i^{(\mu)}\right)\right] \, .
\end{eqnarray}
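For illustration, a direct (unoptimized) numerical transcription of Eqs. \eqref{eq:26}--\eqref{eq:L_i} could read as follows; the function name and array layout are our own choices.
\begin{verbatim}
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, I_0

def pseudo_loglik_i(i, phi_samples, JR, JI):
    """Pseudo-log-likelihood L_i of Eq. (eq:L_i) for site i.
    phi_samples: (M, N) array of sampled phases; JR, JI: (N, N) coupling parts."""
    cos, sin = np.cos(phi_samples), np.sin(phi_samples)
    mask = np.ones(JR.shape[0], dtype=bool)
    mask[i] = False                                                 # exclude j = i
    Hx = cos[:, mask] @ JR[i, mask] - sin[:, mask] @ JI[i, mask]    # Eq. (eq:26)
    Hy = sin[:, mask] @ JR[i, mask] + cos[:, mask] @ JI[i, mask]    # Eq. (eq:27)
    H = np.sqrt(Hx**2 + Hy**2)
    # H_i cos(phi_i - alpha_i) = Hx cos(phi_i) + Hy sin(phi_i)
    return np.mean(Hx * cos[:, i] + Hy * sin[:, i] - np.log(2 * np.pi * i0(H)))
\end{verbatim}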
The underlying idea of PLM is that an approximation of the true parameters of the model is obtained for values that maximize the functions $L_i$.
The specific maximization scheme differentiates the different techniques.
\subsection{PLM with $l_2$ regularization}
Especially for the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine from moving towards high values of
the couplings $J_{ij}$ without converging. We will adopt an $l_2$ regularization, so that the Pseudolikelihood function (PLF) at site $i$ reads:
\begin{equation}\label{eq:plf_i}
{\cal L}_i = L_i
- \lambda \sum_{i \neq j} \left(J_{ij}^R\right)^2 - \lambda \sum_{i \neq j} \left(J_{ij}^I\right)^2
\end{equation}
with $\lambda>0$.
Note that the value of $\lambda$ has to be chosen arbitrarily, though not too large, so that the regularization does not overcome $L_i$.
The standard implementation of the PLM consists in maximizing each ${\cal L}_i$, for $i=1\dots N$, separately. The expected values of the couplings are then:
\begin{equation}
\{ J_{i j}^*\}_{j\in \partial i} := \mbox{arg max}_{ \{ J_{ij} \}}
\left[{\cal L}_i\right]
\end{equation}
In this way, we obtain two estimates for the coupling $J_{ij}$, one from maximization of ${\cal L}_i$, $J_{ij}^{(i)}$, and another one from ${\cal L}_j$, say $J_{ij}^{(j)}$.
Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric.
The final estimate for $J_{ij}$ can then be obtained averaging the two results:
\begin{equation}\label{eq:symm}
J_{ij}^{\rm inferred} = \frac{J_{ij}^{(i)} + \bar{J}_{ij}^{(j)}}{2}
\end{equation}
where with $\bar{J}$ we indicate the complex conjugate.
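A minimal sketch of this procedure (ours; it relies on the \texttt{pseudo\_loglik\_i} function above and leaves gradients to numerical differentiation, so it is illustrative rather than efficient) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def infer_couplings_l2(phi_samples, lam=0.01):
    """PLM-l2 sketch: maximize each regularized L_i (Eq. eq:plf_i),
    then average the two estimates of J_ij as in Eq. (eq:symm)."""
    M, N = phi_samples.shape
    JR_est, JI_est = np.zeros((N, N)), np.zeros((N, N))
    for i in range(N):
        def neg_plf(x):
            # x stacks row i of JR and JI; the diagonal entry is irrelevant
            # for L_i (j = i is excluded) and is pushed to zero by the regularizer.
            JR, JI = np.zeros((N, N)), np.zeros((N, N))
            JR[i, :], JI[i, :] = x[:N], x[N:]
            Li = pseudo_loglik_i(i, phi_samples, JR, JI)
            return -(Li - lam * np.sum(x[:N]**2) - lam * np.sum(x[N:]**2))
        res = minimize(neg_plf, np.zeros(2 * N))
        JR_est[i, :], JI_est[i, :] = res.x[:N], res.x[N:]
    J = JR_est + 1j * JI_est
    return (J + J.conj().T) / 2   # J^inferred = (J^(i) + conj(J^(j))) / 2
\end{verbatim}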
It is worth noting that the pseudolikelihood $L_i$, Eq. \eqref{eq:L_i}, is characterized by the
following properties: (i) the normalization term of Eq.\eqref{eq:marginal_xy} can be
computed analytically, at odds with the {\em full} likelihood case that
in general requires a computational time which scales exponentially
with the size of the system; (ii) the $\ell_2$-regularized pseudolikelihood
defined in Eq.\eqref{eq:plf_i} is strictly concave (i.e., it has a single
maximizer)\cite{Ravikumar10}; (iii) it is consistent, i.e., if $M$ samples are
generated by a model $P(\phi | J^*)$ the maximizer tends to $J^*$
for $M\rightarrow\infty$\cite{besag1975}. Note also that (iii) guarantees that
$|J^{(i)}_{ij}-J^{(j)}_{ij}| \rightarrow 0$ for $M\rightarrow \infty$.
In Secs. \ref{sec:res_reg}, \ref{sec:res_dec}
we report the results obtained and we analyze the performances of the PLM having taken the configurations from Monte-Carlo simulations of models whose details are known.
\subsection{PLM with decimation}
Even though the PLM with $l_2$-regularization allows the inference to reach the low-temperature region and the low-sampling regime with better performances than mean-field methods, in some situations some couplings are overestimated and not at all symmetric. Moreover, the technique carries the bias of the $l_2$ regularizer.
To overcome these problems, Decelle and Ricci-Tersenghi introduced a new method \cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$,
\begin{eqnarray}
{\cal L}\equiv \frac{1}{N}\sum_{i=1}^N \mbox{L}_i
\end{eqnarray}
and, then, it recursively sets to zero the couplings that are estimated to be very small. We expect that, as long as we are setting to zero couplings that are unnecessary to fit the data, there should not be much change in ${\cal L}$. Keeping on with the decimation, a point is reached where ${\cal L}$ decreases abruptly, indicating that relevant couplings are being decimated and under-fitting is taking place.
Let us define by $x$ the fraction of non-decimated couplings. To have a quantitative measure for the halt criterion of the decimation process, a tilted ${\cal L}$ is defined as,
\begin{eqnarray}
\mathcal{L}_t &\equiv& \mathcal{L} - x \mathcal{L}_{\textup{max}} - (1-x) \mathcal{L}_{\textup{min}} \label{$t$PLF}
\end{eqnarray}
where
\begin{itemize}
\item $\mathcal{L}_{\textup{min}}$ is the pseudolikelihood of a model with independent variables. In the XY case: $\mathcal{L}_{\textup{min}}=-\ln{2 \pi}$.
\item
$\mathcal{L}_{\textup{max}}$ is the pseudolikelihood of the fully-connected model and it is maximized over all the $N(N-1)/2$ possible couplings.
\end{itemize}
At the first step, when $x=1$, $\mathcal{L}$ takes value $\mathcal{L}_{\rm max}$ and $\mathcal{L}_t=0$. On the last step, for an empty graph, i.e., $x=0$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm min}$ and, hence, again $\mathcal{L}_t =0$.
In the intermediate steps, during the decimation procedure, as $x$ is decreasing from $1$ to $0$, one observes firstly that $\mathcal{L}_t$ increases linearly and, then, it displays an abrupt decrease indicating that from this point on relevant couplings are being decimated\cite{Decelle14}. In Fig. \ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as
\begin{eqnarray}\label{eq:errj}
\mbox{err}_J \equiv \sqrt{\frac{\sum_{i<j} (J^{\rm inferred}_{ij} -J^{\rm true}_{ij})^2}{N(N-1)/2}} \label{err}
\end{eqnarray}
We stress that the ${\cal L}_t$ maximum is obtained ignoring the underlying graph, while the err$_J$ minimum can be evaluated once the true graph has been reconstructed.
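Schematically (our sketch; \texttt{fit\_plm} is an assumed helper returning the couplings estimated on the non-decimated pairs together with the corresponding value of ${\cal L}$), the decimation loop reads:
\begin{verbatim}
import numpy as np

def decimation_schedule(phi_samples, step, fit_plm):
    """PLM + decimation sketch: repeatedly fit the active couplings, zero out
    the weakest ones, and record the tilted pseudolikelihood of Eq. (tPLF).
    fit_plm(phi_samples, active) -> (J_est, L) is an assumed helper."""
    M, N = phi_samples.shape
    active = np.triu(np.ones((N, N), dtype=bool), k=1)   # undirected pairs i < j
    L_min = -np.log(2 * np.pi)                            # independent XY spins
    _, L_max = fit_plm(phi_samples, active)               # fully connected fit
    history = []
    while active.any():
        J_est, L = fit_plm(phi_samples, active)
        x = active.sum() / (N * (N - 1) / 2)              # non-decimated fraction
        history.append((active.sum(), L - x * L_max - (1 - x) * L_min))
        mags = np.where(active, np.abs(J_est), np.inf)
        weakest = np.unravel_index(np.argsort(mags, axis=None)[:step], mags.shape)
        active[weakest] = False                           # decimate weakest couplings
    return history   # the maximum of L_t along this schedule selects the graph
\end{verbatim}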
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_new.eps}
\caption{The tilted pseudolikelihood ${\cal L}_t$ curve and the reconstruction error vs the number of decimated couplings for an ordered, real-valued $J$ on the 2D XY model with $N=64$ spins. The peak of ${\cal L}_t$ coincides with the dip of the error.}
\label{Jor1-$t$PLF}
\end{figure}
In the next sections we will show the results obtained on the $XY$ model analyzing the performances of the two methods and comparing them also with a mean-field method \cite{Tyagi15}.
\section{Inferred couplings with PLM-$l_2$}
\label{sec:res_reg}
\subsection{$XY$ model with real-valued couplings}
In order to obtain the couplings $J_{ij}^{\rm inferred}$, the function $-\mathcal{L}_i$ is minimized using the vector of derivatives ${\partial \mathcal{L}_i}/\partial J_{ij}$. The process is repeated for every site, thus obtaining a fully connected adjacency matrix. The results presented here are obtained with $\lambda = 0.01$.
For the minimization we have used the MATLAB routine \emph{minFunc\_2012}\cite{min_func}.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_JR_soJR_TPJR}
\caption{Top panels: instances of single site coupling reconstruction for the case of $N=64$ XY spins on a 2D lattice with ordered $J$ (left column) and bimodal distributed $J$ (right column).
Bottom panels: sorted couplings.}
\label{PL-Jor1}
\end{figure}
To produce the data by means of numerical Monte Carlo simulations a system with $N=64$ spin variables is considered on a deterministic 2D lattice with periodic boundary conditions.
Each spin has then connectivity $4$, i.e., we expect to infer an adjacency matrix with $N c = 256$ couplings different from zero.
The dynamics of the simulated model is based on the Metropolis algorithm and parallel tempering\cite{earl05} is used to speed up the thermalization of the system.
The thermalization is tested by looking at the average energy over logarithmic time windows, and the acquisition of independent configurations starts only after the system is well thermalized.
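A bare-bones single-temperature Metropolis sweep for this model (our sketch; the parallel tempering layer used to speed up thermalization is omitted) could look like:
\begin{verbatim}
import numpy as np

def metropolis_sweep(phi, J, beta, rng, delta=0.5):
    """One Metropolis sweep for the XY model of Eq. (eq:HXY) with real couplings J.
    Assumes J[i, i] = 0; phi is updated in place; rng is a numpy Generator."""
    N = phi.size
    for i in rng.permutation(N):
        new = (phi[i] + rng.uniform(-delta, delta)) % (2 * np.pi)
        dE = -np.sum(J[i] * (np.cos(new - phi) - np.cos(phi[i] - phi)))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            phi[i] = new
    return phi
\end{verbatim}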
For the values of the couplings we considered two cases: an ordered case, indicated in the figure as $J$ ordered (e.g., left column of Fig. \ref{PL-Jor1}) where the couplings can take values $J_{ij}=0,J$, with $J=1$,
and a quenched disordered case, indicated in the figures as $J$ disordered (e.g., right column of Fig. \ref{PL-Jor1})
where the couplings can take also negative values, i.e.,
$J_{ij}=0,J,-J$, with a certain probability. The results here presented were obtained with bimodal distributed $J$s:
$P(J_{ij}=J)=P(J_{ij}=-J)=1/2$. The performances of the PLM have shown not to depend on $P(J)$.
We recall that in Sec. \ref{sec:plm} we used the temperature-rescaled notation, i.e., $J_{ij}$ stands for $J_{ij}/T$.
To analyze the performances of the PLM, in Fig. \ref{PL-Jor1} the inferred couplings, $\mathbb{J}^R_{\rm inf}$, are shown on top of the original couplings, $\mathbb{J}^R_{\rm true}$.
The first figure (from top) in the left column shows the $\mathbb{J}^R_{\rm inf}$ (black) and the $\mathbb{J}^R_{\rm true}$ (green) for a given spin
at temperature $T/J=0.7$ and number of samples $M=1024$. The PLM appears to reconstruct the correct couplings, though zero couplings are always given a small inferred non-zero value.
In the bottom panel of the left column of Fig. \ref{PL-Jor1}, both the $\mathbb{J}^R_{\rm{inf}}$ and the $\mathbb{J}^R_{\rm{true}}$ are sorted in decreasing order and plotted on top of each other.
We can clearly see that $\mathbb{J}^R_{\rm inf}$ reproduces the expected step function. Even though the jump is smeared, the difference between inferred couplings corresponding to the set of non-zero couplings
and to the set of zero couplings can be clearly appreciated.
Similarly, the plots in the right column of Fig. \ref{PL-Jor1} show the results obtained for the case with bimodal disordered couplings, for the same working temperature and number of samples.
In particular, note that the algorithm infers half positive and half negative couplings, as expected.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_errJ_varT_varM}
\caption{Reconstruction error $\mbox{err}_J$, cf. Eq. (\ref{eq:errj}), plotted as a function of temperature (left) for three values of the number of samples $M$, and as a function of $M$ (right) for three values of the temperature in the ordered system, i.e., $J_{ij}=0,1$.
The system size is $N=64$.}
\label{PL-err-Jor1}
\end{figure}
In order to analyze the effects of the number of samples and of the temperature regimes, we plot in Fig. \ref{PL-err-Jor1} the reconstruction error, Eq. (\ref{err}), as a function of temperature for three different sample sizes $M=64,128$ and $512$.
The error is seen to rise sharply at low temperature, incidentally, in the ordered case, for $T<T_c \sim 0.893$, which is the Kosterlitz-Thouless transition temperature of the 2D XY model\cite{Olsson92}.
However, we can see that if only $M=64$ samples are considered, $\mbox{err}_J$ remains high independently of the working temperature.
In the right plot of Fig. \ref{PL-err-Jor1}, $\mbox{err}_J$ is plotted as a function of $M$ for three different working temperatures $T/J=0.4,0.7$ and $1.3$. As we expect,
$\mbox{err}_J$ decreases as $M$ increases. This effect was observed also with mean-field inference techniques on the same model\cite{Tyagi15}.
To better understand the performances of the algorithms, in Fig. \ref{PL-varTP-Jor1} we show several True Positive (TP) curves obtained for various values of $M$ at three different temperatures $T$. As long as $M$ is large and/or the temperature is not too small, we are able to reconstruct correctly all the couplings present in the system (see bottom plots).
The True Positive curve displays how many times the inference method finds a true link of the original network, as a function of the index of the vector of reconstructed couplings $J_{ij}^{\rm inf}$ sorted by absolute value.
The index $n_{(ij)}$ labels the related spin pairs $(ij)$. The TP curve is obtained as follows:
first the values $|J^{\rm inf}_{ij}|$ are sorted in descending order and the spin pairs $(ij)$ are ordered according to the sorting position of $|J^{\rm inf}_{ij}|$. Then,
a cycle over the ordered set of pairs $(ij)$, indexed by $n_{(ij)}$, is performed, comparing with the original network coupling $J^{\rm true}_{ij}$ and verifying whether it is zero or not. The true positive curve is computed as
\begin{equation}
\mbox{TP}[n_{(ij)}]= \frac{\mbox{TP}\left[n_{(ij)}-1\right] \left(n_{(ij)}-1\right)+ 1 -\delta_{J^{\rm true}_{ij},0}}{n_{(ij)}}
\end{equation}
As long as $J^{\rm true}_{ij} \neq 0$, TP$=1$. As soon as the true coupling of a given $(ij)$ pair in the sorted list is zero, the TP curve departs from one.
In our case, where the connectivity per spin of the original system is $c=4$ and there are $N=64$ spins, we know that we will have $256$ non-zero couplings.
If the inverse problem is successful, hence, we expect a steep decrease of the TP curve when $n_{(ij)}=256$ is exceeded.
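Equivalently (our sketch), the TP curve is the running fraction of true links among the top-ranked $|J^{\rm inf}_{ij}|$:
\begin{verbatim}
import numpy as np

def true_positive_curve(J_inf, J_true):
    """TP curve: rank pairs (i < j) by |J_inf| in descending order and return
    the running fraction of pairs whose true coupling is non-zero."""
    iu = np.triu_indices_from(J_inf, k=1)
    order = np.argsort(-np.abs(J_inf[iu]))
    hits = (J_true[iu][order] != 0).astype(float)
    return np.cumsum(hits) / np.arange(1, hits.size + 1)
\end{verbatim}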
In Fig. \ref{PL-varTP-Jor1}
it is shown that, almost independently of $T/J$, the TP score improves as $M$ increases. Results are plotted for three different temperatures, $T=0.4,1$ and $2.2$, with increasing number of samples $M = 64, 128,512$ and $1024$ (clockwise).
We can clearly appreciate the role of the temperature when the size of the data set is not very large: for small $M$, $T=0.4$ performs better.
When $M$ is high enough (e.g., $M=1024$), instead, the TP curves do not appear to be strongly influenced by the temperature.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_TPJR_varT_varM}
\caption{TP curves for 2D short-range ordered $XY$ model with $N=64$ spins at three different values of $T/J$ with increasing - clockwise from top - $M$.}
\label{PL-varTP-Jor1}
\end{figure}
\subsection{$XY$ model with complex-valued couplings}
For the complex $XY$ model we have to simultaneously infer two separate coupling matrices, $J^R_{i j}$ and $J^I_{i j}$. As before, a system of $N=64$ spins is considered on a 2D lattice.
For the couplings we have considered both ordered and bimodal disordered cases.
In Fig. \ref{PL-Jor3}, a single row of the matrix $J$ (top) and the whole set of sorted couplings (bottom) are displayed for the ordered model (same legend as in Fig. \ref{PL-Jor1}) for the real part, $J^R$ (left column), and the imaginary part, $J^I$ (right column).
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_l2_JRJI_soJRJI_TPJRJI}
\caption{Results related to the ordered complex XY model with $N=64$ spins on a 2D lattice. Top: instances of single site reconstruction for the real, JR (left column), and
the imaginary, JI (right column), part of $J_{ij}$. Bottom: sorted values of JR (left) and JI (right).}
\label{PL-Jor3}
\end{figure}
\section{PLM with Decimation}
\label{sec:res_dec}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_varT_varM}
\caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted as a function of the number of decimated couplings. Top: Different ${\cal L}_t$ curves obtained for different values of $M$ plotted on top of each other. Here $T=1.3$. The black line indicates the expected number of decimated couplings, $x^*=(N (N-1) - N c)/2=1888$. As we can see, as $M$ increases, the maximum point of ${\cal L}_t$ approaches $x^*$. Bottom: Different ${\cal L}_t$ curves obtained for different values of $T$ with $M=2048$. We can see that, with this value of $M$, no differences can be appreciated in the maximum points of the different ${\cal L}_t$ curves.}
\label{var-$t$PLF}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_peak_statistics_varM_prob.eps}
\caption{Number of most likely decimated couplings, estimated by the maximum point of $\mathcal{L}_t$, as a function of the number of samples $M$. We can clearly see that the maximum point of $\mathcal{L}_t$ tends toward $x^*$, which is the correct expected number of zero couplings in the system.}
\label{PLF_peak_statistics}
\end{figure}
For the ordered real-valued XY model we show in Fig. \ref{var-$t$PLF}, top panel, the outcome of the progressive decimation on the tilted pseudolikelihood, $\mathcal{L}_t$, Eq. \eqref{$t$PLF}: from a fully connected lattice down to an empty lattice. The figure shows the behaviour of $\mathcal{L}_t$ for three different data sizes $M$. A clear data-size dependence of the maximum point of $\mathcal{L}_t$, signalling the most likely value for decimation, is shown. For small $M$ the most likely number of couplings is overestimated and for increasing $M$ it tends to the true value, as displayed in Fig. \ref{PLF_peak_statistics}. In the bottom panel of Fig. \ref{var-$t$PLF} we display instead different
$\mathcal{L}_t$ curves obtained for three different values of $T$.
Even though the values of $\mathcal{L}_t$ decrease with increasing temperature, the value of the most likely number of decimated couplings appears to be quite independent of $T$ for $M=2048$ samples.
In Fig. \ref{fig:Lt_complex} we eventually display the tilted pseudolikelihood for a 2D network with complex-valued ordered couplings, where the decimation of the real and imaginary coupling matrices proceeds in parallel, that is,
when a real coupling is small enough to be decimated its imaginary part is also decimated, and vice versa.
One can see that, though the separate errors for the real and imaginary parts differ in absolute value, they display the same dip, to be compared with the maximum point of $\mathcal{L}_t$.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_dec_tPLF_new}
\caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted with the reconstruction errors for the XY model with $N=64$ spins on a 2D lattice. These results refer to the case of ordered and complex-valued couplings. The full (red) line indicates ${\cal L}_t$. The dashed (green)
and the dotted (blue) lines show the reconstruction errors (Eq. \eqref{eq:errj}) obtained for the real and the imaginary couplings respectively. We can see that both ${\rm err_{JR}}$ and ${\rm err_{JI}}$ have a minimum at $x^*$.}
\label{fig:Lt_complex}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_JR_soJR_TPJR}
\caption{XY model on a 2D lattice with $N=64$ sites and real valued couplings. The graphs show the inferred (dashed black lines) and true couplings (full green lines) plotted on top of each other. The left and right columns refer to the
cases of ordered and bimodal disordered couplings, respectively. Top figures: single site reconstruction, i.e., one row of the matrix $J$. Bottom figures: couplings are plotted sorted in descending order.}
\label{Jor1_dec}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_dec_JRJI_soJRJI_TPJRJI}
\caption{XY model on a 2D lattice with $N=64$ sites and ordered complex-valued couplings.
The inferred and true couplings are plotted on top of each other. The left and right columns show the real and imaginary parts, respectively, of the couplings. Top figures refer to a single site reconstruction, i.e., one row of the matrix $J$. Bottom figures report the couplings sorted in descending order.}
\label{Jor3_dec}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_TPJR_varT}
\caption{True Positive curves obtained with the three techniques: PLM with decimation, (blue) dotted line, PLM with $l_2$ regularization, (green) dashed line, and mean-field, (red) full line. These results refer to real-valued ordered couplings with $N=64$ spins on a 2D lattice. The temperature is here $T=0.7$ while the four graphs refer to different sample sizes: $M$ increases clockwise.}
\label{MF_PL_TP}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_errJ_varT_varM}
\caption{Variation of the reconstruction error, ${\rm err_J}$, with temperature as obtained with the three different techniques, see Fig. \ref{MF_PL_TP}, for four different sample sizes: clockwise from top $M=512,1024, 2048$ and $4096$.}
\label{MF_PL_err}
\end{figure}
Once the most likely network has been identified through the decimation procedure, we perform the same analysis displayed in Fig. \ref{Jor1_dec} for ordered and then quenched disordered real-valued couplings,
and in Fig. \ref{Jor3_dec} for complex-valued ordered couplings. In comparison to the results shown in Sec. \ref{sec:res_reg},
the PLM with decimation leads to rather cleaner results. In Figs. \ref{MF_PL_err} and \ref{MF_PL_TP} we compare the performances of the PLM with decimation with those of the PLM with $l_2$-regularization. These two techniques are also compared with a mean-field technique previously implemented on the same XY systems\cite{Tyagi15}.
For what concerns the network of connecting links, in Fig. \ref{MF_PL_TP} we compare the TP curves obtained with the three techniques. The results refer to the case of ordered and real valued couplings, but similar behaviours were obtained for the other cases analysed.
The four graphs are related to different sample sizes, with $M$ increasing clockwise. When $M$ is high enough, all techniques reproduce the true network.
However, for lower values of $M$ the performances of the PLM with $l_2$ regularization and with decimation clearly overcome those of the previous mean-field technique.
In particular, for $M=256$ the PLM techniques still reproduce the original network while the mean-field method fails to find more than half of the couplings.
When $M=128$, the network is clearly reconstructed only through the PLM with decimation while the PLM with $l_2$ regularization underestimates the couplings.
Furthermore, we notice that the PLM method with decimation is able to clearly infer the network of interactions even when $M=N$, signalling that it could be considered also in the under-sampling regime $M<N$.
In Fig. \ref{MF_PL_err} we compare the temperature behaviour of the reconstruction error.
It can be observed that for all temperatures and for all sample sizes the reconstruction error, ${\rm err_J}$ (plotted here in log-scale), obtained with the PLM + decimation is always smaller than
the one obtained with the other techniques. The temperature behaviour of ${\rm err_J}$ agrees with the one already observed for Ising spins in \cite{Nguyen12b} and for XY spins in \cite{Tyagi15} with a mean-field approach: ${\rm err_J}$ displays a minimum around $T\simeq 1$ and then it increases as $T$ is lowered further; however,
the error obtained with the PLM with decimation is several times smaller than the error estimated by the other methods.
\section{Conclusions}
\label{sec:conc}
Different statistical inference methods have been applied to the inverse problem of the XY model.
After a short review of techniques based on pseudo-likelihood and their formal generalization to the model we have tested their performances against data generated by means of Monte Carlo numerical simulations of known instances
with diluted, sparse, interactions.
The main outcome is that the best performances are obtained by means of the pseudo-likelihood method combined with decimation. Putting to zero (i.e., decimating) very weak bonds, this technique turns out to be very precise for problems whose real underlying interaction network is sparse, i.e., when the number of couplings per variable does not scale with the number of variables.
The PLM + decimation method is compared to the PLM + regularization method, with $\ell_2$ regularization, and to a mean-field-based method. The quality of the network reconstruction is analyzed by looking at the overall sorted couplings and at the single-site couplings, comparing them with the real network, and at the true positive curves in all three approaches. In the PLM + decimation method, moreover, the identification of the number of decimated bonds at which the tilted pseudo-likelihood is maximum allows for a precise estimate of the total number of bonds. Concerning this technique, it is also shown that the network with the most likely number of bonds is also the one with the smallest reconstruction error, where not only the presence of a bond is predicted but also its value.
The behavior of the inference quality in temperature and in the size of the data samples is also investigated, basically confirming the low-$T$ behavior hinted at by Nguyen and Berg \cite{Nguyen12b} for the Ising model. In temperature, in particular, the reconstruction error curve displays a minimum at a low temperature, close to the critical point in those cases in which a critical behavior occurs, and a sharp increase as the temperature goes to zero. The decimation method, once again, appears to reduce this minimum of the reconstruction error by almost an order of magnitude with respect to the other methods.
The techniques displayed and the results obtained in this work can be of use in any of the many systems whose theoretical representation is given by Eq. \eqref{eq:HXY} or Eq. \eqref{eq:h_im}, some of which are recalled in Sec. \ref{sec:model}. In particular, a possible application can be the field of light waves propagation through random media and the corresponding problem of the reconstruction of an object seen through an opaque medium or a disordered optical fiber \cite{Vellekoop07,Vellekoop08a,Vellekoop08b, Popoff10a,Akbulut11,Popoff11,Yilmaz13,Riboli14}.
\section{Introduction}
Model Predictive Control~(MPC) is widely known as an advanced control technique for nonlinear systems that can handle time-varying references with preview information as well as constraints. In the vast body of literature, standard MPC formulations penalize deviations from a set point (typically the origin) or a (feasible) reference trajectory, while providing stability and recursive feasibility guarantees for such settings~\cite{Mayne2000,rawlings2009model,borrelli2017predictive,Grune2011}.
In practice, it is often difficult or cumbersome to pre-compute reference trajectories which are feasible, i.e., which both satisfy path constraints and evolve according to the system dynamics. It is therefore appealing to use reference trajectories (or reference paths) that are simple to compute and only satisfy path constraints, but not necessarily the system dynamics. However, using such trajectories introduces difficulties in providing stability guarantees~\cite{rawlings2012fundamentals}.
This paper aims at (partially) filling a gap between practice and theory that exists for MPC formulations with infeasible references. Previous work~\cite{Rawlings2008a} has considered infeasible set points, and shown that stabilization to the closest feasible set point can be guaranteed. However, in the case of time-varying references, this analysis does not directly apply. To that end, we propose to use an Input-to-State Stability~(ISS) approach instead.
Our contribution in this paper is twofold. First, we prove that MPC formulations using an infeasible reference can actually stabilize the system towards an optimal trajectory, subject to specific terminal conditions. However, selecting such terminal conditions in general requires that an optimization problem is solved beforehand to compute a feasible reference. Consequently, if this step is performed, one has no reason to provide an infeasible reference to the MPC controller. Therefore, in a second step, we extend this first result to construct an ISS analysis showing that, if sub-optimal terminal conditions are chosen, the closed-loop system is stabilized around a neighborhood of the optimal trajectory.
This paper is structured as follows. In Section~\ref{sec:mpftc} we introduce the MPC tracking problem, while in Section~\ref{sec:economic_mpc} we prove that, while having an ideal setting in mind, one can design an MPC formulation that stabilizes the closed-loop system to an optimal feasible reference, even if an infeasible reference is used, at the price of defining suitable terminal conditions. Then, in Section~\ref{sec:iss} we further extend the results by proving ISS for practical settings, i.e., in case the terminal conditions are based on the infeasible reference. Finally, in Section~\ref{sec:simulations} we illustrate the derived theory with a numerical example, and draw conclusions in Section~\ref{sec:conclusions}.
\section{Preliminaries}\label{sec:mpftc}
Consider a discrete-time Linear Time-Varying~(LTV) system
\begin{equation}\label{eq:sys}
{\mathbf{x}}_{k+1}=f_k({\mathbf{x}}_k,\u_k)=A_k {\mathbf{x}}_k + B_k \u_k,
\end{equation}
where ${\mathbf{x}}_k\in\mathbb{R}^{n_{\mathbf{x}}}$ and $\u_k\in\mathbb{R}^{n_\u}$ are the state and input vectors at time $k$, and the matrices $A_k\in\mathbb{R}^{n_{\mathbf{x}}\times n_{\mathbf{x}}}$ and $B_k\in\mathbb{R}^{n_{\mathbf{x}}\times n_\u}$ are time-varying. While we only consider LTV systems in this paper, we comment in Remark~\ref{rem:nl_sys} on how the results could be extended to general nonlinear systems. The state and inputs are subject to constraints $h({\mathbf{x}},\u):\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}^{n_\u}\rightarrow\mathbb{R}^{n_h}$, where the inequality $h({\mathbf{x}},\u)\leq 0$ is defined element-wise. The constraint $h({\mathbf{x}},\u)$ models, e.g., regions of the state space which should be avoided, and actuator limitations. Our objective is to control the system such that the state ${\mathbf{x}}_k$ tracks a user-provided parameterized reference trajectory $\r(t)=(\r^{\mathbf{x}}(t),\r^\u(t))$ as closely as possible. We assume that the reference trajectory is parameterized with time parameter $t$, with natural dynamics
\begin{equation}
\label{eq:tau_controlled}
t_{k+1} = t_k + t_\mathrm{s},
\end{equation}
where $t_\mathrm{s}=1$ for discrete-time systems, or the sampling time for sampled-data systems. Throughout the remainder of the paper, we will refer to any time dependence of the reference using notation $(\r^{\mathbf{x}}_k,\r^\u_k):=(\r^{\mathbf{x}}(t_k),\r^\u(t_k))$.
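As a concrete illustration (ours, not taken from the paper), consider a discretized double integrator tracked along a purely geometric reference that satisfies simple path constraints but not the dynamics:
\begin{verbatim}
import numpy as np

# Hypothetical example: double integrator (position, velocity) with sampling time ts.
ts = 0.1
A = np.array([[1.0, ts], [0.0, 1.0]])
B = np.array([[0.5 * ts**2], [ts]])

def reference(t):
    """Piecewise-constant position reference r^x(t) with r^u(t) = 0: it satisfies
    simple state/input bounds but violates x_{k+1} = A x_k + B u_k at the jumps,
    i.e., it is an infeasible reference in the sense discussed above."""
    pos = 1.0 if t < 5.0 else -1.0
    return np.array([pos, 0.0]), np.array([0.0])
\end{verbatim}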
In order to track the reference $(\r^{\mathbf{x}}_k,\r^\u_k)$, we formulate the tracking MPC problem as
\begin{subequations}
\label{eq:nmpc}
\begin{align}
\begin{split}\hspace{-0.5em}V({\mathbf{x}}_k,t_k):=&\min_{\substack{{\x}},\substack{{\u}}} \sum_{n=k}^{k+N-1}
q_\r(\xb,\ub,t_n)\\
&\hspace{3.5em}+p_\r(\xb[k+N],t_{k+N})\hspace{-2em}
\end{split}\label{eq:nmpc_cost}\\
\text{s.t.}\ &\xb[k][k] = {\mathbf{x}}_{k},\label{eq:nmpcState} \\
&\xb[n+1] = f_n(\xb,\ub),\label{eq:nmpcDynamics} & \hspace{-1em}n\in \mathbb{I}_k^{k+N-1},\\
&h(\xb,\ub) \leq{} 0, \label{eq:nmpcInequality_known}& \hspace{-1em}n\in \mathbb{I}_k^{k+N-1},\\
&\xb[k+N] \in\mathcal{X}^\mathrm{f}_\r(t_{k+N})\label{eq:nmpcTerminal},
\end{align}
\end{subequations}
where $k$ is the current time instance and $N$ is the prediction horizon. In tracking MPC, typical choices for the stage and terminal costs are
\begin{align}
q_\r(\xb,\ub,t_n) &:= \matr{c}{\xb-\rx_n\\\ub-\ru_n}^\top{}\hspace{-0.7em}W\matr{c}{\xb-\rx_n\\\ub-\ru_n},\label{eq:stage_cost}\\
p_\r(\xb,t_n) &:= (\xb-\rx_{n})^\top{}P(\xb-\rx_{n}),\label{eq:terminal_cost}
\end{align}
where $W\in\mathbb{R}^{(n_{\mathbf{x}}+n_\u) \times (n_{\mathbf{x}}+n_\u)}$ and $P\in\mathbb{R}^{n_{\mathbf{x}}\times n_{\mathbf{x}}}$ are symmetric positive-definite matrices. For the sake of simplicity, and to avoid further technicalities, we do not consider more general costs. The predicted states and controls at the prediction time $n$, given the state at the current time $k$, are defined as $\xb$ and $\ub$, respectively.
The initial condition is enforced by constraint \eqref{eq:nmpcState}, and constraint \eqref{eq:nmpcDynamics} enforces the system dynamics. Constraint \eqref{eq:nmpcInequality_known} denotes constraints, e.g., state and control limits, and constraint \eqref{eq:nmpcTerminal} defines a terminal set containing the reference $\r$. Note that, differently from standard formulations, the terminal constraint depends on the time parameter $t_{k+N}$ relative to the reference.
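To make the formulation concrete, the sketch below sets up one instance of Problem~\eqref{eq:nmpc} for an LTV system using generic convex-optimization tooling (our illustration; the box input bound and the terminal ball are placeholders for $h({\mathbf{x}},\u)\leq 0$ and $\mathcal{X}^\mathrm{f}_\r$, not the choices analyzed in the paper):
\begin{verbatim}
import cvxpy as cp
import numpy as np

def tracking_mpc_step(x_k, A_seq, B_seq, r_x, r_u, W, P, N):
    """One receding-horizon step for the LTV tracking problem (eq:nmpc).
    A_seq, B_seq: lists of N matrices (A_n, B_n); r_x: N+1 reference states;
    r_u: N reference inputs; W, P: stage and terminal weight matrices."""
    nx, nu = B_seq[0].shape
    x = cp.Variable((N + 1, nx))
    u = cp.Variable((N, nu))
    cost, constr = 0, [x[0] == x_k]                      # initial condition
    for n in range(N):
        z = cp.hstack([x[n] - r_x[n], u[n] - r_u[n]])
        cost += cp.quad_form(z, W)                       # stage cost q_r
        constr += [x[n + 1] == A_seq[n] @ x[n] + B_seq[n] @ u[n]]   # dynamics
        constr += [cp.norm(u[n], 'inf') <= 1.0]          # placeholder for h(x,u) <= 0
    cost += cp.quad_form(x[N] - r_x[N], P)               # terminal cost p_r
    constr += [cp.norm(x[N] - r_x[N], 2) <= 0.1]         # placeholder terminal set
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[0].value                                    # applied in receding horizon
\end{verbatim}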
In the following, we first recall the standard stability properties of tracking MPC. Then, in Sections~\ref{sec:economic_mpc} and~\ref{sec:iss} we will derive
Input-to-State Stability~(ISS) results when the parameterized reference trajectory is not feasible with respect to the system dynamics.
In order to prove stability, we introduce the following standard assumptions, see, e.g.,~\cite{rawlings2009model,Grune2011}.
\begin{Assumption}[System and cost regularity]\label{a:cont}
The system model $f$ is continuous, and the stage cost $q_\r:\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}^{n_\u}\times\mathbb{R}\rightarrow\mathbb{R}_{\geq{}0}$, and terminal cost $p_\r:\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}\rightarrow\mathbb{R}_{\geq{}0}$, are continuous at the origin and satisfy $q_\r(\rx_k,\ru_k,t_k)=0$, and $p_\r(\rx_k,t_k)=0$. Additionally, $q_\r({\x}_k,{\u}_k,t_k)\geq{}\alpha_1(\|{\x}_k-\rx_k\|)$ for all feasible ${\mathbf{x}}_k$, $\u_k$, and $p_\r({\x}_k,t_k)\leq\alpha_2(\|{\x}_k-\rx_k\|)$, where $\alpha_1$ and $\alpha_2$ are $\mathcal{K}_\infty$-functions.
\end{Assumption}
\begin{Assumption}[Reference feasibility] \label{a:rec_ref}
The reference trajectory satisfies the system dynamics~\eqref{eq:nmpcDynamics} and the system constraints~\eqref{eq:nmpcInequality_known}, i.e., $\r^{\mathbf{x}}_{k+1}=f_k(\r^{\mathbf{x}}_k,\r^\u_k)$ and $h(\r^{\mathbf{x}}_k,\r^\u_k) \leq{} 0$, $\forall{}k\in\mathbb{I}_0^\infty$.
\end{Assumption}
\begin{Assumption}[Stabilizing Terminal Conditions] \label{a:terminal}
There exists a parametric stabilizing terminal set $\mathcal{X}^\mathrm{f}_\r(t)$ and a terminal control law $\kappa^\mathrm{f}_\r({\mathbf{x}},t)$ yielding:
\begin{align*}
\mathbf{x}_{+}^\kappa=f_k(\mathbf{x}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t)), && t_+ = t_k + t_\mathrm{s},
\end{align*}
such that
$p_\r({\mathbf{x}}_{+}^\kappa,t_{+}) - p_\r({\mathbf{x}}_k,t_k) \leq{} - q_\r({\mathbf{x}}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t_k),t_k)$,
${\mathbf{x}}_k\in\mathcal{X}^\mathrm{f}_\r(t_k)\Rightarrow {\mathbf{x}}^\kappa_{+}\in\mathcal{X}^\mathrm{f}_\r(t_{+})$, and $h({\mathbf{x}}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t_k)) \leq{} 0$ hold for all $k\in\mathbb{I}_0^\infty$.
\end{Assumption}
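For a time-invariant pair $(A,B)$, a common (though by no means the only) way to satisfy the decrease condition in Assumption~\ref{a:terminal} is to take $p_\r$ as the unconstrained LQR value function and $\kappa^\mathrm{f}_\r$ as the LQR feedback around the reference; the sketch below (ours, assuming $W$ block-diagonal with state block $Q$ and input block $R$) computes these ingredients, while constraint satisfaction inside the terminal set must still be verified separately:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_terminal_ingredients(A, B, Q, R):
    """Terminal cost matrix P and feedback gain K for a time-invariant (A, B):
    with P solving the discrete-time Riccati equation and u = K x (in deviation
    coordinates from the reference), x'Px decreases by exactly x'Qx + u'Ru."""
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K
\end{verbatim}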
\begin{Proposition} [Nominal Asymptotic Stability]\label{prop:stab_feas}
{Suppose that Assumptions \ref{a:cont}, \ref{a:rec_ref}, and \ref{a:terminal} hold,
and that the initial state $({\mathbf{x}}_k,t_k)$ at time $k$ belongs to the feasible set of Problem \eqref{eq:nmpc}. Then the system \eqref{eq:sys} in closed-loop with the solution of~\eqref{eq:nmpc} applied in receding horizon is an asymptotically stable system.} \label{prop:stable}
\begin{proof}
See the standard proof in, e.g., \cite{rawlings2009model,borrelli2017predictive}.
\end{proof}
\end{Proposition}
Proposition~\ref{prop:stab_feas} recalls the known stability results from the existing literature, which apply to tracking MPC schemes. The resulting design procedure for asymptotically stable tracking MPC is indeed complicated by the task of precomputing a feasible reference trajectory $(\r^{\mathbf{x}}_k,\r^\u_k)$ that satisfies Assumption~\ref{a:rec_ref}. However, in practice, it may be convenient to use a reference trajectory that is infeasible w.r.t. the system dynamics, yet simpler to define. While in standard MPC settings the stability with respect to an unreachable set point has been studied in~\cite{Rawlings2008a}, the approach therein applies to time-invariant infeasible references. In order to overcome such a limitation, we consider a setting where the reference can be time-varying and does not need to satisfy Assumption~\ref{a:rec_ref}, and the terminal conditions \eqref{eq:nmpcTerminal} do not need to hold at the reference trajectory, but in a neighborhood. While the results proposed in this paper are developed for a standard MPC formulation, we point out that they hold in other settings as well, including Model Predictive path Following Control~(MPFC)~\cite{Faulwasser2016} or Model Predictive Flexible trajectory Tracking Control~(MPFTC)~\cite{batkovic2020safe}.
\section{Optimal Feasible Reference}
Consider the optimal state and input trajectories obtained as the solution of the optimal control problem (OCP)
\begin{subequations}
\label{eq:ocp}
\begin{align}\begin{split}
\hspace{-1em}({\x}^{\mathrm{r}},{\u}^{\mathrm{r}})\hspace{-.25em}:=\hspace{-.5em}\lim_{M\rightarrow\infty}\hspace{-0.3em}
\arg&\min_{{\boldsymbol{\xi}},{\boldsymbol{\nu}}} \sum_{n=0}^{M-1}
\hspace{-.3em}q_\r({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n)\hspace{-0.1em}+\hspace{-0.1em}p_\r({\boldsymbol{\xi}}_M,t_M)\label{eq:ocp_cost} \hspace{-20em}\end{split}\\
\text{s.t.}\ &{\boldsymbol{\xi}}_0={\mathbf{x}}_{0}, \label{eq:ocpState} &\\
&{\boldsymbol{\xi}}_{n+1} = f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n),\label{eq:ocpDynamics} & n\in \mathbb{I}_{0}^{M-1},\\
&h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) \leq{} 0, \label{eq:ocpInequality_known}& \hspace{-1em}n\in \mathbb{I}_0^{M-1},
\end{align}
\end{subequations}
with the corresponding value function
\begin{equation}
V^\mathrm{O}({\mathbf{x}}_k,t_k) := \lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u_n^{\mathrm{r}},t_n)+p_\r({\mathbf{x}}_M^{\mathrm{r}},t_M)
\end{equation}
The terminal cost in~\eqref{eq:ocp_cost} and the initial state constraint~\eqref{eq:ocpState} can in principle be omitted or formulated otherwise, e.g., the terminal cost can be replaced with a terminal constraint; we include them in the formulation since they often take this form. We use here the same stage cost as in~\eqref{eq:nmpc_cost} and assume it is positive-definite. We exclude positive semi-definite costs solely for the sake of simplicity.
We define the Lagrangian of the OCP~\eqref{eq:ocp} as
\begin{align*}
\mathcal{L}^\mathrm{O}({\boldsymbol{\xi}}, {\boldsymbol{\nu}}, {\boldsymbol{\lambda}},{\boldsymbol{\mu}},\mathbf{t}) &= {\boldsymbol{\lambda}}_0^\top ({\boldsymbol{\xi}}_0 - {\mathbf{x}}_{0}) +p_\r({\boldsymbol{\xi}}_M,t_M)\\
&+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}
q_\r({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) +{\boldsymbol{\mu}}_n^\top h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\\
&+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} {\boldsymbol{\lambda}}_{n+1}^\top ({\boldsymbol{\xi}}_{n+1} - f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)),
\end{align*}
and denote the optimal multipliers as ${\boldsymbol{\lambda}}^{\mathrm{r}},{\boldsymbol{\mu}}^{\mathrm{r}}$, and the solution of~\eqref{eq:ocp} as ${\mathbf{y}}^{\mathrm{r}}:=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$.
Hereafter, we will refer to the reference ${\mathbf{y}}^{\mathrm{r}}$ as the \emph{feasible reference}, as it satisfies Assumption~\ref{a:rec_ref}.
\begin{Remark}
Note that Problem~\eqref{eq:ocp} is formulated as an infinite horizon OCP since a reference could be defined over an infinite time horizon. For instance, a stationary reference can be viewed as being infinitely long as it remains at the same point at all times.
\end{Remark}
In the following, we will prove the stability of \eqref{eq:sys} w.r.t. ${\mathbf{y}}^{\mathrm{r}}$ by relying on the trajectories~${\mathbf{y}}^{\mathrm{r}}$ and ${\boldsymbol{\lambda}}^{\mathrm{r}}$ from~\eqref{eq:ocp}, where ${\mathbf{y}}^{\mathrm{r}}$ is used as an auxiliary reference.
Our analysis will proceed as follows. We will first discuss an ideal case in which the terminal conditions are constructed based on ${\mathbf{y}}^{\mathrm{r}}$. By exploiting ideas from economic MPC we will prove that asymptotic stability can be obtained in that case. Since our objective is to avoid using any information on ${\mathbf{y}}^{\mathrm{r}}$, we will then turn to the realistic MPC formulation~\eqref{eq:nmpc}, and we will prove ISS.
\subsection{Ideal MPC and Asymptotic Stability} \label{sec:economic_mpc}
Our analysis builds on tools that are used in the stability analysis of economic MPC schemes. The interested reader is referred to the following most relevant publications related to our analysis~\cite{Diehl2011,Amrit2011a,Zanon2018a,Faulwasser2018}.
Economic and tracking MPC schemes differ in the cost function, which satisfies
\begin{align}
\begin{split}
\label{eq:tracking_cost}
q_\r({\mathbf{x}}^{\mathrm{r}}_k,\u^{\mathrm{r}}_k,t_k) =0,\ &q_\r({\mathbf{x}}_k,\u_k,t_k) >0,\\
&\forall \ ({\mathbf{x}}_k,\u_k)\neq{}({\x}^{\mathrm{r}}_k,{\u}^{\mathrm{r}}_k),
\end{split}
\end{align}
in tracking schemes but not in economic ones. Note that~\eqref{eq:tracking_cost} can only hold if $\r={\mathbf{y}}^{\mathrm{r}}$, that is, if Assumption~\ref{a:rec_ref} holds.
Consequently, even if the cost is positive-definite, any MPC scheme formulated with an infeasible reference $\r$ is an economic MPC.
We refer to~\cite{Zanon2018a,Faulwasser2018} for a detailed discussion on the topic.
On the contrary, if ${\mathbf{y}}^{\mathrm{r}}$ is used as reference, we obtain the tracking stage cost $q_{{\mathbf{y}}^{\mathrm{r}}}$. Since precomputing a feasible reference ${\mathbf{y}}^{\mathrm{r}}$ can be impractical or involved, we focus next on the case of \emph{infeasible references}.
In order to construct a tracking cost from the economic one, we use the Lagrange multipliers ${\boldsymbol{\lambda}}^{\mathrm{r}}$ of the OCP~\eqref{eq:ocp} defined above to construct a \emph{rotated} problem, which has the same constraints as the original MPC problem~\eqref{eq:nmpc} and the following \emph{rotated stage and terminal costs}
\begin{align*}
&\bar q_\r(\xb,\ub,t_n):=q_\r(\xb,\ub,t_n)-q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n,t_n)\\
&\hspace{1em}+ {\boldsymbol{\lambda}}_n^{\mathrm{r}\top}(\xb[n][k]-{\mathbf{x}}^{\mathrm{r}}_n)- {\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n+1} (f_n(\xb[n][k],\ub[n][k])-f_n({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)), \\
&\bar{p}_\r(\xb,t_n):= p_\r(\xb,t_n)-p_\r({\mathbf{x}}_{n}^{\mathrm{r}},t_n)+{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n}(\xb-{\mathbf{x}}^{\mathrm{r}}_{n}).
\end{align*}
As we prove in the following Lemma~\ref{lem:rot_ocp}, adopting the rotated stage cost $\bar q_\r$ and terminal cost $\bar p_\r$ in the OCP~\eqref{eq:ocp} does not change its primal solution. Such property of the rotated costs will be exploited next in the formulation of the \emph{ideal} MPC problem.
\begin{Lemma}
\label{lem:rot_ocp}
If OCP~\eqref{eq:ocp} is formulated using the rotated cost instead of the original one, then the Second Order Sufficient optimality Conditions (SOSC) are satisfied~\cite{Nocedal2006}, and the following claims hold:
\begin{enumerate}
\item[i)] the primal solution is unchanged;
\item[ii)] the rotated cost penalizes deviations from the optimal solution of Problem~\eqref{eq:ocp}, i.e.,
\begin{align*}
\bar q_\r({\mathbf{x}}_n^{\mathrm{r}},\u_n^{\mathrm{r}},t_n) =0,\ \bar q_\r({\mathbf{x}}_n,\u_n,t_n)>0,
\end{align*}
for all $({\mathbf{x}}_n,\u_n) \neq ({\x}_n^{\mathrm{r}},{\u}_n^{\mathrm{r}})$ satisfying $h({\mathbf{x}}_n,\u_n) \leq 0$.
\end{enumerate}
\end{Lemma}
\begin{proof}
First, we prove that if Problem~\eqref{eq:ocp} is formulated using stage cost $\bar q_\r$ and terminal cost $\bar p_\r$ instead of $q_\r$ and $p_\r$, the primal solution remains unchanged.
This is a known result from the literature on economic MPC and is based on the observation that all terms involving ${\boldsymbol{\lambda}}^\mathrm{r}$ in the rotated cost form a telescopic sum and cancel out, such that only ${{\boldsymbol{\lambda}}_0^\mathrm{r}}^\top ({\boldsymbol{\xi}}_0-{\mathbf{x}}_0^\mathrm{r})$ remains. Since the initial state is fixed, the cost only differs by a constant term and the primal solution is unchanged. The cost $\bar q_\r$ being nonnegative is a consequence of the fact that the stage cost Hessian is positive definite by Assumption \ref{a:cont}, the system dynamics are LTV, and the Lagrange multipliers $\bar {\boldsymbol{\lambda}}$ associated with Problem~\eqref{eq:ocp} using cost $\bar q_\r$ are $0$.
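For completeness, the telescoping argument can be made explicit. Along any trajectory satisfying the dynamics ${\boldsymbol{\xi}}_{n+1}=f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)$, and recalling that the reference solution satisfies ${\mathbf{x}}^{\mathrm{r}}_{n+1}=f_n({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)$, the multiplier terms introduced by the rotation satisfy
\begin{align*}
\sum_{n=0}^{M-1}&\Big[{\boldsymbol{\lambda}}_n^{{\mathrm{r}}\top}({\boldsymbol{\xi}}_n-{\mathbf{x}}^{\mathrm{r}}_n)-{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n+1}\big(f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)-f_n({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)\big)\Big]\\
&={\boldsymbol{\lambda}}_0^{{\mathrm{r}}\top}({\boldsymbol{\xi}}_0-{\mathbf{x}}^{\mathrm{r}}_0)-{\boldsymbol{\lambda}}_M^{{\mathrm{r}}\top}({\boldsymbol{\xi}}_M-{\mathbf{x}}^{\mathrm{r}}_M),
\end{align*}
where the last term is exactly compensated by the multiplier term added to the terminal cost in $\bar p_\r$, so that the rotated and original objectives differ only by terms that are constant once the initial state is fixed.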
To prove the second claim, we define the Lagrangian of the rotated problem as
\begin{align*}
\mathcal{\bar L}^\mathrm{O}({\boldsymbol{\xi}}, {\boldsymbol{\nu}}, \bar {\boldsymbol{\lambda}},\bar {\boldsymbol{\mu}},\mathbf{t})
= \ & \bar{{\boldsymbol{\lambda}}}_0^\top ({\boldsymbol{\xi}}_0 - {\mathbf{x}}_{0}) + \bar p_\r ({\boldsymbol{\xi}}_M,t_M)\\
&\hspace{-2em}+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}
\bar{q}_\mathrm{\r}({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) + \bar {\boldsymbol{\mu}}_n^\top h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\\
&\hspace{-2em}+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} \bar{{\boldsymbol{\lambda}}}_{n+1}^\top ( {\boldsymbol{\xi}}_{n+1} - f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) ).
\end{align*}
For compactness we denote next $\nabla_n:=\nabla_{({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)}$. Since by construction $\nabla_n \bar q_\mathrm{\r}=\nabla_n \mathcal{L}^\mathrm{O} - \nabla_n {\boldsymbol{\mu}}_n^{{\mathrm{r}}\top} h $, we obtain
\begin{align*}
\nabla_n \mathcal{\bar L}^\mathrm{O} &= \nabla_n \bar q_\mathrm{\r} + \matr{c}{\bar {\boldsymbol{\lambda}}_n \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n \bar {\boldsymbol{\mu}}_n^\top h \\
&\hspace{-1.2em}= \nabla_n \mathcal{L}^\mathrm{O} + \matr{c}{\bar {\boldsymbol{\lambda}}_{n} \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n (\bar {\boldsymbol{\mu}}_n-{\boldsymbol{\mu}}_n^\mathrm{r})^\top h.
\end{align*}
Therefore, the KKT conditions of the rotated problem are solved by the same primal variables as the original problem and $\bar {\boldsymbol{\mu}}_n = {\boldsymbol{\mu}}_n^\mathrm{r}$, $\bar {\boldsymbol{\lambda}}_n=0$. With similar steps we show that $\bar{\boldsymbol{\lambda}}_M=0$, since $\nabla_M\bar{p}_\r=\nabla_M \mathcal{L}^\mathrm{O}$.
Because the system dynamics are LTV and the stage cost is quadratic, we have that
$\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$.
Moreover, since the solution satisfies the SOSC,
we directly have that $\bar q_\mathrm{\r}({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r},t_n) =0$ and $\bar q_\mathrm{\r}({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) > 0$ for all $({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\neq({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r})$ s.t. $h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) \leq 0$.
\end{proof}
\begin{Remark}
\label{rem:nl_sys}
The only reason our result is limited to LTV systems is that linearity of the dynamics entails $\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$. It seems plausible that this limitation could be overcome by assuming that OCP~\eqref{eq:ocp} satisfies the SOSC for all initial states at all times. However, because further technicalities would be necessary to obtain the proof, we leave this investigation for future research.
\end{Remark}
\begin{Corollary} The rotated value function of OCP~\eqref{eq:ocp}, i.e.,
\begin{align*}
\bar V^\mathrm{O}({\mathbf{x}}_k,t_k) &=\ V^\mathrm{O}({\mathbf{x}}_k,t_k) + {\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_k ({\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k)\\
&-\lim_{M\rightarrow\infty}\sum_{n=k}^{k+M-1}q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n,t_n)-p_\r({\mathbf{x}}^{\mathrm{r}}_{k+M},t_{k+M}),
\end{align*}
is positive definite, and its minimum is $\bar V^\mathrm{O}({\x}_k^\mathrm{r},t_k)=0$.
\end{Corollary}
\begin{proof}
We note from the proof of Lemma~\ref{lem:rot_ocp} that the rotated stage and terminal costs are positive definite and vanish at the feasible reference $({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)$; hence, the rotated value function is also positive definite and zero at ${\mathbf{x}}^{\mathrm{r}}_k$.
\end{proof}
While Proposition~\ref{prop:stab_feas} proves the stability of system~\eqref{eq:sys} in closed loop with the solution of~\eqref{eq:nmpc} under Assumptions~\ref{a:rec_ref} and~\ref{a:terminal}, in Theorem~\ref{thm:as_stab_0} we will prove stability in case the reference trajectory does not satisfy Assumption~\ref{a:rec_ref}. The stability proof in Theorem~\ref{thm:as_stab_0}
builds on the following \emph{ideal} formulation
\begin{subequations}
\label{eq:ideal_nmpc}
\begin{align}
\begin{split}V^\mathrm{i}({\mathbf{x}}_k,t_k) = \min_{{\x},{\u}}&\sum_{n=k}^{k+N-1} q_\r(\xb,\ub,t_n) \\
&\hspace{2em}+p_{\tilde{\mathbf{y}}^{\mathrm{r}}}(\xb[k+N],t_{k+N})
\end{split} \\
\mathrm{s.t.} \ \ &\eqref{eq:nmpcState}-\eqref{eq:nmpcInequality_known}, \ \xb[k+N] \in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_{k+N}),\label{eq:ideal_nmpc_terminal}
\end{align}
\end{subequations}
where
\begin{align}\label{eq:minimizer_tilde_yr}
\tilde{\mathbf{y}}^{\mathrm{r}}_k &:= \arg\min_{{\mathbf{x}}} p_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}},t_k)-{\boldsymbol{\lambda}}_k^{{\mathrm{r}}\top}({\mathbf{x}}-{\mathbf{x}}^{\mathrm{r}}_k).
\end{align}
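Since the terminal cost is quadratic, cf.~\eqref{eq:terminal_cost}, the minimizer in~\eqref{eq:minimizer_tilde_yr} can be written in closed form; assuming $p_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}},t_k)=({\mathbf{x}}-{\mathbf{x}}^{\mathrm{r}}_k)^\top P ({\mathbf{x}}-{\mathbf{x}}^{\mathrm{r}}_k)$ with $P\succ0$, the first-order optimality condition yields
\begin{equation*}
2P({\tilde\y}^{\mathrm{r}}_k-{\mathbf{x}}^{\mathrm{r}}_k)={\boldsymbol{\lambda}}^{\mathrm{r}}_k \quad\Longleftrightarrow\quad {\tilde\y}^{\mathrm{r}}_k={\mathbf{x}}^{\mathrm{r}}_k+\tfrac{1}{2}P^{-1}{\boldsymbol{\lambda}}^{\mathrm{r}}_k,
\end{equation*}
i.e., the ideal terminal reference is the feasible reference shifted along the direction indicated by the OCP multiplier.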
The Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} only differ in the terminal cost and constraint: in~\eqref{eq:ideal_nmpc} they are written with respect to the solution ${\mathbf{y}}^{\mathrm{r}}$ and ${\boldsymbol{\lambda}}^{\mathrm{r}}$ of~\eqref{eq:ocp} rather than~$\r$. In order to distinguish the solutions of~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc}, we denote the solution of~\eqref{eq:nmpc} by ${\mathbf{x}}^\star$, $\u^\star$, and the solution of~\eqref{eq:ideal_nmpc} by ${\mathbf{x}}^\mathrm{i}$, $\u^\mathrm{i}$. In addition, when the stage cost $\bar{q}_\r$ and terminal cost $\bar p_{{\tilde\y}^{\mathrm{r}}}$ are used, we obtain the corresponding \emph{rotated} formulation of~\eqref{eq:ideal_nmpc}
\begin{align}
\label{eq:ideal_rot_nmpc}
\begin{split}\bar V^\mathrm{i}({\mathbf{x}}_k,t_k) = \min_{{\mathbf{x}},\u} &\sum_{n=k}^{k+N-1} \bar q_\r(\xb,\ub,t_n) \\
&\hspace{2em}+ \bar p_{\tilde{\mathbf{y}}^\mathrm{r}}(\xb[k+N],t_{k+N})
\end{split} \\
\mathrm{s.t.}\hspace{0em} \ \ &\eqref{eq:nmpcState}-\eqref{eq:nmpcInequality_known}, \ \xb[k+N] \in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{r}}(t_{k+N}),\nonumber
\end{align}
where the rotated terminal cost is defined as
\begin{align}\begin{split}\label{eq:rot_tilde_terminal_cost}
\bar{p}_{{\tilde\y}^{\mathrm{r}}}(\xb,t_n)&:= p_{{\tilde\y}^{\mathrm{r}}}(\xb,t_n)-p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_{n}^{\mathrm{r}},t_n)\\
&+{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n}(\xb-{\mathbf{x}}^{\mathrm{r}}_{n}).\end{split}
\end{align}
Note that by Lemma~\ref{lem:rot_ocp}, the rotated cost $\bar q_\r$ penalizes deviations from ${\mathbf{y}}^{\mathrm{r}}$, i.e., the solution to \eqref{eq:ocp}. We will prove next that $\bar p_{{\tilde\y}^\r}$ also penalizes deviations from ${\mathbf{y}}^\r$, implying that \emph{the rotated ideal MPC formulation is of tracking type}.
\begin{Lemma}
\label{lem:rot_mpc}
Consider the \emph{rotated} \emph{ideal} MPC Problem~\eqref{eq:ideal_rot_nmpc}, formulated using the rotated costs $\bar q_\r$ and $\bar{p}_{{\tilde\y}^\r}$, and the terminal set $\mathcal{X}_{{\mathbf{y}}^\r}^\mathrm{f}$. Then, the primal solution of~\eqref{eq:ideal_rot_nmpc} coincides with the primal solution of the ideal MPC Problem~\eqref{eq:ideal_nmpc}.
\end{Lemma}
\begin{proof}
From~\eqref{eq:minimizer_tilde_yr} and~\eqref{eq:rot_tilde_terminal_cost} we have that $\bar p_{{\tilde\y}^\r}({\mathbf{x}}_k^{\mathrm{r}},t_k) =0$ and that
$\nabla \bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}^{\mathrm{r}}_k,t_k) = \nabla p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}^{\mathrm{r}}_k,t_k) + \nabla p_{{\mathbf{y}}^{\mathrm{r}}}({\tilde\y}^{\mathrm{r}}_k,t_k) = 0$, since the terminal costs are quadratic~\eqref{eq:terminal_cost}. The proof then follows along the same lines as Lemma~\ref{lem:rot_ocp} and~\cite{Diehl2011,Amrit2011a}.
\end{proof}
In order to prove Theorem~\ref{thm:as_stab_0}, we need that the terminal conditions of the rotated ideal formulation~\eqref{eq:ideal_rot_nmpc} satisfy Assumption~\ref{a:terminal}. To that end, we introduce the following assumption.
\begin{Assumption}\label{a:terminal_for_rotated}
There exists a parametric stabilizing terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t)$ and a terminal control law $\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}},t)$ yielding:
\begin{align*}
\mathbf{x}_{+}^\kappa=f_k(\mathbf{x}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t)), && t_+ = t_k + t_\mathrm{s},
\end{align*}
so that
$\bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_{+}^\kappa,t_{+})- \bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_k,t_k) \leq{} - \bar q_\r({\mathbf{x}}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t_k),t_k)$, ${\mathbf{x}}_k\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_k)\Rightarrow {\mathbf{x}}^\kappa_{+}\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_{+})$, and $h({\mathbf{x}}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t_k)) \leq{} 0$ hold for all $k\in\mathbb{I}_0^\infty$.
\end{Assumption}
Note that Assumption~\ref{a:terminal_for_rotated} only differs from Assumption~\ref{a:terminal} by the fact that the set and control law are centered on ${\mathbf{y}}^{\mathrm{r}}$ rather than $\r$, and that the costs are rotated.
\begin{Theorem}
\label{thm:as_stab_0}
Suppose that Assumptions \ref{a:cont} and~\ref{a:terminal_for_rotated} hold, and that Problem~\eqref{eq:ocp} is feasible for initial state $({\mathbf{x}}_k,t_k)$. Then, system~\eqref{eq:sys} in closed-loop with the ideal MPC~\eqref{eq:ideal_nmpc} is asymptotically stabilized to the optimal trajectory ${\x}^{\mathrm{r}}$.
\end{Theorem}
\begin{proof}
By Lemma~\ref{lem:rot_mpc}, the rotated ideal MPC problem has positive-definite stage and terminal costs penalizing deviations from the optimal trajectory ${\mathbf{y}}^{\mathrm{r}}$. Hence, the rotated ideal MPC problem is of tracking type.
Assumption~\ref{a:cont} directly provides a lower bound in terms of a $\mathcal{K}_\infty$ function, and can also be used to prove an upper bound~\cite[Theorem 2.19]{rawlings2009model}, such that the following holds
\begin{equation*}
\alpha_1(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|) \leq{} \bar V^\mathrm{i}({\mathbf{x}}_k,t_k)\leq{} \alpha_2(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|),
\end{equation*}
where $\alpha_1,\alpha_2\in\mathcal{K}_\infty$. Then, solving Problem~\eqref{eq:ideal_rot_nmpc}, we obtain $\bar V^{\mathrm{i}}({\mathbf{x}}_k,t_k)$ and the optimal trajectories $\{\xb[k]^\mathrm{i},...,\xb[k+N]^\mathrm{i}\}$ and $\{\ub[k]^\mathrm{i},...,\ub[k+N-1]^\mathrm{i}\}$. By relying on Assumptions~\ref{a:rec_ref} and~\ref{a:terminal_for_rotated}, using the terminal control law $\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}$, we can construct the feasible sub-optimal trajectories $\{\xb[k+1]^\mathrm{i},...,\xb[k+N]^\mathrm{i},f_{k+N}(\xb[k+N]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^\r})\}$ and $\{\ub[k+1]^\mathrm{i},...,\ub[k+N-1]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}\}$ at time $k+1$, which can be used to derive the decrease condition following standard arguments~\cite{rawlings2009model,borrelli2017predictive}:
$$\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1},t_{k+1})-\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)\leq{}-\alpha_3(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|).$$
This entails that the \emph{rotated} \emph{ideal} value function $\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)$ is a Lyapunov function, and that the closed-loop system is asymptotically stabilized to ${\mathbf{x}}^{\mathrm{r}}$.
Finally, using Lemma~\ref{lem:rot_mpc} we establish asymptotic stability also for the \emph{ideal} MPC scheme~\eqref{eq:ideal_nmpc}, since the primal solutions of the two problems coincide.
\end{proof}
Theorem~\ref{thm:as_stab_0} establishes the first step towards the desired result:
an MPC problem can be formulated using an \emph{infeasible reference}, which stabilizes system~\eqref{eq:sys} to the optimal trajectory of Problem~\eqref{eq:ocp} provided that the appropriate terminal conditions are used.
At this stage, the main issue is how to construct the terminal constraint set as
a positive invariant set containing ${\x}^{\mathrm{r}}$, and the terminal control law stabilizing the system to ${\x}^{\mathrm{r}}$.
To that end, one needs to know the feasible reference trajectory~${\x}^{\mathrm{r}}$, i.e., to solve Problem~\eqref{eq:ocp}. Since solving Problem~\eqref{eq:ocp} is not practical, we show in the next subsection how sub-optimal terminal conditions can be used instead, and prove ISS for the closed-loop system.
\subsection{Practical MPC and ISS}\label{sec:iss}
In this subsection, we analyze the case in which the terminal conditions are not enforced based on the feasible reference trajectory, but
rather based on an \emph{approximately feasible} reference (see Assumption~\ref{a:approx_feas}).
Since in that case asymptotic stability cannot be proven, we will prove ISS for the closed-loop system, where the input is some terminal reference ${\mathbf{y}}^{\mathrm{f}}$. In particular, we are interested in the practical approach ${\mathbf{y}}^{\mathrm{f}}=\r(t_{k+N})$ and the ideal setting ${\mathbf{y}}^{\mathrm{f}}={\mathbf{y}}^{\mathrm{r}}(t_{k+N})$.
To that end, we define the following closed-loop dynamics
\begin{align}\label{eq:iss_dynamics}
{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}) = f_k({\mathbf{x}}_{k},\u_\mathrm{MPC}({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f})) = \bar f_k({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f}),
\end{align}
where we stress that~$\u_\mathrm{MPC}$ is obtained as~$\ub[k]^\star$ solving problem~\eqref{eq:nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}=\r$ and terminal cost $p_\r$); or as~$\ub[k]^\mathrm{i}$ solving the ideal problem~\eqref{eq:ideal_nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$ and terminal cost $p_{\tilde{\mathbf{y}}^\r}$). In the following we will use the notation ${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})$ to stress that the terminal reference ${\mathbf{y}}^\mathrm{f}$ is used in the computation of the control yielding the next state. Additionally, we define
the following quantities
\begin{align*}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N}), \\
\bar J_\r^{\mathrm{i}}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p_\r(\xb[k+N]^\mathrm{i},t_{k+N}),
\end{align*}
and we remind that
\begin{align*}
\bar V({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_\r(\xb[k+N]^\star,t_{k+N}),\\
\bar V^\mathrm{i}({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N}).
\end{align*}
Before formulating the stability result in the next theorem, we need to introduce an additional assumption on the reference infeasibility.
\begin{Assumption}[Approximate feasibility of the reference]
\label{a:approx_feas}
The reference ${\mathbf{y}}^{\mathrm{f}}$ satisfies the constraints \eqref{eq:nmpcInequality_known}, i.e., $h({\x}^{\mathrm{f}}_n,{\u}^{\mathrm{f}}_n) \leq{} 0$, $n\in \mathbb{I}_k^{k+N-1}$, for all $k\in\mathcal{N}^+$. Additionally, recursive feasibility holds for both Problem~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} when the system is controlled in closed-loop using the feedback from Problem~\eqref{eq:nmpc}.
\end{Assumption}
\begin{Remark}
Assumption~\ref{a:approx_feas} essentially only requires that the reference used in the definition of the terminal conditions (constraint and cost) is feasible with respect to the system constraints, and not with respect to the system dynamics. However, recursive feasibility holds if the reference satisfies, e.g., $\|{\mathbf{x}}_{n+1}^\mathrm{f}-f_n({\mathbf{x}}_n^\mathrm{f},\u_n^\mathrm{f})\|\leq{}\epsilon$ for some small $\epsilon$, i.e., if the reference approximately satisfies the system dynamics.
Note that, if $\epsilon=0$, then Assumption~\ref{a:rec_ref} is satisfied and Assumption~\ref{a:approx_feas} is not needed anymore. Finally, the infeasibility due to $\epsilon\neq0$ could be formally accounted for so as to satisfy Assumption~\ref{a:approx_feas} by taking a robust MPC approach, see, e.g.,~\cite{Mayne2005,Chisci2001}.
\end{Remark}
From a practical standpoint, Assumption~\ref{a:approx_feas} sets a rather mild requirement. In fact, it is not uncommon to use references that are infeasible for the sake of simplicity, or that only satisfy approximate system dynamics capturing the most relevant dynamics of the system (keeping $\epsilon$ small).
{We are now ready to state the main result of the paper.}
\begin{Theorem}\label{thm:iss}
Suppose that Problem~\eqref{eq:ocp} is feasible and Assumptions~\ref{a:cont} and~\ref{a:terminal} hold for the reference ${\mathbf{y}}^{\mathrm{r}}$ with costs $\bar{q}_\r$ and $\bar{p}_{{\tilde\y}^\r}$ and terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\r}$. Suppose moreover that Problem~\eqref{eq:nmpc} and Problem~\eqref{eq:ideal_nmpc} are feasible at time $k$ with initial state $({\mathbf{x}}_k,t_k)$, and that reference ${\mathbf{y}}^\mathrm{f}$, with terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{f}}$, satisfies Assumption~\ref{a:approx_feas}. Then, system~\eqref{eq:iss_dynamics} obtained from~\eqref{eq:sys} in closed-loop with MPC formulation~\eqref{eq:nmpc} is ISS.
\end{Theorem}
\begin{proof}
We prove the result using the value function $\bar V^\mathrm{i}({\mathbf{x}}_k,t_k)$ of the rotated ideal Problem~\eqref{eq:ideal_rot_nmpc} as an ISS-Lyapunov function candidate \cite{jiang2001input}. From the prior analysis in Theorem \ref{thm:as_stab_0} we know that Assumption~\ref{a:rec_ref} holds for ${\mathbf{y}}^{\mathrm{r}}$ since Problem~\eqref{eq:ocp} is feasible, and that $\bar V^\mathrm{i}({\mathbf{x}}_k,t_k)$ is a Lyapunov function {when the \emph{ideal} terminal conditions} ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$ are used. Hence, when {we apply the ideal control input $\ub[k][k]^\mathrm{i}$, i.e., use \eqref{eq:iss_dynamics} to obtain the next state ${\mathbf{x}}_{k+1}({\mathbf{y}}^{\mathrm{r}})=\bar{f}_k({\mathbf{x}}_k,{\mathbf{y}}^{\mathrm{r}})$}, we have the following relations
\begin{align*}
\alpha_1(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|) \leq \bar V^\mathrm{i}({\mathbf{x}}_k,t_k) \leq \alpha_2(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|),\\
\bar V^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^{\mathrm{r}}),t_{k+1}) - \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) \leq -\alpha_3(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|),
\end{align*}
with $\alpha_i\in \mathcal{K}_\infty$, $i=1,2,3$.
We are left with proving ISS, i.e., that there exists $\sigma\in\mathcal{K}$ such that, when the reference ${\mathbf{y}}^\mathrm{f}$ is treated as an external input and the next state is given by ${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})=\bar f_k({\mathbf{x}}_k,{\mathbf{y}}^\mathrm{f})$, the following holds
\begin{align}\begin{split}
\label{eq:iss_decrease}
\bar V^\mathrm{i}({{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})},t_{k+1})- \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k)\leq&\sigma( \| {\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^{\mathrm{r}} \| )\\&-\alpha_3(\| {\mathbf{x}}_k-{\x}^{\mathrm{r}}_k \|).
\end{split}\end{align}
In order to bound $\bar V^\mathrm{i}({{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})},t_{k+1}) - \bar V^\mathrm{i}({\mathbf{x}}_{k},t_{k})$, we first derive an upper bound on $\bar J_\r^\mathrm{i}$ which depends on $\bar V^\mathrm{i}$.
To that end, we observe that the rotated cost of the ideal trajectory $\xb^\mathrm{i}$, $\ub^\mathrm{i}$ satisfies
\begin{align*}
\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)&= \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k)-\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N})\\
&+\bar p_\r(\xb[k+N]^\mathrm{i},t_{k+N}).
\end{align*}
Defining
\begin{align*}
\phi({\mathbf{y}}^\mathrm{f})&:=\bar p_{{\mathbf{y}}^\mathrm{f}}(\xb[k+N]^\mathrm{i},t_{k+N})-\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N}),
\end{align*}
there exists a $\sigma_1 \in \mathcal{K}$ such that $\phi({\mathbf{y}}^\mathrm{f}) \leq{} \sigma_1(\|{\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^\r\|)$
since, by~\eqref{eq:terminal_cost}, $\phi({\mathbf{y}}^\mathrm{f})$ is a continuous function of ${\mathbf{y}}^\mathrm{f}$ and $\phi({\mathbf{y}}^\mathrm{r})=0$.
Then, the following upper bound is obtained
\begin{align*}
\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)&\leq \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma_1(\| {\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^\mathrm{r} \| ).
\end{align*}
Upon solving Problem~\eqref{eq:nmpc}, we obtain $\bar V({\mathbf{x}}_{k},t_k)\leq\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)$. Starting from the optimal solution ${\mathbf{x}}^\star$, and $\u^\star$, we will construct an upper bound on the decrease condition. To that end, we first need to evaluate the cost of this trajectory, i.e.,
\begin{align*}\begin{split}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_{k},t_k)&=\bar V({\mathbf{x}}_{k},t_k)-\bar p_\r(\xb[k+N]^\star,t_{k+N})\\
&+\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N}).
\end{split}\end{align*}
Using the same reasoning as before, there exists $\sigma_2 \in \mathcal{K}$ such that
\begin{align*}
&\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N})-\bar p_\r(\xb[k+N]^\star,t_{k+N})\\
&\hspace{5em}\leq \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}^\mathrm{r}_{k+N} \| ).
\end{align*}
Then, we obtain
\begin{align}\label{eq:jbar}
\begin{split}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_{k},t_k) &\leq \bar V({\mathbf{x}}_{k},t_k) + \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ) \\
&\leq \bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ) \\
& \leq \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ),
\end{split}
\end{align}
where we defined $\sigma:=\sigma_1+\sigma_2$.
Proceeding similarly as in the proof of Proposition~\ref{prop:stable}, we apply the control input $\ub[k]^\star$ from~\eqref{eq:nmpc}, i.e., ${\mathbf{y}}^\mathrm{f}=\r$, to obtain $${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})=\bar{f}_k({\mathbf{x}}_k,{\mathbf{y}}^\mathrm{f}).$$
In order to be able to apply this procedure, we first assume that the obtained initial guess is feasible for the ideal problem~\eqref{eq:ideal_nmpc} and proceed as follows.
By Assumption~\ref{a:terminal_for_rotated}, we use the terminal control law {$\kappa_{{\mathbf{y}}^\r}^\mathrm{f}({\mathbf{x}},t)$} to form a guess at the next time step and upper bound the \emph{ideal} rotated value function. By optimality
\begin{align}\label{eq:iss_value_decrease}
\bar{V}^\mathrm{i}&({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) \leq{}\sum_{n=k+1}^{k+N-1}\bar{q}_\r(\xb^\star,\ub^\star,t_n)+\bar q_\r(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N})\\
&\nonumber\hspace{4em}+\bar p_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})\\
&\nonumber=\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\bar{q}_\r(\xb[k]^\star,\ub[k]^\star,t_k)-\bar{p}_{{\tilde\y}^\r}(\xb[k+N]^\star,t_{k+N})\\
&\nonumber+\bar{p}_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})+\bar{q}_{\r}(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N}),
\end{align}
where we used
$$\xb[k+N+1]^{\star,\kappa}\hspace{-0.2em}:= \hspace{-0.2em}f_{k+N}(\xb[k+N],\kappa_{{\mathbf{y}}^\r}),\, \kappa_{{\mathbf{y}}^\r}\hspace{-0.2em}:=\hspace{-0.2em}\kappa_{{\mathbf{y}}^\r}(\xb[k+N]^\star,t_{k+N}),$$
{and assumed that $\xb[k+N+1]^{\star,\kappa}\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\r}(t_{k+N+1})$}. Again, using Assumption~\ref{a:terminal_for_rotated} we can now upper bound the terms
\begin{align*} \bar{p}_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})-\bar{p}_{{\tilde\y}^\r}(\xb[k+N]^\star,t_{k+N})\\
+\bar{q}_\r(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N})\leq{}0,
\end{align*}
so that~\eqref{eq:iss_value_decrease} can be written as
\begin{align}
\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) &\leq{}\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\bar{q}_\r(\xb[k]^\star,\ub[k]^\star,t_{k}),\\
\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) &\leq{}\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\alpha_3(\|{\mathbf{x}}_k-{\mathbf{x}}^\r_k\|),\label{eq:iss_bound_decr}
\end{align}
which, combined with~\eqref{eq:jbar}, proves~\eqref{eq:iss_decrease}.
In case ${\xb[k+N+1]^{\star,\kappa}\not\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{r}}(t_{k+N+1})}$,
we resort to a relaxation of the terminal constraint with an exact penalty~\cite{Scokaert1999a,Fletcher1987} in order to compute an upper bound to the cost. This relaxation has the property that the solution of the relaxed formulation coincides with the one of the non-relaxed formulation whenever it exists. Then, by construction, the cost of an infeasible trajectory is higher than that of the feasible solution.
{Finally, from Assumption~\ref{a:approx_feas} we know that the value functions $\bar{V}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1})$ and $\bar V^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1})$ are feasible and bounded for time $k+1$.}
\end{proof}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{iss_closed.eps}
\caption{Closed-loop simulation with initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. The gray trajectories show the infeasible reference $\r=(\r^{\mathbf{x}},\r^\u)$, while the black trajectories show the optimal reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$ obtained from Problem~\eqref{eq:ocp}. The orange trajectories show the closed-loop behavior for the practical MPC Problem~\eqref{eq:nmpc}, while the blue trajectories show the closed-loop behavior for the \emph{ideal} MPC Problem~\eqref{eq:ideal_nmpc}.}
\label{fig:mpatc_1_states}
\end{figure*}
This theorem proves that one can use an infeasible reference, at the price of not converging exactly to the (unknown) optimal trajectory of OCP~\eqref{eq:ocp}, with an inaccuracy that depends on how inaccurate the terminal reference is. It is important to remark that, as proven in~\cite{Zanon2018a,Faulwasser2018}, since the MPC formulation has the turnpike property, the effect of the terminal condition on the closed-loop trajectory decreases as the prediction horizon increases.
\begin{Remark}
We note that it may be possible to prove similar results for general nonlinear systems if there exists a storage function such that strict dissipativity holds for the rotated cost functions~\cite{muller2014necessity}. Future research will investigate ways to extend the results of Theorems~\ref{thm:as_stab_0} and~\ref{thm:iss} to general nonlinear systems.
\end{Remark}
\section{Simulations}\label{sec:simulations}
In this section we implement the robotic example in~\cite{Faulwasser2009} to illustrate the results of Theorems~\ref{thm:as_stab_0} and \ref{thm:iss}. We will use the quadratic stage and terminal costs in \eqref{eq:stage_cost}-\eqref{eq:terminal_cost}, i.e.,
\begin{gather*}
q_\r(\xb,\ub,t_n) := \matr{c}{\xb-\rx_n\\\ub-\ru_n}^\top{}W\matr{c}{\xb-\rx_n\\\ub-\ru_n},\\
p_\r(\xb,t_{n}) := (\xb-\rx_{n})^\top{}P(\xb-\rx_{n}).
\end{gather*}
We consider the system presented in~\cite{Faulwasser2009}, i.e., an actuated planar robot with two degrees of freedom with dynamics
\begin{align}
\matr{c}{\dot{x}_1\\\dot{x}_2} &= \matr{c}{ x_2\\B^{-1}(x_1)(u-C(x_1,x_2)x_2-g(x_1))},\label{eq:robot}
\end{align}
where $x_1=(q_1,q_2)$ are the joint angles, $x_2=(\dot{q}_1,\dot{q}_2)$ the joint velocities, and $B$, $C$, and $g$ are given by
\begin{subequations}\label{eq:modelparams}
\begin{align*}
B(x_1) &:= \matr{cc}{200+50\cos(q_2) & 23.5+25\cos(q_2)\\
23.5+25\cos(q_2) & 122.5},\\
C(x_1,x_2) &:= 25\sin(q_2)\matr{cc}{\dot{q}_1 & \dot{q}_1+\dot{q}_2\\
-\dot{q}_1 & 0}\\
g(x_1) &:= \matr{c}{784.8\cos(q_1)+245.3\cos(q_1+q_2)\\
245.3\cos(q_1+q_2)},
\end{align*}
\end{subequations}
and with following constraints on the state and control
\begin{align}\label{eq:box_constr}
\|x_2\|_\infty\leq{}\tfrac{3}{2}\pi, && \|u\|_\infty\leq{}4000.
\end{align}
By transforming the control input as
$$u = C(x_1,x_2)x_2+g(x_1)+B(x_1)v,$$
system~\eqref{eq:robot} can be rewritten into a linear system
\begin{align}
\matr{c}{\dot{x}_1\\\dot{x}_2} &= \matr{c}{ x_2\\v},\label{eq:robot_linear}
\end{align}
subject to the non-linear input constraint
\begin{equation}
\|C(x_1,x_2)x_2+g(x_1)+B(x_1)v\|_\infty\leq{}4000.
\end{equation}
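As an illustration of this transformation, a minimal Python sketch (our own; function names are ours, and only the box constraints~\eqref{eq:box_constr} are checked) could read as follows:
\begin{verbatim}
import numpy as np

def robot_matrices(x1, x2):
    # Model matrices B, C, g of the planar robot, as given above
    q1, q2 = x1
    dq1, dq2 = x2
    B = np.array([[200 + 50*np.cos(q2), 23.5 + 25*np.cos(q2)],
                  [23.5 + 25*np.cos(q2), 122.5]])
    C = 25*np.sin(q2)*np.array([[dq1, dq1 + dq2],
                                [-dq1, 0.0]])
    g = np.array([784.8*np.cos(q1) + 245.3*np.cos(q1 + q2),
                  245.3*np.cos(q1 + q2)])
    return B, C, g

def torque_from_v(x1, x2, v):
    # Feedback-linearizing transformation u = C x2 + g + B v
    B, C, g = robot_matrices(x1, x2)
    return C @ np.asarray(x2) + g + B @ np.asarray(v)

def constraints_ok(x2, u):
    # Box constraints ||x2||_inf <= 3*pi/2 and ||u||_inf <= 4000
    return np.max(np.abs(x2)) <= 1.5*np.pi and np.max(np.abs(u)) <= 4000.0
\end{verbatim}
Given a linearizing input $v$ computed for~\eqref{eq:robot_linear}, the joint torque actually applied to~\eqref{eq:robot} is recovered through \texttt{torque\_from\_v}.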
Similar to~\cite{Faulwasser2009}, we use
\begin{equation}\label{eq:path}
p(\theta)=\left (\theta-\frac{\pi}{3},\,5\sin\left (0.6 \left (\theta-\frac{\pi}{3}\right )\right )\right ),
\end{equation}
with $\theta\in[-5.3,0]$ as the desired path to be tracked, and define the timing law, with $t_0=0\ \mathrm{s}$, to be given by
\begin{align*}
\theta(t_0) = -5.3,\, \dot{\theta}(t) = \frac{v_\mathrm{ref}(t) }{\left \| \nabla_\theta p(\theta(t))\right \|_2},\, v_\mathrm{ref}(t) =\left \{
\begin{array}{@{}ll@{}}
1 & \hspace{-0.5em}\text{if } \theta<0\\
0 & \hspace{-0.5em}\text{if }\theta\geq{}0
\end{array}
\right . .
\end{align*}
This predefined path evolution implies that the norm of the reference trajectory for the joint velocities will be $1\ \mathrm{rad/s}$ for all $\theta<0$ and zero at the end of the path. Hence, we use the following reference trajectories
\begin{align*}
\r^{\mathbf{x}}(t) &= \matr{cc}{p(\theta(t)) &\frac{\partial{p}}{\partial\theta}\dot{\theta}(t)}^\top\hspace{-0.3em},\
\r^\u(t) = \matr{c}{ \frac{\partial^2 p}{\partial\theta^2}\dot{\theta}^2+\frac{\partial p}{\partial \theta}\ddot{\theta}}^\top\hspace{-0.3em},
\end{align*}
which have a discontinuity at $\theta=0$.
For the stage cost we use $W = \mathrm{blockdiag}(Q,R)$ with
\begin{align*}
Q=\mathrm{diag}(10,10,1,1),\
R=\mathrm{diag}(1,1).
\end{align*}
The terminal cost matrix is computed using an LQR controller with the cost defined by $Q$ and $R$ and is given by
$$ P = \matr{cc}{290.34\cdot{}\mathbf{1}^2 &105.42\cdot{}\mathbf{1}^2\\105.42\cdot{}\mathbf{1}^2&90.74\cdot{}\mathbf{1}^2}\in\mathbb{R}^{4\times4},$$
where $\mathbf{1}^2\in\mathbb{R}^{2\times2}$ is the $2\times2$ identity matrix. The corresponding terminal set is then given by
\begin{equation*}
\mathcal{X}^\mathrm{f}_\r(t_n) =\{ {\mathbf{x}}\, |\, ({\mathbf{x}}-\r^{\mathbf{x}}_n)^\top P({\mathbf{x}}-\r^{\mathbf{x}}_n) \leq{} 61.39\}.
\end{equation*}
For detailed derivations of the terminal cost and terminal set, we refer the reader to the Appendix in~\cite{Faulwasser2016,batkovic2020safe}.
In order to obtain the feasible reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$, we approximate the infinite horizon Problem~\eqref{eq:ocp} with a prediction horizon of $M=1200$ and sampling time $t_\mathrm{s}=0.03\ \mathrm{s}$. For the closed-loop simulations, we use the control input obtained from formulations~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} with horizon $N=10$ and sampling time $t_\mathrm{s}= 0.03\ \mathrm{s}$. Note that we used the linear system~\eqref{eq:robot_linear} with its corresponding state and input constraints for all problem formulations. Furthermore, all simulations ran on a laptop computer (i5 2GHz, 16GB RAM) and were implemented in Matlab using the CasADi~\cite{Andersson2019} software together with the IPOPT~\cite{wachter2006implementation} solver.
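For concreteness, a minimal Python/CasADi sketch of the tracking MPC~\eqref{eq:nmpc} for the linear system~\eqref{eq:robot_linear} is given below. This is our own reconstruction, not the Matlab implementation used for the results: the forward-Euler discretization, the variable names, and the omission of the nonlinear torque constraint are simplifying assumptions.
\begin{verbatim}
import casadi as ca
import numpy as np

ts, N = 0.03, 10                        # sampling time and horizon
Q = np.diag([10.0, 10.0, 1.0, 1.0])
R = np.diag([1.0, 1.0])
P = np.block([[290.34*np.eye(2), 105.42*np.eye(2)],
              [105.42*np.eye(2),  90.74*np.eye(2)]])

opti = ca.Opti()
X = opti.variable(4, N + 1)             # states (q1, q2, dq1, dq2)
V = opti.variable(2, N)                 # linearizing input v
x0   = opti.parameter(4)                # current state
Xref = opti.parameter(4, N + 1)         # sampled (possibly infeasible) reference r^x
Vref = opti.parameter(2, N)             # sampled reference input r^u

cost = 0
for n in range(N):
    dx = X[:, n] - Xref[:, n]
    du = V[:, n] - Vref[:, n]
    cost += ca.mtimes([dx.T, Q, dx]) + ca.mtimes([du.T, R, du])
    # forward-Euler discretization of the double integrator (assumption)
    x_next = X[:, n] + ts*ca.vertcat(X[2:4, n], V[:, n])
    opti.subject_to(X[:, n + 1] == x_next)
    opti.subject_to(opti.bounded(-1.5*np.pi, X[2:4, n], 1.5*np.pi))
    # nonlinear torque constraint omitted here for brevity
dN = X[:, N] - Xref[:, N]
cost += ca.mtimes([dN.T, P, dN])                     # terminal cost
opti.subject_to(ca.mtimes([dN.T, P, dN]) <= 61.39)   # terminal set
opti.subject_to(X[:, 0] == x0)
opti.minimize(cost)
opti.solver('ipopt')
# Receding horizon: at each step set x0, Xref, Vref via opti.set_value(...),
# call sol = opti.solve(), and apply the first input sol.value(V[:, 0]).
\end{verbatim}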
Figure \ref{fig:mpatc_1_states} shows the closed-loop trajectories for the initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. The gray lines denote the infeasible reference $\r=(\r^{\mathbf{x}},\r^\u)$ for each state while the black lines denote the optimal reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$ from~\eqref{eq:ocp}. The orange lines show the closed-loop evolution for the practical MPC Problem~\eqref{eq:nmpc}, i.e., when the terminal conditions are based on the infeasible reference ${\mathbf{y}}^\mathrm{f}=\r$. The blue lines instead show the closed-loop evolution for the \emph{ideal} MPC Problem~\eqref{eq:ideal_nmpc}, where the terminal conditions are based on the optimal reference from Problem~\eqref{eq:ocp}, i.e., ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$. The bottom right plot of Figure~\ref{fig:mpatc_1_states} shows that the closed-loop trajectories for both the practical MPC (orange lines) and the \emph{ideal} MPC (blue lines) converge towards the reference $\r$ for times $t\leq{}5\ \mathrm{s}$. For $5\ \mathrm{s}\leq{}t\leq{}9\ \mathrm{s}$, we can see that the discontinuity of the reference trajectory $\r$ affects how the two formulations behave. The \emph{ideal} formulation manages to track the optimal reference ${\mathbf{y}}^{\mathrm{r}}$ (black trajectory), while the practical formulation instead tries to track the infeasible reference $\r$ and therefore deviates compared to the \emph{ideal} formulation. After the discontinuity, the rest of the reference trajectory is feasible and both formulations are asymptotically stable.
\section{Conclusions}\label{sec:conclusions}
The use of infeasible references in MPC formulations is of great interest due to its convenience and simplicity. In this paper, we have discussed how such references affect the tracking performance of MPC formulations. We have proved that MPC formulations can yield asymptotic stability to an optimal trajectory when terminal conditions are suitably chosen. In addition, we also proved that the stability results can be extended to sub-optimal terminal conditions, in which case the controlled system is stabilized around a neighborhood of the optimal trajectory. Future research will investigate ways to extend the stability results to general nonlinear systems.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Neutrinos of astrophysical and cosmological origin have been crucial for unraveling neutrino masses and properties. Solar neutrinos provided the first evidence for neutrino oscillations, and hence massive neutrinos. We know that at least two massive neutrinos should exist, as required by the two distinct squared mass differences measured, the atmospheric $\lvert\Delta m^2_{31}\rvert \approx 2.51\cdot 10^{-3}$~eV$^2$ and the solar $\Delta m^2_{21} \approx 7.42\cdot 10^{-5}$~eV$^2$ splittings~\cite{deSalas:2020pgw,Esteban:2020cvm,Capozzi:2021fjo}~\footnote{The current ignorance on the sign of $\Delta m^2_{31}$ is translated into two possible mass orderings. In the \emph{normal} ordering (NO), the total neutrino mass is $\sum m_\nu \gtrsim 0.06$~eV, while in the \emph{inverted} ordering (IO) it is $\sum m_\nu \gtrsim 0.10$~eV.}. However, neutrino oscillation experiments are not sensitive to the absolute neutrino mass scale. On the other hand, cosmological observations currently provide the most constraining upper bound on the total neutrino mass via relic neutrinos, $\sum m_\nu<0.09$~eV at $95\%$~CL~\cite{DiValentino:2021hoh}, where the sum runs over the distinct neutrino mass states. However, this limit is model-dependent, see for example~\cite{DiValentino:2015sam,Palanque-Delabrouille:2019iyz,Lorenz:2021alz,Poulin:2018zxs,Ivanov:2019pdj,Giare:2020vzo,Yang:2017amu,Vagnozzi:2018jhn,Gariazzo:2018meg,Vagnozzi:2017ovm,Choudhury:2018byy,Choudhury:2018adz,Gerbino:2016sgw,Yang:2020uga,Yang:2020ope,Yang:2020tax,Vagnozzi:2018pwo,Lorenz:2017fgo,Capozzi:2017ipn,DiValentino:2021zxy,DAmico:2019fhj,Colas:2019ret}.
The detection of supernova (SN) neutrinos can also provide constraints on the neutrino mass, by exploiting the time of flight delay~\cite{Zatsepin:1968ktq} experienced by a neutrino of mass $m_\nu$ and energy $E_\nu$:
\begin{equation}
\label{eq:delay}
\Delta t = \frac{D}{2c}\left(\frac{m_\nu}{E_{\nu}}\right)^2~,
\end{equation}
\noindent where $D$ is the distance travelled by the neutrino. This method probes the same neutrino mass constrained via laboratory-based kinematic measurements of beta-decay electrons~\footnote{The current limit from the tritium beta decay experiment KATRIN (Karlsruhe Tritium Neutrino) is $m_{\beta}<0.8$~eV~\cite{Aker:2021gma} and the expected sensitivity is 0.2~eV~\cite{Drexlin:2013lha}, both at 90\% CL.}. Using neutrinos from SN1987A~\cite{Kamiokande-II:1989hkh,Kamiokande-II:1987idp,Bionta:1987qt,Alekseev:1988gp,Alekseev:1987ej}, an upper limit of $m_\nu<5.8$~eV at $95\%$ confidence level (CL) has been derived~\cite{Pagliaroli:2010ik} (see also Ref.~\cite{Loredo:2001rx}). Prospects for future SN explosions may reach the sub-eV level~\cite{Pagliaroli:2010ik,Nardi:2003pr,Nardi:2004zg,Lu:2014zma,Hyper-Kamiokande:2018ofw,Hansen:2019giq}. Nevertheless, these forecasted estimates rely on the detection of inverse $\beta$ decay events in water Cherenkov or liquid scintillator detectors, mostly sensitive to $\bar{\nu}_e$ events. An appealing alternative possibility is the detection of the $\nu_e$ neutronization burst exploiting the liquid argon technology at the DUNE far detector~\cite{DUNE:2020zfm,Rossi-Torres:2015rla}. The large statistics and the very distinctive neutrino signal in time will ensure a unique sensitivity to the neutrino mass signature via time delays.
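As an illustrative order of magnitude, taking $D=10$~kpc ($\simeq 3.1\times 10^{20}$~m), $m_\nu=1$~eV and $E_\nu=10$~MeV, Eq.~\ref{eq:delay} gives
\begin{equation*}
\Delta t = \frac{D}{2c}\left(\frac{m_\nu}{E_\nu}\right)^2 \simeq 5~\mathrm{ms},
\end{equation*}
a delay comparable to the time structure of the neutronization burst discussed in the next section.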
\section{Supernova electron neutrino events} \label{sec:events}
Core-collapse supernovae emit $99\%$ of their energy ($\simeq 10^{53}$~ergs) in the form of (anti)neutrinos of all flavors with mean energies of $\mathcal{O}(10~\si{\mega\electronvolt})$. The explosion mechanism of a core-collapse SN can be divided into three main phases: the \emph{neutronization burst}, the \emph{accretion phase} and the \emph{cooling phase}. The first phase, which lasts approximately 25 milliseconds, is due to a fast \emph{neutronization} of the stellar core via electron capture by free protons, causing an emission of electron neutrinos ($e^- + p\rightarrow \nu_e + n$). The flux of $\nu_e$ stays trapped behind the shock wave until it reaches sufficiently low densities for neutrinos to be suddenly released. Unlike subsequent phases, the neutronization burst phase has little dependence on the progenitor star properties. In numerical simulations, there is a second \emph{accretion} phase of $\sim 0.5$~s in which the shock wave leads to a hot accretion mantle around the high density core of the neutron star. High luminosity $\nu_e$ and $\bar{\nu}_e$ fluxes are radiated via the processes $e^- + p\rightarrow \nu_e + n$ and $e^+ + n \rightarrow \bar{\nu}_e + p$ due to the large number of nucleons and the presence of a quasi-thermal $e^+e^-$ plasma. Finally, in the \emph{cooling} phase, a hot neutron star is formed. This phase is characterized by the emission of (anti)neutrino fluxes of all species within tens or hundreds of seconds.
For numerical purposes, we shall make use here of the following quasi-thermal parametrization, representing well detailed numerical simulations~\cite{Keil:2002in,Hudepohl:2009tyy,Tamborra:2012ac,Mirizzi:2015eza}:
\begin{equation}
\label{eq:differential_flux}
\Phi^{0}_{\nu_\beta}(t,E) = \frac{L_{\nu_\beta}(t)}{4 \pi D^2}\frac{\varphi_{\nu_\beta}(t,E)}{\langle E_{\nu_\beta}(t)\rangle}\,,
\end{equation}
which describes the differential flux for each neutrino flavor $\nu_\beta$ at a time $t$ after the core bounce of a SN located at a distance $D$. In Eq.~\ref{eq:differential_flux}, $L_{\nu_\beta}(t)$ is the $\nu_\beta$ luminosity, $\langle E_{\nu_\beta}(t)\rangle$ the mean neutrino energy and $\varphi_{\nu_\beta}(t,E)$ is the neutrino energy distribution, defined as:
\begin{equation}
\label{eq:nu_energy_distribution}
\varphi_{\nu_\beta}(t,E) = \xi_\beta(t) \left(\frac{E}{\langle E_{\nu_\beta}(t)\rangle}\right)^{\alpha_\beta(t)} \exp{\left\{\frac{-\left[\alpha_\beta(t) + 1\right] E}{\langle E_{\nu_\beta}(t)\rangle}\right\}}\,,
\end{equation}
\noindent where $\alpha_\beta(t)$ is a \emph{pinching} parameter and $\xi_\beta(t)$ is a unit-area normalization factor.
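For completeness, imposing that $\varphi_{\nu_\beta}(t,E)$ integrates to unity over the neutrino energy fixes the normalization factor of Eq.~\ref{eq:nu_energy_distribution} to
\begin{equation*}
\xi_\beta(t)=\frac{\left[\alpha_\beta(t)+1\right]^{\alpha_\beta(t)+1}}{\langle E_{\nu_\beta}(t)\rangle\,\Gamma\!\left[\alpha_\beta(t)+1\right]}\,.
\end{equation*}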
The input values for the luminosity, mean energy and pinching parameter have been obtained from the \texttt{SNOwGLoBES} software \cite{snowglobes}. \texttt{SNOwGLoBES} includes fluxes from the Garching Core-Collapse Modeling Group~\footnote{\url{https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/index.html}}, providing results from detailed (and computationally expensive) simulations for a progenitor star of $8.8 M_\odot$~\cite{Hudepohl:2009tyy}.
Neutrinos experience flavor conversion inside the SN as a consequence of their coherent interactions with electrons, protons and neutrons in the medium, being subject to the MSW (Mikheyev-Smirnov-Wolfenstein) resonances associated to the solar and atmospheric neutrino sectors~\cite{Dighe:1999bi}. After the resonance regions, the neutrino mass eigenstates travel incoherently in their way to the Earth, where they are detected as flavor eigenstates. The neutrino fluxes at the Earth ($\Phi_{\nu_e}$ and $\Phi_{\nu_\mu}=\Phi_{\nu_\tau}=\Phi_{\nu_x}$) can be written as:
\begin{eqnarray}
\label{eq:nue}
\Phi_{\nu_e}&= &p \Phi^{0}_{\nu_e} +(1-p) \Phi^{0}_{\nu_x}~;\\
\Phi_{\nu_\mu}+\Phi_{\nu_\tau} \equiv 2\Phi_{\nu_x} & =& (1-p) \Phi^{0}_{\nu_e} + (1+p) \Phi^{0}_{\nu_x}~,
\end{eqnarray}
\noindent where $\Phi^{0}$ refers to the neutrino flux in the SN interior, and the $\nu_e$ survival probability $p$ is given by $p = |U_{e3}|^2= \sin^2 \theta_{13}$ ($p \simeq |U_{e2}|^2 \simeq \sin^2 \theta_{12}$) for NO (IO), due to adiabatic transitions in the $H$ ($L$) resonance, which refer to flavor conversions associated to the atmospheric $\Delta m^2_{31}$ (solar $ \Delta m^2_{21}$) mass splitting, see e.g.~\cite{Dighe:1999bi}. Here we are neglecting possible non-adiabaticity effects occurring when the resonances occur near the shock wave \cite{Schirato:2002tg,Fogli:2003dw,Fogli:2004ff,Tomas:2004gr,Dasgupta:2005wn,Choubey:2006aq,Kneller:2007kg,Friedland:2020ecy}, as well as the presence of turbulence in the matter density \cite{Fogli:2006xy,Friedland:2006ta,Kneller:2010sc,Lund:2013uta,Loreti:1995ae,Choubey:2007ga,Benatti:2004hn,Kneller:2013ska,Fogli:2006xy}. The presence of non-linear collective effects~\cite{Mirizzi:2015eza,Chakraborty:2016yeg,Horiuchi:2018ofe,Tamborra:2020cul,Capozzi:2022slf} is suppressed by the large flavor asymmetries of the neutronization burst~\cite{Mirizzi:2015eza}.
Earth matter regeneration effects also affect the neutrino propagation in case the SN is shadowed by the Earth for the DUNE detector. The trajectories of the neutrinos depend on the SN location and on the time of the day at which the neutrino burst reaches the Earth. Neutrinos therefore travel a certain distance through the Earth characterized by a zenith angle $\theta$, analogous to the one usually defined for atmospheric neutrino studies. This convention assumes $\cos \theta=-1$ for upward-going events, \emph{i.e.} neutrinos that cross a distance equal to the Earth's diameter, and $\cos \theta\geq 0$ for downward-going neutrinos that are un-shadowed by the Earth. An analytical expression for the electron neutrino fluxes after crossing the Earth~\footnote{In what follows, we shall focus on electron neutrino events, the dominant channel in DUNE.} yields no modifications for NO.
In turn, for IO, an approximate formula for the $\nu_e$ survival probability in Eq.~\ref{eq:nue} and after crossing the Earth, assuming that SN neutrinos have traveled a distance $L(\cos\theta)$ inside the Earth in a constant-density medium, reads as~\cite{Dighe:1999bi,Lunardini:2001pb}:
\begin{widetext}
\begin{eqnarray}
\label{eq:p2e}
p & = & \sin^2\theta_{12} + \sin2\theta^m_{12} \,
\sin(2\theta^m_{12}-2\theta_{12})
\sin^2\left(
\frac{\Delta m^2_{21} \sin2\theta_{12}}{4 E \,\sin2\theta^m_{12}}\,L(\cos\theta)
\right)\,,
\end{eqnarray}
\end{widetext}
\noindent
where $\theta^m_{12}$ is the effective value of mixing angle $\theta_{12}$ in matter for neutrinos:
\begin{eqnarray}
\sin^2 2\theta^m_{12} = \frac{\sin^2 2\theta_{12}}
{\sin^2 2\theta_{12}+ \left(\cos 2\theta_{12}- \frac{2\sqrt{2}G_F N_e E}{\Delta m^2_{21}}\right)^2}~.
\end{eqnarray}
In the expression above, $N_e$ refers to the electron number density in the medium, $\sqrt{2}G_F N_e (\textrm{eV})\simeq 7.6 \times 10^{-14} Y_e\rho$, with $Y_e$ and $\rho$ the electron fraction and the Earth's density in g/cm$^3$ respectively.
Our numerical results are obtained calculating $p$ in Eq.~\ref{eq:p2e} in the general case of neutrino propagation in multiple Earth layers, with sharp edge discontinuities between different layers and a mild density dependence within a layer, see \cite{Lisi:1997yc,Fogli:2012ua}. Our method consists in evaluating the evolution operator for the propagation in a single layer using the Magnus expansion \cite{Magnus_exp}, where the evolution operator is written as the exponential of an operator series. In our case, we stop at the second order of the series. With the approximation of the electron density being a fourth order polynomial as a function of the Earth radius, the integrals involved in the Magnus expansion become analytical. The evolution operator over the entire trajectory in the Earth is simply the product of the operators corresponding to each crossed layer.
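As a minimal numerical illustration of Eq.~\ref{eq:p2e} (a single constant-density layer, not the multi-layer Magnus-expansion implementation used for our results), one could use the following Python sketch; the function and variable names are ours, and $\sin^2\theta_{12}=0.31$ is taken as an indicative best-fit value:
\begin{verbatim}
import numpy as np

HBARC = 1.973e-7       # hbar*c in eV*m
SQRT2_GF_NE = 7.6e-14  # sqrt(2) G_F N_e in eV per unit Y_e*rho [g/cm^3]

def nu_e_survival_prob(E_MeV, L_m, Ye_rho,
                       dm2_21=7.42e-5, sin2_th12=0.31):
    # Approximate nu_e survival probability p (IO case) after traveling
    # L_m meters in matter of constant Y_e*rho [g/cm^3]
    E_eV = E_MeV*1e6
    th12 = np.arcsin(np.sqrt(sin2_th12))
    A = 2.0*SQRT2_GF_NE*Ye_rho*E_eV      # 2*sqrt(2) G_F N_e E, in eV^2
    # matter mixing angle (equivalent to the sin^2(2 theta_12^m) formula)
    th12_m = 0.5*np.arctan2(np.sin(2*th12), np.cos(2*th12) - A/dm2_21)
    # oscillation phase of Eq. (p2e), converted with hbar*c
    phase = dm2_21*np.sin(2*th12)*L_m/(4.0*E_eV*np.sin(2*th12_m)*HBARC)
    return (sin2_th12
            + np.sin(2*th12_m)*np.sin(2*th12_m - 2*th12)*np.sin(phase)**2)

# Example (indicative numbers): 20 MeV neutrino crossing ~6000 km of
# mantle-like matter with Y_e*rho ~ 2 g/cm^3:
# print(nu_e_survival_prob(20.0, 6.0e6, Ye_rho=2.0))
\end{verbatim}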
The neutrino interaction rate per unit time and energy in the DUNE far detector is defined as:
\begin{equation}
\label{eq:rate_DUNE_fun}
R(t,E) = N_\text{target}~\sigma_{\nu_e\text{CC}}(E)~\epsilon(E)~\Phi_{\nu_e}(t,E)~,
\end{equation}
\noindent where $t$ is the neutrino emission time, $E$ is the neutrino energy, $N_\text{target}=\num{6.03e32}$ is the number of argon nuclei for a $40$ kton fiducial mass of liquid argon, $\sigma_{\nu_e\text{CC}}(E)$ is the $\nu_e$ cross-section, $\epsilon(E)$ is the DUNE reconstruction efficiency and $\Phi_{\nu_e}(t,E)$ is the electron neutrino flux reaching the detector per unit time and energy. The total number of expected events is given by $R\equiv \int R(t,E)\mathop{}\!\mathrm{d} t \mathop{}\!\mathrm{d} E$.
As far as cross-sections are concerned, liquid argon detectors are mainly sensitive to electron neutrinos via their charged-current interactions with $^{40}$Ar nuclei, $\nu_e + {^{40} Ar} \rightarrow e^{-} + {^{40} K^{*}}~$, through the observation of the final state electron plus the de-excitation products (gamma rays, ejected nucleons) from $^{40} K^{*}$. We use the MARLEY~\footnote{MARLEY (Model of Argon Reaction Low Energy Yields) is a Monte Carlo event generator for neutrino interactions on argon nuclei at energies of tens-of-MeV and below, see \url{http://www.marleygen.org/} and Ref.~\cite{Gardiner:2021qfr}.} charged-current $\nu_e$ cross-section on $^{40}$Ar, implemented in \texttt{SNOwGLoBES} \cite{snowglobes}. Concerning event reconstruction, we assume the efficiency curve as a function of neutrino energy given in Ref.~\cite{DUNE:2020zfm}, for the most conservative case quoted there of 5~MeV as deposited energy threshold.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{IO_eventsperbin_1ms.pdf}
\caption{\label{fig:events}Number of $\nu_e$ events per unit time in the DUNE far detector. The plot zooms into the first 50~ms since core bounce. A SN distance of 10~kpc is assumed. Several histograms are shown: neglecting oscillations (both in the SN and in the Earth), as well as including oscillations for the NO and IO cases. For IO, we show the variation of the Earth matter effects with zenith angle $\theta$.}
\end{center}
\end{figure}
Figure~\ref{fig:events} shows the number of $\nu_e$ events as a function of emission time at the DUNE far detector from a SN explosion at $10$~kpc from Earth, for negligible time delays due to non-zero neutrino masses. We illustrate the case where no oscillations are considered. We also account for oscillations in NO and IO cases, the latter for several possible SN locations with respect to the Earth. The neutronization burst is almost entirely (partially) suppressed in the normal (inverted) mass ordering.
For a SN located at $D=10$~kpc from the Earth and without Earth matter effects, $R$ is found to be 860, 1372 and 1228 for the no oscillations, NO and IO cases, respectively. In other words, the largest total event rate is obtained for the largest swap of electron with muon/tau neutrinos in the SN interior, \emph{i.e.} the smallest value of $p$ in Eq.~\ref{eq:nue}, corresponding to the NO case. This can be understood from the larger average neutrino energy at production of muon/tau neutrinos compared to electron neutrinos, resulting in a higher (on average) neutrino cross-section and reconstruction efficiency.
Finally, as shown in Fig.~\ref{fig:events}, Earth matter effects are expected to have a mild effect on the event rate in all cases. The $\nu_e$ flux is left unchanged in the normal ordering, while Earth matter effects modify slightly the neutronization burst peak in the IO case. The total number of events becomes $R=1206, 1214, 1260, 1200$ for IO and $\cos\theta = -0.3,-0.5,-0.7,-1$, respectively.
\section{Neutrino mass sensitivity} \label{sec:likelihood}
In order to compute the DUNE sensitivity to the neutrino mass, we adopt an ``unbinned'' maximum likelihood method similar to the one in \cite{Pagliaroli:2010ik}.
We start by generating many DUNE toy experiment datasets (a few hundred, typically) for each neutrino oscillation and SN distance scenario, and assuming massless neutrinos. For each dataset, the time/energy information of the $R$ generated events is sampled following the parametrization of Eq.~\ref{eq:rate_DUNE_fun}, and events are sorted in time-ascending order.
Furthermore, we assume a $10\%$ fractional energy resolution in our $\mathcal{O}$(10~MeV) energy range of interest, see~\cite{DUNE:2020zfm}, and smear the neutrino energy of each generated event accordingly. We assume perfect time resolution for our studies. On the one hand, DUNE's photon detection system provides a time resolution better than 1~$\mu$s~\cite{DUNE:2020zfm}, implying a completely negligible smearing effect. On the other hand, even in the more conservative case of non-perfect matching between TPC and optical flash information, the DUNE charge readout alone yields a time resolution of order 1~ms~\cite{DUNE:2020ypp}. While not completely negligible, the time smearing is expected to have a small impact also in this case, considering the typical 25~ms duration of the SN neutronization burst.
Once events are generated for each DUNE dataset, we proceed with our minimization procedure. The two free parameters constrained in our fit are an offset time $t_\text{off}$ between the moment when the earliest SN burst neutrino reaches the Earth and the detection of the first event $i=1$, and the neutrino mass $m_\nu$. The fitted emission times $t_{i,fit}$ for each event $i$ depend on these two fit parameters as follows:
\begin{equation}
\label{eq:emission_t}
t_{i,fit} = \delta t_i - \Delta t_{i}(m_\nu) + t_\text{off}\,,
\end{equation}
where $\delta t_i $ is the time at which the neutrino interaction $i$ is measured in DUNE (with the convention that $\delta t_1\equiv 0$ for the first detected event), $\Delta t_i(m_\nu)$ is the delay induced by the non-zero neutrino mass (see Eq.~\ref{eq:delay}), and $t_\text{off}$ is the offset time. We do not include any free parameter describing the SN emission model uncertainties in our fit.
By neglecting backgrounds and all the constant (irrelevant) factors, our likelihood function $\mathcal{L}$ \cite{Pagliaroli:2008ur} reads as
\begin{equation}
\label{eq:likelihood_fun}
\mathcal{L}(m_{\nu},t_\text{off}) = \prod_{i=1}^{R}\int R(t_{i,fit},E)\,G_i(E)\mathop{}\!\mathrm{d} E~,
\end{equation}
\noindent where $G_i$ is a Gaussian distribution with mean $E_i$ and sigma $0.1E_i$, accounting for energy resolution. The estimation of the $m_\nu$ fit parameter is done by marginalizing over the nuisance parameter $t_\text{off}$. For each fixed $m_\nu$ value, we minimize the following $\chi^2$ function:
\begin{equation}
\label{eq:chi2_fun}
\chi^2(m_{\nu}) = -2 \log(\mathcal{L}(m_{\nu},t_\text{off,best}))~,
\end{equation}
\noindent where $\mathcal{L}(m_{\nu},t_\text{off,best})$ indicates the maximum likelihood at this particular $m_\nu$ value.
The final step in our analysis is the combination of all datasets for the same neutrino oscillation and SN distance scenario, to evaluate the impact of statistical fluctuations. For each $m_\nu$ value, we compute the mean and the standard deviation of all toy dataset $\chi^2$ values. In order to estimate the allowed range in $m_\nu$, the $\Delta\chi^2$ difference between all mean $\chi^2$ values and the global mean $\chi^2$ minimum is computed. The mean 95\% CL sensitivity to $m_\nu$ is then defined as the largest $m_\nu$ value satisfying $\Delta \chi^2<3.84$. The $\pm 1\sigma$ uncertainty on the 95\% CL $m_\nu$ sensitivity can be computed similarly, including into the $\Delta\chi^2$ evaluation also the contribution from the standard deviation of all toy dataset $\chi^2$ values.
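A schematic Python implementation of this profiling procedure for a single toy dataset could look as follows. This is only a sketch: \texttt{rate(t, E)} is a hypothetical placeholder for the interaction rate of Eq.~\ref{eq:rate_DUNE_fun} (it must accept an array of energies), and the integration grid and the bounds on $t_\text{off}$ are assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

KPC_M, C_MS = 3.086e19, 2.998e8       # kpc in meters, c in m/s

def delay(m_nu_eV, E_MeV, D_kpc=10.0):
    # Time-of-flight delay of Eq. (delay), in seconds
    return D_kpc*KPC_M/(2*C_MS)*(m_nu_eV/(E_MeV*1e6))**2

def neg2logL(m_nu, t_off, dt_obs, E_obs, rate):
    # -2 log L of Eq. (likelihood_fun) for one toy dataset
    val = 0.0
    for dt_i, E_i in zip(dt_obs, E_obs):
        t_fit = dt_i - delay(m_nu, E_i) + t_off     # Eq. (emission_t)
        E = np.linspace(max(0.6*E_i, 0.1), 1.4*E_i, 200)
        G = np.exp(-0.5*((E - E_i)/(0.1*E_i))**2)   # 10% energy resolution
        G /= np.trapz(G, E)
        val += -2.0*np.log(np.trapz(rate(t_fit, E)*G, E) + 1e-300)
    return val

def chi2(m_nu, dt_obs, E_obs, rate):
    # chi^2(m_nu) of Eq. (chi2_fun): profile over the nuisance t_off
    res = minimize_scalar(lambda t: neg2logL(m_nu, t, dt_obs, E_obs, rate),
                          bounds=(0.0, 0.1), method='bounded')
    return res.fun
\end{verbatim}
The mean $\Delta\chi^2(m_\nu)$ curves discussed next are then obtained by repeating this evaluation over all toy datasets and subtracting the global minimum.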
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{chi2_10kpc.pdf}
\caption{\label{fig:chi2}$\Delta\chi^2(m_\nu)$ profiles as a function of neutrino mass $m_\nu$, for DUNE-generated samples assuming massless neutrinos and a SN distance of 10~kpc. We show the no-oscillations case together with the results for NO and IO. The mean sensitivities and their $\pm 1\sigma$ uncertainties are shown with solid lines and filled bands, respectively. The horizontal dotted line depicts the $95\%$~CL threshold.}
\end{center}
\end{figure}
\begin{table}
\centering
\caption{Mean and standard deviation of the $95\%$~CL sensitivity on neutrino mass from a sample of DUNE SN datasets at $D=10$~kpc, for different neutrino oscillation scenarios. For the IO case, we give sensitivities for different zenith angles $\theta$.}
\label{tab:m_nu_mass_bounds}
\begin{tabular}{@{\extracolsep{0.5cm}}ccc@{\extracolsep{0cm}}}
\toprule
Neutrino mass ordering & $\cos\theta$ & $m_\nu$(eV) \\
\midrule
No oscillations & $0$ & $0.51^{+0.20}_{-0.20}$ \\
\midrule
Normal Ordering & $0$ & $2.01^{+0.69}_{-0.55}$ \\
\midrule
\multirow{5}*{Inverted Ordering} & $0$ & $0.91^{+0.31}_{-0.33}$ \\
& $-0.3$ & $0.85^{+0.33}_{-0.30}$ \\
& $-0.5$ & $0.88^{+0.29}_{-0.33}$ \\
& $-0.7$ & $0.91^{+0.30}_{-0.32}$ \\
& $-1$ & $0.87^{+0.32}_{-0.28}$ \\
\bottomrule
\end{tabular}
\end{table}
Our statistical procedure, and its results for a SN distance of $D=10$~kpc, can be seen in Fig.~\ref{fig:chi2}. The $\Delta\chi^2$ profiles as a function of neutrino mass are shown for no oscillations, and for oscillations in the SN environment assuming either NO or IO. Earth matter effects are neglected in all cases shown in the figure. When Earth matter effects are included as previously described, only the IO expectation is affected. Table~\ref{tab:m_nu_mass_bounds} reports our results on the mean and standard deviation of the $m_{\nu}$ sensitivity values for different $\cos\theta$ values, that is, for different angular locations of the SN with respect to the Earth.
As can be seen from Fig.~\ref{fig:chi2} and Tab.~\ref{tab:m_nu_mass_bounds}, 95\% CL sensitivities in the 0.5--2.0~eV range are expected. The best results, with sub-eV reach, are expected for the no oscillations and IO scenarios. Despite having the largest overall event statistics, $R=1372$, the NO scenario yields the worst reach among the three cases, of order 2.0~eV. This result clearly indicates the importance of the shape information, in particular of the sharp neutronization burst time structure visible in Fig.~\ref{fig:events} only for the no oscillations and IO cases. Table~\ref{tab:m_nu_mass_bounds} also shows that oscillations in the Earth's interior barely affect the neutrino mass sensitivity.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{mass_sensitivity_comparison_errbar.pdf}
\caption{\label{fig:distance}Dependence of the $95\%$~CL neutrino mass sensitivity with the distance $D$ from Earth at which the SN explodes. The mean and standard deviation of the expected sensitivity values are shown with solid lines and filled bands, respectively.}
\end{center}
\end{figure}
Figure~\ref{fig:distance} shows how the $95\%$~CL sensitivity on the neutrino mass varies with the SN distance $D$. Both the mean and the standard deviation of the expected sensitivity values are shown. In all scenarios, the sensitivities to $m_\nu$ worsen by about a factor of 2 as the SN distance increases from 5 to 25~kpc. As is well known, as the distance $D$ increases, the reduced event rate ($R\propto 1/D^2$) tends to be compensated by the increased time delays for a given $m_\nu$ ($\Delta t_i(m_\nu)\propto D$). Our analysis shows that this compensation is only partial, and better sensitivities are obtained for nearby SNe.
\section{Conclusions} \label{sec:conclusions}
The capability to detect the electron neutrino flux component from a core-collapse SN in our galactic neighborhood makes large liquid argon detectors powerful observatories to obtain constraints on the absolute value of neutrino mass via time of flight measurements.
Exploiting the signal coming from charged-current interactions of $\nu_e$ with argon nuclei, a 0.9~eV sensitivity on the absolute value of neutrino mass has been obtained in DUNE for the inverted ordering (IO) of neutrino masses, a SN distance of 10~kpc and at 95\% CL. The sensitivity is expected to be significantly worse in the normal ordering (NO) scenario, 2.0~eV for the same SN distance and confidence level. The sensitivity difference between the two orderings demonstrates the benefit of detecting the $\nu_e$ neutronization burst, whose sharp time structure would be almost entirely suppressed in NO while it should be clearly observable in DUNE if the mass ordering is IO. The mild effects of oscillations induced by the Earth matter, affecting only the inverted mass ordering, and of the SN distance from Earth, have been studied. The DUNE sensitivity reach appears to be competitive with both laboratory-based direct neutrino mass experiments (such as KATRIN) and next-generation SN observatories primarily sensitive to the $\bar{\nu}_e$ flux component (such as Hyper-Kamiokande and JUNO).
\begin{acknowledgments}
This work has been supported by the Spanish grants FPA2017-85985-P, PROMETEO/2019/083 and PROMETEO/2021/087, and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019/860881-HIDDeN). The work of FC is supported by GVA Grant No. CDEIGENT/2020/003.
\end{acknowledgments}
|
\section{Introduction}
Spin-orbit interaction (SOI) plays an important role in the widely
studied spin-related effects and spintronic devices. In the latter
it can be either directly utilized to create spatial separation of
the spin-polarized charge carriers or indirectly influence the device
performance through the spin-decoherence time. In 2D structures two
kinds of SOI are known to be of the most importance, namely Rashba
and Dresselhaus mechanisms. The first one characterized by parameter
$\alpha$ is due to the structure inversion asymmetry (SIA) while the
second one characterized by $\beta$ is due to the bulk inversion
asymmetry (BIA). Both contributions reveal themselves most clearly
when the values of $\alpha$ and $\beta$ are comparable.
In this case a number of interesting effects occur: the electron
energy spectrum becomes strongly anisotropic \cite{AnisotrSpectrum},
the electron spin relaxation rate becomes dependent on the spin
orientation in the plane of the quantum well
\cite{AverkievObserved}, and a magnetic breakdown should be observed in
the Shubnikov--de Haas effect\cite{magn}. The energy spectrum
splitting due to SOI can be observed in rather well-developed
experiments such as those based on the Shubnikov--de Haas effect. However,
these experiments can hardly disentangle the partial contributions of
the two mechanisms, leaving the determination of the relation between
$\alpha$ and $\beta$ a more challenging task. At the same
time, in some important cases spin relaxation time $\tau_s$ and spin
polarization strongly depend on the $\frac{\alpha}{\beta}$ ratio. In
this paper we consider the tunneling between 2D electron layers,
which turns out to be sensitive to the relation between Rashba and
Dresselhaus contributions. The specific feature of the tunneling in
the system under consideration is that the energy and in-plane
momentum conservation put tight restrictions on the tunneling.
Without SOI the tunneling conductance exhibits delta function-like
maximum at zero bias broadened by elastic scattering in the layers
\cite{MacDonald}, and fluctuations of the layers width
\cite{VaksoFluctuations}. Such a behavior was indeed observed in a
number of experiments \cite{Eisenstein,Turner,Dubrovski}. Spin-orbit
interaction splits the electron spectra into two subbands in each
layer. In that case energy and momentum conservation can be fulfilled for
the tunneling between opposite subbands of the layers at a finite
voltage corresponding to the subband splitting. However, if the
parameters of SOI are equal for left and right layers, the tunneling
remains prohibited due to orthogonality of the appropriate spinor
eigenstates. In \cite{Raichev} it was pointed out that this
restriction can also be eliminated if Rashba parameters are
different for the two layers. A structure design was proposed
\cite{Raikh} where exactly opposite values of the Rashba parameters
result from the built-in electric field in the left layer being
opposite to that in the right layer. Because the SOI of Rashba type
is proportional to the electric field, this would result in
$\alpha^R=-\alpha^L$, where $\alpha^L$ and $\alpha^R$ are the Rashba
parameters for the left and right layers respectively. In this case
the
peak of the conductance should occur at the voltage $U_0$ corresponding
to the energy of SOI: $eU_0=\pm2\alpha k_F$, where $k_F$ is Fermi
wavevector. In this paper we consider arbitrary Rashba and
Dresselhaus contributions and show how qualitatively different
situations can be realized depending on their relative impact. In
some cases the structure of the electron eigenstates suppresses
tunneling at every voltage. In this case scattering is important, as it
restores the features of the current-voltage characteristic containing
information about the SOI parameters. Finally, the parameters $\alpha$
and $\beta$ can be obtained in a tunneling experiment which, unlike
other spin-related experiments, requires neither a magnetic field nor
polarized light.
\section{Calculations}
We consider two 2D electron layers separated by a potential barrier at
zero temperature (see Fig.~\ref{fig:layers}). We shall consider only
one level of size quantization and a barrier that is not too narrow, so that
the electron wavefunctions in the left and right layers overlap
weakly.
The system can be described by the phenomenological tunneling Hamiltonian \cite{MacDonald,MacDonald2,VaksoFluctuations}
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=180pt \epsfbox[30 530 500 760]{fig1.eps}
\caption{\label{fig:layers} Energy diagram of two 2D electron
layers.}
\end{figure}
\begin{equation}
\label{HT0} H=H_{0}^L+H_{0}^R+H_T,
\end{equation}
where $H_{0}^L,H_{0}^R$ are the partial Hamiltonians for the left
and right layers respectively, and $H_T$ is the tunneling term. Taking into
account the elastic scattering and SOI in the layers, the partial
Hamiltonians and the tunneling term have the following form in
the second quantization representation:
\begin{equation}
\label{eqH}
\begin{array}{l}
H_{0}^l = \sum\limits_{k,\sigma} {\varepsilon^l_{k} c^{l+}_{k\sigma}
c^l_{k\sigma } } + \sum\limits_{k,k',\sigma} {V^l_{kk'} c^{l+}_{k\sigma}c^l_{k'\sigma }} + H^l_{SO} \\
H_T = \sum\limits_{k,k',\sigma,\sigma'} {T_{kk'\sigma\sigma'}\left( {c^{L+}_{k\sigma} c^{R}_{k'\sigma'} + c^{R+}_{k'\sigma'} c^L_{k\sigma} } \right)}, \\
\end{array}
\end{equation}
Here index $l$ is used for the layer designation and can take the
values $l=R$ for the right layer, $l=L$ for the left layer. By $k$
here and further throughout the paper we denote the wavevector
aligned parallel to the layer planes, $\sigma$ denotes the spin
polarization and can take the values $\sigma=\pm 1/2$.
$\varepsilon_k^l$ is the energy of an electron in the layer $l$
having in-plane wavevector $k$. It can be expressed as:
\begin{equation}
\label{spectrum}
\varepsilon _k^l = \varepsilon+\varepsilon_0^l+\Delta^l,
\end{equation}
where $\varepsilon=\frac{\hbar^2k^2}{2m}$, $m$ being the electron's
effective mass, and $\varepsilon_0^l$ and $\Delta^l$ are the size
quantization energy and the energy shift due to the external voltage for
the layer $l$. We shall also use the value $\Delta^{ll'}$ defined
as
$\Delta^{ll'}=(\Delta^l-\Delta^{l'})+(\varepsilon_0^l-\varepsilon_0^{l'})$.
Similar
notation will be used for spin polarization denoted by indices $\sigma$, $\sigma'$.
The second term in the Hamiltonian (\ref{eqH}), $V_{kk'}^l$, is the matrix element of the scattering operator.
We consider only elastic scattering. The tunneling
term $H_T$ in (\ref{eqH}) is described by the tunneling constant
$T_{kk'\sigma\sigma'}$, which
has the meaning of the splitting of the size quantization levels due to
the wavefunction overlap. By lowercase $t$ we shall denote the
overlap integral itself. Our consideration is valid only for the
case of weak overlap, i.e. $t\ll1$. Parametrically $T\sim
t\varepsilon_F$, where $\varepsilon_F$ is the electron Fermi
energy. The term $H^{l}_{SO}$ describes the spin-orbit part of the
Hamiltonian:
\begin{equation}
\label{eqSOH}
\hat{H}^l_{SO}=\alpha^l \left( \bm{\sigma} \times \bm{k}
\right)_z + \beta^{l} \left( {\sigma _x k_x - \sigma _y k_y }
\right),
\end{equation}
where $\sigma_i$ are the Pauli matrices, $\alpha^l,\beta^l$ are
respectively the parameters of Rashba and Dresselhaus interactions
for the layer $l$. In the secondary quantization representation:
\begin{eqnarray}
\hat {H}_{SO}^l =\alpha^l \sum\limits_k {\left( {k_y
-ik_x } \right)c_{k\sigma }^{l+} c_{k\sigma '}^l +} \left( {k_y
+ik_x }
\right)c_{k\sigma '}^{l+} c_{k,\sigma }^l \nonumber \\
+\beta^l \sum\limits_k
{\left( {k_x -ik_y } \right)c_{k\sigma }^{l+} c_{k\sigma '}^l +}
\left( {k_x +ik_y } \right)c_{k\sigma '}^{l+} c_{k\sigma }^l
\label{eqSOHc}
\end{eqnarray}
The operator of the tunneling current can be expressed as
\cite{MacDonald}:
\begin{equation}
\label{current0}
\hat{I} = \frac{{ie}}{\hbar
}\sum\limits_{k,k',\sigma,\sigma'} T_{kk'\sigma\sigma'}
\left(\hat\rho_{kk'\sigma\sigma'}^{RL}-\hat\rho_{k'k\sigma'\sigma}^{LR}
\right),
\end{equation}
where
$\hat\rho_{kk'\sigma\sigma'}^{ll'}=c_{k,\sigma}^{l+}c_{k',\sigma'}^{l'}$.
We shall assume that the in-plane momentum and the spin
projection are conserved in the tunneling event, so the tunneling
constant $T_{kk'\sigma\sigma'}$ has the form
$T_{kk'\sigma\sigma'}=T\delta_{kk'}\delta_{\sigma\sigma'}$, where
$\delta$ is the Kronecker delta. The tunneling current is then
given by
\begin{equation}
\label{current}
I = \frac{ie}{\hbar}
T \int dk\: \mathrm{Tr} \left( \left<\hat\rho^{RL}_{k\sigma}\right>
-\left<\hat\rho^{LR}_{k\sigma}\right>\right),
\end{equation}
where $\left<\cdot\right>$ denotes the quantum-mechanical expectation
value. For further calculations it is convenient to introduce the vector
operator
$\bm{\hat{S}}^{ll'}_{kk'}=\left\{\hat{S}_0,\bm{\hat{s}}\right\}=\left\{\mathrm{Tr}\left(\hat\rho^{ll'}_{kk'\sigma\sigma'}\right),\mathrm{Tr}\left({\bm
\sigma}\hat\rho^{ll'}_{kk'\sigma\sigma'}\right) \right\}$. This
vector fully determines the current because the latter can be
expressed through the difference
$\hat{S}^{RL}_{0k}-\hat{S}^{LR}_{0k}$. The time evolution of
$\bm{\hat{S}}^{ll'}_{kk'}$ is governed by:
\begin{equation}
\label{drodt}
\frac{d\bm{\hat{S}}_{kk'}^{ll'}}{dt}=\frac{i}{\hbar}[H,\bm{\hat{S}}_{kk'}^{ll'}]
\end{equation}
Following the standard line of reasoning \cite{Luttinger}, we assume
an adiabatic onset of the interaction with characteristic time
$w^{-1}$. We will set $w=0$ in the final expression. With this,
(\ref{drodt}) turns into:
\begin{equation}
\label{drodt0}
(\bm{\hat{S}}_{kk'}^{ll'}-\bm{\hat{S}}_{kk'}^{(0)ll'})w=\frac{i}{\hbar}[H,\bm{\hat{S}}_{kk'}^{ll'}]
\end{equation}
Here $\bm{\hat{S}}_{kk'}^{(0)ll'}$ represents the stationary
solution of (\ref{drodt}) without interaction. By interaction here
we mean the tunneling and the elastic scattering by impurities but
not the external voltage. The role of the latter is merely shifting
the layers by $eU$ on the energy scale. From the interaction defined
in this way it immediately follows that the only non-zero elements
of $\bm{\hat{S}}_{kk'}^{(0)ll'}$ are those with $l=l'$ and $k=k'$. In
further notation we will avoid duplication of the indices, i.e.
write a single $l$ instead of $ll$ and $k$ instead of $kk$:
\begin{equation}
\label{Sdiag}
\bm{\hat{S}}_{kk'}^{(0)ll'}=\bm{\hat{S}}_{k}^{(0)l}\delta_{kk'}\delta_{ll'}
\end{equation}
Using the fermion anticommutation rules
\begin{eqnarray*}
\left\{ {c_i ,c_k } \right\} = \left\{ {c_i^ + ,c_k^ + } \right\} = 0 \\
\left\{ {c_i ,c_k^ + } \right\} = \delta _{ik}
\end{eqnarray*}
the calculations performed in a way similar to \cite{Luttinger}
bring us to the following system of equations
with respect to
$\bm{\hat{S}}_{k}^{ll'}$:
\begin{eqnarray}
0= \left( {\Delta^{ll'}+i\hbar w } \right){\bf{\hat
S}}_k^{ll'} + T\left( {{\bf{\hat S}}_k^{l'} - {\bf{\hat S}}_k^l }
\right)+{\bf{M(}}k{\bf{)\hat S}}_k^{ll'} \nonumber \\
- \sum\limits_{k'} {\left( {\frac{{A_{kk'} {\bf{\hat S}}_k^{ll'} -
B_{kk'} {\bf{\hat S}}_{k'}^{ll'} }}{{ {\varepsilon' - \varepsilon
-\Delta^{ll'} } + i\hbar w}} + \frac{{B_{kk'} {\bf{\hat S}}_k^{ll'}
- A_{kk'} {\bf{\hat S}}_{k'}^{ll'} }}{{ {\varepsilon -
\varepsilon' -\Delta^{ll'} } + i\hbar w}}} \right)}
\label{system1}
\end{eqnarray}
\begin{eqnarray}
i\hbar w\left( {{\bf{\hat S}}_k^{\left( 0 \right)l} - {\bf{\hat
S}}_k^l } \right) = T\left( {{\bf{\hat S}}_k^{l'l} - {\bf{\hat
S}}_k^{ll'} } \right) + {\bf{M}}(k){\bf{\hat S}}_k^l \nonumber \\ +
\sum\limits_{k'} { {\frac{{2i\hbar wA_{kk'} \left( {{\bf{\hat
S}}_k^l - {\bf{\hat S}}_{k'}^{l'} } \right)}}{{\left( {\varepsilon'
- \varepsilon } \right)^2 + \left( {\hbar w} \right)^2 }}} },
\label{system2}
\end{eqnarray}
where $\bm{M}$ is a known matrix depending on $k$ and on the parameters of
the spin-orbit interaction in the layers. Here we also introduced the
quadratic forms of the impurity potential matrix elements:
\begin{eqnarray}
A_{kk'} \equiv \left| {V_{k'k}^{l} } \right|^2 \nonumber \\
B_{kk'} \equiv V_{k'k}^{l} V_{kk'}^{l'}
\label{correlators}
\end{eqnarray}
As (\ref{system1}) and (\ref{system2}) comprise a system of linear
integral equations, these quantities enter the expression
(\ref{current}) for the current linearly and can themselves be
averaged over the spatial distribution of the impurities. In order to
perform this averaging we assume a short-range impurity
potential:
\begin{equation}
\label{ImpuritiesPotential} V\left( r \right) = \sum\limits_a
{V_0^{} \delta \left( {r - r_a } \right)}
\end{equation}
The averaging immediately shows that the correlators
$\left<A_{kk'}\right>\equiv A$ and $\left<B_{kk'}\right>\equiv B$
have different parametrical dependence on the tunneling transparency
$t$, namely
\begin{equation}
\label{T2}
\frac{B}{A}\sim t^{2}\sim T^2
\end{equation}
We emphasize that this result holds for non-correlated distribution
of the impurities as well as for their strongly correlated
arrangement such as a thin layer of impurities placed in the middle
of the barrier. The corresponding expressions for these two cases
are given below. Index 'rand' stands for uniform impurities
distribution and 'cor' for their correlated arrangement in the
middle of the barrier $(z=0)$:
\begin{eqnarray}
{B^{rand} } = \frac{{V_0^2 n}}{W}\int {dz}
f_l ^2 (z)f_{l'} ^2 (z)\sim\frac{{V_0^2
n}}{W}\frac{{t^2 }}{d} \nonumber \\
{A^{rand} }
= \frac{{V_0^2 n}}{W}\int {dz} f_l^4\left(z\right)
\sim\frac{{V_0^2 n}}{W}\frac{1}{d}
\nonumber \\
{B^{cor} } = \frac{{V_0^2 n_s }}{W}f_l ^2 (0)f_{l'} ^2
(0)\sim\frac{{V_0^2 n_s}}{W}\frac{{t^2 }}{d}
\nonumber \\
{A^{cor} } = \frac{{V_0^2 n_s
}}{W}f_l ^4 \left( 0 \right)\sim\frac{{V_0^2 n_s}}{W}\frac{1}{d},
\label{correlators1}
\end{eqnarray}
where $n$ and $n_s$ are bulk and surface concentrations of the
impurities, $W$ is the lateral area of the layers, $d$ is the width
of the barrier and $f(z)$ is the eigenfunction corresponding to the
size quantization level, $z$ is coordinate in the direction normal
to the layers planes, $z=0$ corresponding to the middle of the
barrier\cite{Raikh}.
Unlike \cite{Raikh}, and according to (\ref{T2}), we
conclude that the correlator $\left<B_{kk'}\right>$ has to be
neglected since we are interested in calculating the
current to order $T^2$. In the method of
calculation used here this result appears quite naturally; however, it can be
similarly traced in the technique used in \cite{Raikh} (see
Appendix). For the same reason the tunneling term should be dropped
from (\ref{system2}), as it would give a second order in $T$ when
(\ref{system2}) is substituted into (\ref{system1}). According to
(\ref{correlators}), $A$ can be expressed in terms of the electron
scattering time:
\begin{equation}
\label{tau} \frac{1}{\tau } = \frac{{2\pi }}{\hbar }\nu\left\langle
{\left| {V_{kk'} } \right|^2 } \right\rangle = \frac{{2\pi
}}{\hbar }\nu A ,
\end{equation}
where $\nu$ is the 2D density of states $\nu=\frac{m}{2\pi\hbar^2}$.
By means of a Fourier transformation in the energy variable the system
(\ref{system1}),(\ref{system2}) can be reduced to a system of
linear algebraic equations. Finally ${{\bf{\hat S}}_k^{ll'} }$ can
be expressed as a function of ${{\bf{\hat S}}_k^{\left( 0 \right)l}
}$. Consequently the current (\ref{current}) becomes a function of
$\left<\hat{\rho}_{k\sigma}^{(0)R}\right>$,
$\left<\hat{\rho}_{k\sigma}^{(0)L}\right>$. For the considered case
of zero temperature:
\[
\left<\rho _{k\sigma}^{(0)l}\right> = \frac{1}{2W} \theta \left(
{\varepsilon _F^l + \Delta ^l - \varepsilon - \varepsilon _\sigma }
\right),
\]
where
\[
\varepsilon _\sigma = \pm \left| {\alpha ^l \left( {k_x - ik_y }
\right) - \beta ^l \left( {ik_x - k_y } \right)} \right|.
\]
Without loss of generality we shall consider the
case of identical layers and external voltage applied as shown in
Fig.\ref{fig:layers}:
\begin{eqnarray*}
\varepsilon_0^R=\varepsilon_0^L\\
\Delta^L=-\frac{eU}{2}, \Delta^R=+\frac{eU}{2}\\
\Delta^{RL}=-\Delta^{LR}=eU
\end{eqnarray*}
The calculations can be simplified by taking into account
two small parameters:
\begin{eqnarray}
\xi=\frac{\hbar}{\varepsilon_F\tau}\ll1 \nonumber \\
\eta=\frac{eU}{\varepsilon_F}\ll1 \label{deltaef}
\end{eqnarray}
With (\ref{deltaef}), the calculation yields the following expression for
the current:
\begin{equation}
\label{currentfinal0} I = \frac{{ie}}{{2\pi \hbar }}T^2 \nu
\int\limits_0^\infty {\int\limits_0^{2\pi } {\left( {\zeta ^L +
\zeta ^R } \right)\mathrm{Tr}\left( {\rho _\sigma ^{\left( 0
\right)R} - \rho _\sigma ^{\left( 0 \right)L} } \right)d\varepsilon
d\varphi } },
\end{equation}
where
\[
\zeta ^l = \frac{{C^l \left[ {\left( {C^l }
\right)^2 - 2bk^2 \sin2\varphi - gk^2 } \right]}}{{\mathop {\left(
{f + 2d\sin2\varphi } \right)}\nolimits^2 k^4 - 2\left( {C^l }
\right)^2 \left( {c + 2a\sin2\varphi } \right)k^2 + \left( {C^l }
\right)^4 }}, \]
\[ C^l\left(U\right) = \Delta ^l + i\frac{\hbar
}{\tau },
\]
\begin{eqnarray}
a = \alpha ^L \beta ^L + \alpha ^R \beta ^R \nonumber \\
b = \left( {\beta ^L + \beta ^R } \right)\left( {\alpha ^L + \alpha ^R } \right)\nonumber \\
c = \left( {\beta ^L } \right)^2 + \left( {\beta ^R } \right)^2 + \left( {\alpha ^L } \right)^2 + \left( {\alpha ^R } \right)^2 \nonumber \\
d = \alpha ^L \beta ^L - \alpha ^R \beta ^R \nonumber \\
f = \left( {\beta ^L } \right)^2 - \left( {\beta ^R } \right)^2 + \left( {\alpha ^L } \right)^2 - \left( {\alpha ^R } \right)^2 \nonumber \\
g = \mathop {\left( {\beta ^L + \beta ^R } \right)}\nolimits^2 + \mathop {\left( {\alpha ^L + \alpha ^R } \right)}\nolimits^2 \nonumber \\
\label{constants}
\end{eqnarray}
Parameters $a$-$g$ are various combinations of the Rashba and
Dresselhaus parameters of SOI in the layers. Both types of SOI are
known to be small in real structures so that:
\begin{equation}
\alpha k_F\ll\varepsilon_F, \; \beta k_F\ll\varepsilon_F
\end{equation}
This additional assumption together with (\ref{deltaef}) reduces
(\ref{currentfinal0}) to
\begin{equation}
\label{currentfinal} I = \frac{{ie^2 }}{{2\pi \hbar }}T^2 \nu
WU\int\limits_0^{2\pi } {\left[ {\zeta ^L \left( {\varepsilon_F }
\right) + \zeta ^R \left( {\varepsilon_F } \right)} \right]d\varphi
}
\end{equation}
The integral over $\varphi$ in (\ref{currentfinal}) can be
calculated analytically by means of complex variable integration.
However, the final result for arbitrary $\alpha^l,\beta^l$ is not
given here, as it is rather cumbersome. In the next section some
particular cases are discussed.
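As an illustration of how the final expression can be evaluated numerically, the sketch below (Python) performs the angular integral of $\zeta^L+\zeta^R$ from Eq.~(\ref{currentfinal}), with the combinations (\ref{constants}) expressed through the splittings $\alpha^l k_F$ and $\beta^l k_F$; the overall prefactor is omitted and all names are illustrative.
\begin{verbatim}
import numpy as np

def angular_current_integral(delta_L, delta_R, aL, aR, bL, bR,
                             hbar_over_tau=0.03, n_phi=2000):
    """Angular integral of zeta^L + zeta^R entering Eq. (currentfinal).
    delta_L, delta_R: energy shifts Delta^l (meV) in C^l = Delta^l + i*hbar/tau;
    aL, aR, bL, bR: SOI splittings alpha^l*k_F and beta^l*k_F (meV);
    the overall prefactor e^2 T^2 nu W U / (2 pi hbar) is not included."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    s2 = np.sin(2.0 * phi)
    # combinations a..g of Eq. (constants), with one power of k_F absorbed
    # into every alpha and beta so that all quantities are energies (meV)
    A  = aL * bL + aR * bR
    B  = (bL + bR) * (aL + aR)
    Cc = bL**2 + bR**2 + aL**2 + aR**2
    D  = aL * bL - aR * bR
    F  = bL**2 - bR**2 + aL**2 - aR**2
    G  = (bL + bR)**2 + (aL + aR)**2

    def zeta(delta):
        C = delta + 1j * hbar_over_tau
        num = C * (C**2 - 2.0 * B * s2 - G)
        den = (F + 2.0 * D * s2)**2 - 2.0 * C**2 * (Cc + 2.0 * A * s2) + C**4
        return num / den

    integrand = 1j * (zeta(delta_L) + zeta(delta_R))
    return integrand.real.mean() * 2.0 * np.pi

# e.g., with the convention Delta^L = -eU/2, Delta^R = +eU/2 used in the text:
# shape = [angular_current_integral(-0.5*U, 0.5*U, 0.6, -0.6, 0.0, 0.0)
#          for U in np.linspace(-3.0, 3.0, 301)]   # U in mV, so eU in meV
\end{verbatim}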
\section{Results and Discussion}
The obtained general expression (\ref{currentfinal}) can be
simplified for a few particular important relations between Rashba
and Dresselhaus contributions. These calculations reveal
qualitatively different dependencies of the d.c. tunneling current
on the applied voltage.
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=210pt \epsfbox[130 350 700 800]{fig2.eps}
\caption{\label{fig:tunnelingmain}Tunneling conductance, a:
$\varepsilon_F=10$ meV, $\alpha=\beta=0$, $\tau=2\times10^{-11}$ s; b:
same as a, but $\alpha k_F=0.6$ meV; c: same as b, but
$\beta=\alpha$; d: same as c, but $\tau=2\times10^{-12}$ s.}
\end{figure}
The results of the calculations shown below were obtained using the
following parameters: Fermi energy $\varepsilon_F=10$ meV,
and the spin-orbit splitting was taken to resemble GaAs structures:
$\alpha k_F=0.6$ meV.
\subsection{No Spin-Orbit Interaction}
In the absence of SOI ($\alpha^R=\alpha^L=0$, $\beta^R=\beta^L=0$) the
energy spectrum for each of the layers forms a paraboloid:
\begin{equation}
E^l(k)=\varepsilon_0+\frac{\hbar^2k^2}{2m}\pm \frac{eU}{2}.
\end{equation}
According to our assumptions (\ref{current0}),(\ref{current}), the tunneling takes place at:
\begin{eqnarray}
E^R=E^L\nonumber \\
k^R=k^L
\label{conservation}
\end{eqnarray}
Both conditions are satisfied
only at $U=0$, so that a nonzero external voltage does not produce any current
even though it produces empty states in one layer aligned with the filled states in the other layer
(Fig.~\ref{fig:layers}). The momentum conservation restriction in (\ref{conservation}) is weakened if the electrons scatter at the impurities.
Accordingly, one should expect a nonzero tunneling current
within a finite voltage range in the vicinity of zero.
For the case considered, the general formula (\ref{currentfinal}) simplifies radically, as all the parameters (\ref{constants})
vanish. Finally we get the well-known
result \cite{MacDonald}:
\begin{equation}
\label{currentMacDonald}
I = 2e^2 T^2 \nu
WU\frac{{\frac{1}{\tau }}}{{\left( {eU} \right)^2 + \left(
{\frac{\hbar }{\tau }} \right)^2 }}. \end{equation}
The conductance defined as $G(U)=I/U$ has a Lorentzian peak at $U=0$,
turning into a delta function as $\tau\rightarrow\infty$.
This case is shown in Fig.~\ref{fig:tunnelingmain},a.
All the curves in Fig.~\ref{fig:tunnelingmain} show the results of the
calculations for very weak scattering. The corresponding scattering
time is taken to be $\tau=2\times10^{-11}$~s.
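For reference, the Lorentzian conductance of Eq.~(\ref{currentMacDonald}) can be evaluated as in the following sketch (the prefactor $2e^2T^2\nu W$ is kept as a free parameter; names are illustrative):
\begin{verbatim}
HBAR_EV_S = 6.582e-16          # reduced Planck constant in eV*s

def conductance_no_soi(U, tau, prefactor=1.0):
    """Eq. (currentMacDonald) divided by U: the Lorentzian conductance G(U)
    in the absence of SOI.  U in volts (so eU in eV), tau in seconds;
    'prefactor' stands for 2 e^2 T^2 nu W and is left symbolic."""
    gamma = HBAR_EV_S / tau    # broadening hbar/tau in eV
    return prefactor * (1.0 / tau) / (U**2 + gamma**2)
\end{verbatim}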
\subsection{Spin-Orbit Interaction of Rashba type}
The spin-orbit interaction provides
a qualitatively new possibility for the d.c. conductance to be finite at
non-zero voltage. SOI splits the spectra into two subbands. Now an
electron from the first subband of the left layer can tunnel to a
state in a second subband of the right layer. Let us consider a
particular case when only Rashba type of SOI interaction exists in
the system, its magnitude being the same in both layers, i.e.
$|\alpha^R|=|\alpha^L|\equiv \alpha$, $\beta^R=\beta^L=0$. In this
case the spectra splits into two paraboloid-like subbands "inserted"
into each other. Fig.\ref{fig:spectraRashba} shows their
cross-sections for both layers,
arrows show spin orientation. By applying a certain external
voltage $U_0=\frac{2\alpha k_F}{e}$,
$k_F=\frac{\sqrt{2m\varepsilon_F}}{\hbar}$ the layers can be shifted
on the energy scale in such a way that the cross-section of the
"outer" subband of the right layer coincides with the "inner"
subband of the left layer (see solid circles in
Fig.~\ref{fig:spectraRashba}). In that case both conditions
(\ref{conservation}) are satisfied. However, if the spin is taken
into account, the interlayer transition can still remain forbidden.
This happens if the appropriate spinor eigenstates involved in the
transition are orthogonal. This is exactly the case if
$\alpha^R=\alpha^L$; consequently the conductance behavior remains
the same as that without SOI. On the contrary, if the Rashba terms are of
opposite signs, i.e. $\alpha^R=-\alpha^L$, the spin orientations
in the "outer" subband of the right layer and the "inner" subband of
the left layer are the same, and the tunneling is allowed at a finite
voltage but forbidden at $U=0$. This situation, pointed out in
\cite{Raichev,Raikh}, should reveal itself in sharp maxima of the
conductance at $U=\pm U_0$, as shown in
Fig.~\ref{fig:tunnelingmain},b. The value of
$\alpha$ can then be immediately extracted from the position of the peak.
Evaluating (\ref{constants}) for this case, and then the
expression (\ref{currentfinal}), we obtain the following result for
the current:
\begin{equation}
\label{currentRaikh} I = \frac{{2e^2T^2 W\nu U\frac{\hbar }{\tau
}\left[ {\delta^2 + e^2 U^2 + \left( {\frac{\hbar }{\tau }}
\right)^2 } \right]}}{{\left[ {\left( {eU - \delta } \right)^2 +
\left( {\frac{\hbar }{\tau }} \right)^2 } \right]\left[ {\left( {eU
+ \delta } \right)^2 + \left( {\frac{\hbar }{\tau }} \right)^2 }
\right]}},
\end{equation}
where $\delta=2\alpha k_F$. The result is in agreement with that
derived in \cite{Raikh} for an uncorrelated spatial arrangement
of the impurities. As we have already noted, we do not take into
account the interlayer correlator $\left<B_{kk'}\right>$
(\ref{correlators}) because it is parametrically of higher order in the
tunneling overlap integral $t$ than the intralayer correlator
$\left<A_{kk'}\right>$. Therefore the result (\ref{currentRaikh}) is
valid for an arbitrary degree of correlation in the spatial distribution of
the impurities in the system.
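A direct evaluation of Eq.~(\ref{currentRaikh}) divided by $U$, reproducing the conductance maxima at $eU=\pm 2\alpha k_F$, may look as follows (prefactor kept symbolic; names are illustrative):
\begin{verbatim}
HBAR_EV_S = 6.582e-16          # reduced Planck constant in eV*s

def conductance_rashba(U, alpha_kF, tau, prefactor=1.0):
    """Eq. (currentRaikh) divided by U for alpha^R = -alpha^L, beta = 0.
    U in volts (so eU in eV), alpha_kF = alpha*k_F in eV, tau in seconds;
    'prefactor' stands for 2 e^2 T^2 W nu and is left symbolic."""
    delta = 2.0 * alpha_kF     # SOI splitting delta = 2*alpha*k_F
    gamma = HBAR_EV_S / tau
    num = gamma * (delta**2 + U**2 + gamma**2)
    den = ((U - delta)**2 + gamma**2) * ((U + delta)**2 + gamma**2)
    return prefactor * num / den

# conductance maxima appear near U = +/- 2*alpha*k_F / e,
# e.g. +/- 1.2 mV for alpha*k_F = 0.6 meV
\end{verbatim}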
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=220pt \epsfbox[130 500 700 800]{fig3.eps}
\caption{\label{fig:spectraRashba}Cross-section of electron energy spectra in the left(a) and right (b) layer for
the case
$\alpha^{L}=-\alpha^{R}, \beta^{L}=\beta^{R}=0$.}
\end{figure}
It is worth noting that the opposite case, when only the Dresselhaus type
of SOI exists in the system, leads to the same results. However, it
is rather impractical to study the case of different
Dresselhaus parameters in the layers, because this type of SOI
originates from the crystallographic asymmetry and therefore cannot
be varied if the structure composition is fixed. For this case to be
realized one needs to make the two layers of different materials.
\subsection{Both Rashba and Dresselhaus contributions}
The presence of Dresselhaus term in addition to the Rashba
interaction can further modify the tunneling conductance in a
non-trivial way. A special case occurs if the magnitude of the
Dresselhaus term is comparable to that of the Rashba term. We shall
always assume the Dresselhaus contribution being the same in both
layers: $\beta^{L}=\beta^{R}\equiv\beta$. Let us add the Dresselhaus
contribution to the previously discussed case so that
$\alpha^{L}=-\alpha^{R}\equiv\alpha,\;\alpha=\beta$. The
corresponding energy spectra and spin orientations are shown in
Fig.\ref{fig:spectraRD}. Note that while the spin orientations in
the initial and final states are orthogonal for any transition
between the layers, the spinor eigenstates are not, so that the
transitions are allowed whenever the momentum and energy
conservation requirement (\ref{conservation}) is fulfilled. It can
be also clearly seen from Fig.~\ref{fig:spectraRD} that the condition
(\ref{conservation}), meaning overlap of the cross-sections a and
b, is fulfilled only at a few points. This is unlike the previously
discussed case, where the overlap occurred within the whole
circular cross-section shown by solid lines in
Fig.~\ref{fig:spectraRashba}. One should naturally expect the
conductance for the case presently discussed to be substantially
lower. Using (\ref{currentfinal}) we arrive at a rather cumbersome
expression for the current:
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=220pt \epsfbox[130 500 700 810]{fig4.eps}
\caption{\label{fig:spectraRD}Cross-section of electron energy
spectra in the left(a) and right (b) layer for
the case
$\alpha^{R}=-\alpha^L=\beta$.}
\end{figure}
\begin{eqnarray}
I = eT^2 W\nu U\left[ {\frac{{G_ - \left(
{G_ - ^2 - \delta ^2 } \right)}}{{\sqrt {F_ - \left( {\delta ^4 +
F_ - } \right)} }} - \frac{{G_ + \left( {G_ + ^2 - \delta ^2 }
\right)}}{{\sqrt {F_ + \left( {\delta ^4 + F_ + } \right)} }}}
\right], \label{CurrentSpecial}
\end{eqnarray}
where \begin{eqnarray*}
G_ \pm = eU \pm i\frac{\hbar }{\tau } \\
F_ \pm = G_ \pm ^2 \left( {G_ \pm ^2 - 2\delta^2 } \right).
\end{eqnarray*}
Alternatively, for the case of no interaction with impurities a
precise formula for the transition rate between the layers can be
obtained by means of Fermi's golden rule. We obtained the following
expression for the current:
\begin{equation}
\label{CurrentPrecise} I = \frac{{2\pi eT^2 W}}{{\hbar \alpha ^2
}}\left( {\sqrt {K + \frac{{8m\alpha ^2 eU}}{{\hbar ^2 }}} - \sqrt
{K - \frac{{8m\alpha ^2 eU}}{{\hbar ^2 }}} } \right),
\end{equation} where
\[
K = 2\delta^2 - e^2 U^2 + \frac{{16m^2 \alpha ^4 }}{{\hbar ^4 }}
\]
Comparing the results obtained from (\ref{CurrentSpecial}) and
(\ref{CurrentPrecise}) is an additional test for the correctness of
(\ref{CurrentSpecial}). Both dependencies are presented in
Fig.\ref{fig:goldenRule} and show a good match. The same dependence
of conductance on voltage is shown in Fig.\ref{fig:tunnelingmain},c.
As can be clearly seen in the figure the conductance is indeed
substantially suppressed in the whole voltage range. This is
qualitatively different from all previously mentioned cases.
Furthermore, the role of the scattering at impurities appears to be
different as well. For the cases considered above, characterized by a
resonant behavior of the conductance, the scattering broadens the
resonances into Lorentzian peaks with a characteristic width (in voltage)
$\hbar/(e\tau)$. On the contrary, for the last case the weakening of
momentum conservation caused by the scattering increases the
conductance and restores the manifestation of SOI in its dependence
on voltage. Fig.~\ref{fig:tunnelingmain},d shows this dependence for
a shorter scattering time $\tau=2\times10^{-12}$~s. The reason for this is
the weakening of the momentum conservation requirement due to the
elastic scattering. One should now consider the overlap of the
spectra cross-sections with the circles in Fig.~\ref{fig:spectraRD} having
a certain thickness proportional to $\tau^{-1}$. This increases the
number of points at which the overlap occurs and, consequently, the
value of the tunneling current. As the calculations show, for
arbitrary $\alpha$ and $\beta$ the dependence of conductance on
voltage can exhibit various complicated shapes with a number of
maxima, being very sensitive to the relation between the two
contributions. The origin of such a sensitivity is the interference
of the angular dependencies of the spinor eigenstates in the layers.
A few examples of such interference are shown in
Fig.~\ref{fig:variousRD}, a--c. All the dependencies shown were
calculated for the scattering time $\tau=2\times10^{-12}$~s.
Fig.~\ref{fig:variousRD},a summarizes the results for all previously
discussed cases of SOI parameters, i.e. no SOI (curve 1), the case
$\alpha^R=-\alpha^L, \beta=0$ (curve 2) and
$\alpha^R=-\alpha^L=\beta$ (curve 3). In accordance with the magnitude of
$\tau$, all the resonances are broadened compared to those shown in
Fig.~\ref{fig:tunnelingmain}. Fig.~\ref{fig:variousRD},b (curve 2)
demonstrates the conductance calculated for the case
$\alpha^L=-\frac{1}{2}\alpha^R=\beta$, and Fig.~\ref{fig:variousRD},c
(curve 2) for the case $\alpha^L=\frac{1}{2}\alpha^R=\beta$. The
curve 1 corresponding to the case of no SOI is also shown in all the
figures for reference. Despite a significant scattering parameter,
all the patterns shown in Fig.~\ref{fig:variousRD} remain very
distinctive. This means that, in principle, the relation between the
Rashba and Dresselhaus contributions to SOI can be extracted merely
from the I--V curve measured in a proper tunneling experiment.
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=190pt \epsfbox[130 350 700 800]{fig5.eps}
\caption{\label{fig:goldenRule}Tunneling conductance calculated for
the case $\alpha^R=-\alpha^L=\beta$ and very weak scattering
compared to the precise result obtained through Fermi's golden rule
calculation.}
\end{figure}
\begin{figure}[h]
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\centering\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6a.eps}
\nonumber
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6b.eps}
\nonumber
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6c.eps}
\end{center}
\end{minipage}
\caption{\label{fig:variousRD}Tunneling conductance calculated for various parameters of
SOI.}
\end{figure}
\section{Summary}
As we have shown, in a system of two 2D electron layers separated
by a potential barrier SOI can reveal itself in the tunneling
current. The difference in the spin structure of the eigenstates in the
layers results in a sort of interference in the tunneling
conductance. The dependence of the tunneling conductance on voltage
appears to be very sensitive to the parameters of SOI. Thus, we
propose a way to extract the parameters of SOI and, in particular,
the relation between the Rashba and Dresselhaus contributions in a
tunneling experiment. We emphasize that, unlike many other
spin-related experiments, the manifestation of SOI studied in this
paper should be observed without an external magnetic field. Our
calculations show that the interference picture may be well resolved
for GaAs samples with scattering times down to $\sim 10^{-12}$~s;
in some special cases the scattering even restores the traces of
SOI otherwise not seen due to destructive interference.
\section*{ACKNOWLEDGEMENTS}
This work has been supported in part by RFBR, President
of RF support (grant MK-8224.2006.2) and Scientific Programs of RAS.
|
\section{Introduction}
Nano-manufacturing by polymer self-assembly has attracted increasing interest in recent
decades due to its wide applications~\cite{FINK:1998}. The numerical simulation
of this process can be used to research the mechanisms of phase separation of
polymer blends and to predict unobservable process states and unmeasurable
material properties. The mathematical principles and numerical simulation of
self-assembly via phase separation have been extensively
studied~\cite{SCOTT:1949,HSU:1973,CHEN:1994,HUANG:1995,ALTENA:1982,ZHOU:2006,TONG:2002,HE:1997,MUTHUKUMAR:1997,KARIM:1998}. However, few dedicated software
toolkits have been developed to efficiently investigate this phenomenon. \par
A computer program is developed in MATLAB for the numerical simulation of
polymer blend phase separation.
With this software, the mechanisms of the phase separation are investigated.
Also, the mobility, the gradient energy coefficient, and the surface energy
in the experiment are estimated with the numerical model. The software can
evaluate the physical parameters in the numerical model from the
real experimental parameters and material properties. The numerical simulation
results can be analyzed with the software, and the results from the simulation
software can be validated with the experimental results. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_gui_screen_shot.eps}
\caption{Screenshot of the simulation program graphical user interface.
\label{fg_gui_screenshot}}
\clearpage
\end{figure}
\section{Fundamentals}
The numerical model for phase separation of polymer blends was established and
validated against experimental results in previous work~\cite{SHANG:2010}. The free energy of an inhomogeneous mixture during phase separation
is described by the Cahn-Hilliard
equation~\cite{CAHN:1958, CAHN:1959, CAHN:1961, CAHN:1965}, as shown below,
\begin{equation}
F(C_1,C_2,C_3)=\int_{V} \left\{ f(C_1,C_2,C_3)+\displaystyle\sum_{i=1,2,3} [\kappa_i (\nabla C_i)^2] \right\} dV \label{cahn_hilliard_intro}
\end{equation}
where $f$ is the local free energy density of the homogeneous material, $C_i$
is the lattice volume fraction of component $i$, and $\kappa_i$ is the gradient
energy coefficient for component $i$. The total free energy of the system
is composed of two terms, as shown in Equation~\ref{cahn_hilliard_intro}. The
first term is the local free energy and the second is the composition-gradient
contribution to the free energy. \par
In our study, the local free energy is in the form of the Flory-Huggins equation,
which is well known and studied for polymer blends~\cite{HUANG:1999}.
The ternary Flory-Huggins equation is shown as follows,
\begin{equation}
\begin{split}
f(C_1,C_2,C_3)
&= \frac{RT}{v_{site}}\bigg( \frac{C_1}{m_1}\ln{C_1}+\frac{C_2}{m_2}\ln{C_2} + C_3\ln{C_3} \\
& +\chi_{12}C_1C_2+\chi_{13}C_1C_3+\chi_{23}C_2C_3\bigg)
\label{eq_flory_huggins_intro}
\end{split}
\end{equation}
where $R$ is the ideal gas constant, $T$ is the absolute temperature,
$v_{site}$ is the lattice site volume in the Flory-Huggins model, $m_i$ is the
degree of polymerization of component $i$, and $C_i$ is the composition for the
component $i$. \par
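For illustration, a minimal sketch of Equation~\ref{eq_flory_huggins_intro} as a function is given below (in Python for compactness; the program itself is written in MATLAB). Names are illustrative, and the third component $C_3=1-C_1-C_2$ is assumed to be the solvent with a degree of polymerization of one.
\begin{verbatim}
import numpy as np

R_GAS = 8.314          # ideal gas constant, J/(mol K)

def flory_huggins_f(c1, c2, m1, m2, chi12, chi13, chi23, T, v_site):
    """Ternary Flory-Huggins local free energy density f(C1, C2, C3),
    Eq. (eq_flory_huggins_intro), with C3 = 1 - C1 - C2 taken as the
    solvent (degree of polymerization 1)."""
    c3 = 1.0 - c1 - c2
    entropy = c1 / m1 * np.log(c1) + c2 / m2 * np.log(c2) + c3 * np.log(c3)
    interaction = chi12 * c1 * c2 + chi13 * c1 * c3 + chi23 * c2 * c3
    return R_GAS * T / v_site * (entropy + interaction)
\end{verbatim}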
There are some parameters in the numerical model which cannot be measured
directly, such as the gradient energy coefficient and the mobility. These
parameters have to be estimated from the experimental
parameters. The gradient energy coefficient, $\kappa$, determines the influence
of the composition gradient on the total free energy of the domain.
The value of $\kappa$ is difficult to measure experimentally. Though efforts
have been made by Saxena and Caneba~\cite{SAXENA:2002} to estimate the
gradient energy coefficient in a ternary polymer system from experimental
methods, few experimental results are published for our conditions. Initially,
the value of $\kappa$ can be estimated by the interaction distance between
molecules~\cite{WISE_THESIS:2003},
\begin{equation}
\kappa=\frac{RTa^2}{3v_{site}}\label{eq_gradient_energy_coefficient}
\end{equation}
where $a$ is the monomer size. A modified equation to calculate $\kappa$
considering the effects of the composition is reported by de
Gennes~\cite{GENNES:1980}.
\begin{equation}
\kappa_i=\frac{RTa^2}{36v_{site}C_i}
\end{equation}
where the subscript, $i$, represents component $i$. \par
The mobility is estimated from the diffusivity of the components. The mobility
of the polymer blends with long chains can be estimated by the equation as
follows~\cite{GENNES:1980},
\begin{equation}
M_i=\frac{C_i}{m_i}\frac{D_mN_ev_{site}}{RT}
\end{equation}
where $m_i$ is the degree of polymerization as stated before, $D_m$ is the
diffusivity of the monomer, and $N_e$ is the effective number of monomers per
entanglement length. Because of the scarce experimental data for $N_e$, a more
generalized form is employed for our study,
\begin{equation}
M=\frac{Dv_{site}}{RT}\label{eq_mobility}
\end{equation}
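The sketch below collects these parameter estimates (in Python for compactness; the program itself is in MATLAB); function names are illustrative.
\begin{verbatim}
R_GAS = 8.314          # ideal gas constant, J/(mol K)

def kappa_de_gennes(T, a, v_site, c_i):
    """Composition-dependent gradient energy coefficient,
    kappa_i = R T a^2 / (36 v_site C_i), with a the monomer size."""
    return R_GAS * T * a**2 / (36.0 * v_site * c_i)

def mobility(D, v_site, T):
    """Generalized mobility M = D v_site / (R T), Eq. (eq_mobility)."""
    return D * v_site / (R_GAS * T)
\end{verbatim}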
The time evolution of the composition of component $i$ can be represented
as~\cite{HUANG:1995,BATTACHARYYA:2003,GENNES:1980,SHANG:2009},\par
\begin{equation}
\begin{split}
\frac{\partial C_i}{\partial t}
&= M_{ii}\left[ \frac{\partial f}{\partial C_i}-\frac{\partial f}{\partial C_3}-2\kappa_{ii}\nabla^2C_i-2\kappa_{ij}\nabla^2C_j\right] \\
& +M_{ij}\left[ \frac{\partial f}{\partial C_j}-\frac{\partial f}{\partial C_3}-2\kappa_{ji}\nabla^2C_i-2\kappa_{jj}\nabla^2C_j \right]
\end{split}\label{eq6_paper2}
\end{equation}
where the subscripts $i$ and $j$ represent components 1 and 2, and\par
\begin{equation}
\begin{aligned}
M_{ii}=&(1-\overline{C}_i)^2M_i+\overline{C}_i^2\displaystyle\sum_{j\neq i}M_j\qquad i=1,2;j=1,2,3\\
M_{ij}=&-\displaystyle\sum_{i\neq j}\left[(1-\overline{C}_i)\overline{C}_j\right]M_i+\overline{C}_i\overline{C}_jM_3\qquad i=1,2;j=1,2
\end{aligned}
\end{equation}
where $\overline{C}_i$ is the average composition of component $i$. To simplify
the solution of Equation \ref{eq6_paper2}, $\kappa_{ii}=\kappa_i+\kappa_3$, and
$\kappa_{12}=\kappa_{21}=\kappa_3$, where $\kappa_i$ is the gradient energy
coefficient in Equation~\ref{eq_gradient_energy_coefficient}. \par
A detailed discussion and practical scientific cases with this software can
be found in our previous
works~\cite{SHANG:2008,SHANG:2009,SHANG:2009THESIS}.\par
\section{The MATLAB Program for Simulation of Polymer Phase Separation}
\subsection{Design Principles}
The program is developed in MATLAB m-code. A graphical user interface (GUI) is
implemented in the program, created with the MATLAB GUI editor. MATLAB is widely
used in scientific computation and has many toolkits and commonly used
mathematical functionalities. By implementing the software in MATLAB, the
efficiency of development is greatly improved. Also, by developing the program
in MATLAB, the program is cross-platform. \par
The software is designed for the daily use of simulation and experimental
scientists. The program is lightweight and programmed with high computational
efficiency, so that it can produce significant scientific results on a common PC.
It is also extensible to a parallel version or to code exploiting the high
computational performance of GPUs. The GUI is implemented so that the users can
conveniently input the experiment parameters. The results as well as the user
settings can be saved and revisited by the program. Also, for better support
of a real production environment, the simulation model is carefully designed,
so that the users provide the real processing and material parameters and the
program will produce quantitative results comparable to experimental results.
Analytical tools are also provided with the program for post-processing of the
results. \par
\subsection{Numerical Methods}
To solve the partial differential equation, the discrete cosine transform
spectral method is employed. The discrete cosine transform (DCT) is applied
to both sides of Equation~\ref{eq6_paper2}. The
partial differential equation in real space is then transformed into an
ordinary differential equation in frequency space. Once the ODE in frequency
space is solved, the results are transformed back to real
space. \par
Compared to the conventional finite element method, the spectral method is more
efficient and accurate. This method enables the program to solve the equation
in a reasonable computation time and to investigate the changes of the phase
separation over a real time span long enough to observe the phase evolution.
The spectral method is only
applied to the spatial coordinates, since the time length of the evolution is
not predictable. In fact, the real time for phase evolution is usually one of
the major concerns as a result of the simulation. \par
The DCT takes a considerable portion of the computation time. Especially in a
3-dimensional numerical model, the 3-dimensional DCT with a conventional
approach has a complexity of $O(n^3)$, which is not practical for real
applications on a PC. To overcome this computational difficulty, the code can
either be translated to C code embedded in MATLAB m-scripts, or a different
mathematical approach can be implemented. In this program, the DCT is
calculated from the fast Fourier transform (FFT), which is optimized in
MATLAB. \par
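As an illustration of the DCT-based spectral approach, the sketch below (Python, using the FFT-based DCT of SciPy; the program itself is in MATLAB) advances a single-component Cahn--Hilliard-type equation by one semi-implicit step; the actual program solves the coupled ternary system of Equation~\ref{eq6_paper2} analogously, and all names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.fft import dctn, idctn

def cahn_hilliard_step_dct(C, dt, M, kappa, dfdC, h=1.0):
    """One semi-implicit step of a single-component Cahn-Hilliard equation
    dC/dt = M * Lap( df/dC - 2*kappa*Lap(C) ), solved in DCT (cosine)
    space, which corresponds to no-flux boundary conditions."""
    ny, nx = C.shape
    kx = np.pi * np.arange(nx) / (nx * h)
    ky = np.pi * np.arange(ny) / (ny * h)
    k2 = kx[None, :]**2 + ky[:, None]**2          # -Laplacian eigenvalues
    mu_hat = dctn(dfdC(C), type=2, norm='ortho')  # local part of chem. potential
    C_hat = dctn(C, type=2, norm='ortho')
    # explicit local term, implicit (stiff) biharmonic term
    C_hat = (C_hat - dt * M * k2 * mu_hat) / (1.0 + 2.0 * dt * M * kappa * k2**2)
    return idctn(C_hat, type=2, norm='ortho')
\end{verbatim}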
\subsection{Quantitative Simulation with Real Experimental Parameters}
Many previous numerical simulations of self-assembly via polymer blend
phase separation are qualitative rather than quantitative. Their results can only
be used to provide non-quantitative suggestions for the experiments. In contrast, this
program implements a numerical model which quantitatively simulates the
experimental results with the real processing and material parameters. Most of the
inputs to this program can be directly measured or read from the instrument
or material labels. For some of the physical parameters such as $\kappa$ and
the mobility, the program can provide a starting value from the calculation with
the theoretical model. The user may need to validate the value by comparing
the simulation results to the experimental results. Eventually, a more accurate
estimation can be found with optimization methods by setting the difference
between the simulation and experiment results as the cost function. \par
Besides the parameters in the Cahn-Hilliard equation, other effects such as
evaporation, substrate functionalization, and the degree of polymerization are
also implemented using the real conditions. The final results are saved and
summarized. The characteristic length of the resulting pattern from the simulation and its
compatibility with the substrate functionalization are calculated. These
numbers can be used for comparison with the experimental results. \par
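One common way to estimate such a characteristic length, shown below as a Python sketch (the method actually implemented in the program may differ), is the inverse of the first moment of the structure factor of the composition field; names are illustrative.
\begin{verbatim}
import numpy as np

def characteristic_length(C, h=1.0):
    """Estimate the characteristic length of a phase pattern as the inverse
    of the first moment of the structure factor of the composition field."""
    dC = C - C.mean()
    S = np.abs(np.fft.fftn(dC))**2                 # structure factor
    freqs = [np.fft.fftfreq(n, d=h) for n in C.shape]
    grids = np.meshgrid(*freqs, indexing='ij')
    k = np.sqrt(sum(g**2 for g in grids))          # radial wavenumber
    k1 = (k * S).sum() / S.sum()                   # first moment <k>
    return 1.0 / k1 if k1 > 0 else np.inf
\end{verbatim}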
\subsection{Data Visualization and Results Analysis}
When running the program, messages from the software are output to the
MATLAB working console. The messages show the current state and real-time
results of the simulation. Also, when the simulation is started, the phase
pattern is plotted in a real-time plot window. Users can set the frequency
of the real-time plot and the scale factor on the domain of the contour plot in
the GUI. The results of the simulation will be saved to a folder designated by
the user. The real time plot will be saved to the result folder. The
quantitative results will be saved as several comma separated values (CSV)
text files. The result folder can be loaded into the analysis toolkit of the
program and the user can view the assessment values such as the characteristic
length, the compatibility parameters, and the composition profile wave in depth
direction with convenient plotting tools. Usually results such as the
composition profile in each direction in the domain are difficult to observe
in experimental results. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_gui_running.eps}
\caption{The simulation is running with the real time plot of the
current ternary phase morphology.
\label{fg_gui_running}}
\clearpage
\end{figure}
\section{Examples}
To demonstrate the capability of this program, example simulation cases are
shown in this paper. The results of numerical simulation have been validated
with the experimental results in our previous work~\cite{SHANG:2010}. To
compare the simulated results with a real experimental system, we directed
the morphologies of polystyrene (PS) / polyacrylic acid (PAA) blends using
chemically heterogeneous patterns. More specifically, alkanethiols with
different chemical functionalities were patterned by electron beam
lithography, which were then used to direct the assembly of PS/PAA blends
during the spin coating from their mutual solvent~\cite{MING:2009}. The
experimental conditions are implemented into the numerical simulation. The
effects such as the substrate functionalization and the solvent evaporation
are involved in the numerical modeling.
The parameters difficult to measure are acquired with the optimization methods
~\cite{SHANG:2009}.
\par
Sophisticated techniques are required to investigate the composition profile in
the depth of the polymer film~\cite{GEOGHEGAN:2003}. Since the numerical
simulation results provide the composition at each position of the
film, the composition profile change in the depth direction can be easily accessed.
To investigate the composition wave along the direction perpendicular to the
film surface, a thick film is implemented in the numerical simulation. This kind
of film is difficult to fabricate and characterize in experiments; in
the numerical modeling, however, the user only needs to change the mesh grid domain size.
The depth profiles with different substrate functionalization are shown in
Figure~\ref{fg_thick_film}, where $|f_s|$ denotes the surface energy term from the
substrate functionalization. This term will be added to the total free energy
at the interface of the polymer film and the substrate. The initial thickness
of the film is 1 mm and decreases to 8 $\mu m$ due to the evaporation of the
solvent. The thickness results are scaled by 0.5 to fit in the figures. It can
be seen that a higher surface interaction force can result in a faster substrate
directed phase separation in the film. A stronger substrate interface attraction
force can direct the phase separation morphology near the substrate surface, while
with a lower surface energy the phase separation dynamics in the bulk of the
film overcomes the substrate attraction force. It can be seen that at 30 seconds
the substrate functionalization has little effect on the morphology at the
substrate surface. Also, a checkerboard structure can be seen near the
substrate surface with a higher surface energy~\cite{KARIM:1998}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_thick_film.eps}
\caption{The phase separation in a thick film. \label{fg_thick_film}}
\clearpage
\end{figure}
To investigate the effects of a more complicated pattern, a larger domain is
simulated. The pattern applied on the substrate surface is
shown in Figure~\ref{fg_chn_pattern}. The substrate pattern is designed to
investigate the effects of various shapes and contains components such as
squares, circles, and dead-end lines in different sizes. The initial surface
dimensions of the model are changed to 12~$\mu$m$\times$12~$\mu$m. The initial
thickness of the film is 1~mm and shrinks during the solvent evaporation. The
mesh in the modelling contains 384$\times$384$\times$16 elements. The average composition
ratio of PS/PAA is changed to 38/62 to match the pattern. The result patterns
from the simulation can be seen in Figure~\ref{fg_complicated_patterns}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_chn_pattern.eps}
\caption{The substrate pattern with complicated features.
\label{fg_chn_pattern}}
\clearpage
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_complicated_patterns.eps}
\caption{The effects of complicated substrate patterns.
\label{fg_complicated_patterns}}
\clearpage
\end{figure}
It can be seen that in a larger domain with complicated substrate patterns, the
attraction factor has to be increased to obtain a better replication. In
general, the increase of the attraction factor will increase the refinement of
the pattern according to the substrate pattern. But since the substrate pattern
has geometrical features of different sizes, the attraction factor has to be
strong enough to force the intrinsic phase separation, with its unified
characteristic length, to match the substrate pattern at different sizes. This
would be the main challenge for the replication of complicated patterns. It has
been reported by Ming et al.~\cite{MING:2009} that the addition of the
copolymer can improve the refinement of the final patterns in experiments. The
reason is that the PAA-b-PS block copolymer will concentrate in the interface
of the PS and PAA domains in the phase separation, therefore decreasing the
mixing free energy. Fundamentally, the addition of the block copolymer
increased the miscibility of the two polymers. To simulate these phenomena, the
Flory-Huggins interaction parameter is decreased from 0.22 to 0.1 to increase
the miscibility of PS/PAA in the modelling. The result pattern is also shown in
Figure~\ref{fg_complicated_patterns}, in comparison to the cases without the
addition of block copolymers. It can be seen that the refinement of the phase
separated pattern is improved by the addition of the block copolymer. The $C_s$
values of the phase separation with the complicated pattern are measured and plotted
in Figure~\ref{fg_cs_complicated_patterns}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_cs_complicated_patterns.eps}
\caption{The effects of complicated substrate patterns. \label{fg_cs_complicated_patterns}}
\clearpage
\end{figure}
An assessment parameter, the compatibility parameter $C_s$, is introduced to
evaluate the replication of the morphology with respect to the substrate pattern; a
higher $C_s$ value denotes a better
replication of the polymer film morphology according to the substrate pattern.
It can be seen in Figure~\ref{fg_cs_complicated_patterns} that the $C_s$ value
for the system with block copolymer is 7.69E-01, which is higher than the
system without the block copolymer when attraction forces are the same. The
decrease of the Flory-Huggins interaction parameter increases the miscibility
of the polymers, which will decrease the miscibility gap of the polymers, as
can be seen in Equation~\ref{eq_flory_huggins_intro}. The two phase at
equilibrium will be less concentrated in different types of polymer. This is an
issue may need to be concerned when the interaction parameter of the two
polymers is changed. \par
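The exact definition of $C_s$ used by the program is not reproduced here. As a purely illustrative sketch, one plausible way to score how well a simulated composition field replicates the substrate pattern is a rescaled pixel-wise correlation, as in the following Python fragment; the field names and the mapping to $[0,1]$ are assumptions and not the program's actual definition.
\begin{verbatim}
import numpy as np

def compatibility(morphology, substrate):
    # Hypothetical compatibility score: Pearson correlation between the
    # composition field and the substrate pattern, rescaled to [0, 1].
    m = (morphology - morphology.mean()) / morphology.std()
    s = (substrate - substrate.mean()) / substrate.std()
    return 0.5 * (float(np.mean(m * s)) + 1.0)

rng = np.random.default_rng(0)
phi = rng.random((128, 128))       # stand-in for the final PS/PAA field
pattern = rng.random((128, 128))   # stand-in for the substrate pattern
print("C_s = %.3f" % compatibility(phi, pattern))
\end{verbatim}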
\section{Conclusion}
A computer program for the simulation of polymer self-assembly by phase
separation is introduced. The program is developed in MATLAB m-code and is
designed to assist scientists in real working environments. It can simulate
experimental results quantitatively using real experimental parameters, and
physical parameters that are difficult to measure, such as the gradient energy
coefficient and the mobility, can be estimated with it. The program provides a
graphical user interface and analytical toolkits. It can support research into
polymer phase separation mechanisms and dynamics with high efficiency,
convenient usage, quantitative results analysis, and validated reliability.
\section{Acknowledgement}
The authors would like to thank Liang Fang and Ming Wei for their help with the
experimental procedures. The authors also appreciate the valuable suggestions
and comments from other users and testers of this program. This project is part
of the research at the Center for High-rate Nanomanufacturing, sponsored by the
National Science Foundation (grant number NSF-0425826).
\bibliographystyle{unsrt}
|
\section{Introduction}
Sunspot oscillations are a significant phenomenon observed in the solar atmosphere. The study of these oscillations started in 1969 \citep{1969SoPh....7..351B}, when non-stationary brightenings in the CaII H and K lines were discovered. These brightenings were termed umbral flashes (UFs). Furthermore, \cite{1972ApJ...178L..85Z} and \cite{1972SoPh...27...71G}, using observations in the $H\alpha$ line wing, discovered ring structures in sunspots. Those structures propagated from the umbral centre to the penumbral outer boundary with a three-minute periodicity. The authors referred to these structures as running penumbral waves (RPWs). Below, at the photosphere level, the oscillation spectrum shows a wide range of frequencies with a peak near five-minute oscillations. These frequencies are coherent, which indicates that the umbral brightness varies within this range as a whole \citep{2004A&A...424..671K}. Also, there exist low-frequency 10-40 minute components in sunspots \citep{2009A&A...505..791S, 2008ASPC..383..279B, 2013A&A...554A.146K}. Their nature has remained uncertain so far.
Observations in \cite{2002A&A...387L..13D} showed that the observed emission in magnetic loops anchored in a sunspot has an $\sim$ 172 sec periodicity, which indicates that photospheric oscillations in the form of waves can penetrate through the transition region upwards into the corona. According to \cite{1977A&A....55..239B}, low-frequency waves excited at the subphotospheric level (p-modes) propagate through natural waveguides such as concentrations of magnetic elements (e.g. sunspots and pores). Their oscillation period may be modified by the cut-off frequency mechanism. \cite{1984A&A...133..333Z} showed that oscillations with a frequency lower than the cut-off frequency fade quickly. The main factor affecting the cut-off frequency is the inclination of the field lines along which the wave propagation occurs. We can observe five-minute oscillations both in chromospheric spicules \citep{2004Natur.430..536D} and in the coronal loops of active regions \citep{2005ApJ...624L..61D, 2009ApJ...702L.168D}. Further investigations of low-frequency oscillations in the higher layers of the solar atmosphere \citep{2009ASPC..415...28W, 2009ApJ...697.1674M, 2011SoPh..272..101Y} corroborated the assumption that their emergence at such heights is a consequence of wave channelling in the inclined magnetic fields. The observed speed of the disturbances indicates the propagation of slow magneto-acoustic waves \citep{2009A&A...505..791S, 2012SoPh..279..427K}.
For high-frequency oscillations, the sources with periods shorter than three minutes are localized in the umbra, and they decrease in size as the period decreases \citep{2008SoPh..248..395S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2012ApJ...757..160J}. In the umbral central part, where the field is almost perpendicular to the solar surface and the field line bundle does not diverge, we see the footpoints of the elementary magnetic loops in the form of oscillating cells \citep{2014AstL...40..576Z}. The main mechanism that determines their power is related to the presence of the subphotospheric and chromospheric resonator in the sunspot. Outside the central part, where the field inclination starts to manifest itself, the cut-off frequency change mechanism begins to act.
Sunspot oscillations are also expressed in the form of UFs \citep{1969SoPh....7..351B, 1969SoPh....7..366W}, whose emission manifests itself most clearly in the cores of chromospheric lines. A number of papers \citep{2007PASJ...59S.631N, 2007A&A...463.1153T, 2003A&A...403..277R, 2001ApJ...552..871L, 2000Sci...288.1396S, 1983SoPh...87....7T, 1981A&A...102..147K} have studied this phenomenon. \cite{2010ApJ...722..888B} assumed that UFs are induced by upwards-propagating magneto-acoustic waves that are converted into shocks. Photospheric oscillations become more abrupt as the waves move into a medium with lower density and transform into a shock front, thus heating the ambient medium. The temperature in the UF source surroundings surpasses the ambient values by 1000 K, which results in the brightening of individual umbral sites of the order of several arcsec. On these scales, one also observes sunspot umbral magnetic field variations, although there is no visible confirmation of field line inclination variations or of changes of their common configuration throughout these processes \citep{2003A&A...403..277R}. Recent observations have shown the presence of very small jet-like spatial details of less than 0.1 Mm in the sunspot umbra. Their positions are apparently related to the footpoints of single magnetic loops, along which sunspot oscillations propagate \citep{2014ApJ...787...58Y}.
Umbral flashes are also related to the running wave phenomenon in the sunspot penumbra. This phenomenon is observed in the $H\alpha$ and He lines \citep{2007ApJ...671.1005B} and in CaII \citep{2013A&A...556A.115D} in the form of travelling spatial structures moving horizontally, radially from the umbra towards the penumbral outer boundary \citep{2000A&A...355..375T, 2003A&A...403..277R}. The waves that propagate along field lines are non-stationary, with changes in the oscillation power both in time and in space \citep{2010SoPh..266..349S}. This results in a noticeable periodic modulation of the emission by propagating three-minute waves at the footpoints of magnetic loops. A possible response to such a modulation is the emergence both of low-frequency wave trains and of individual oscillation brightness maxima as UFs.
In this study, we analysed the association between the spatial distribution of sunspot UF sources and the spatial structure of the field lines anchored in the umbra. To better understand the association between oscillation activation and flash emergence, we studied the dynamics of the three-minute oscillations in UF sources. For the spatial localization of the propagating wave fronts relative to magnetic waveguides, we used the method of pixelized wavelet filtration (PWF technique) \citep{2008SoPh..248..395S}. The paper is arranged as follows: in Section 1, we introduce the subject; in Section 2, we provide the observational data and processing methods; in Section 3, we describe the data analysis and the obtained results; in Section 4, we discuss the processes of flash evolution; and in Section 5, we present conclusions concerning the obtained results.
\section{Observations and data processing}
To study the connection between UFs and sunspot oscillations we used observational data from the Solar Dynamics Observatory (SDO/AIA) \citep{2012SoPh..275...17L} obtained with high spatial and temporal resolution. We studied four active regions with developed sunspots at the maximum of their wave activity. To obtain the spatial location of the UF sources in space and height we used the observations of January 26, 2015 (NOAA 12268, 01:00-04:00 UT), January 10, 2016 (NOAA 12480, 01:00-04:00 UT), and March 27, 2016 (NOAA 12526, 01:00-04:00 UT). A more comprehensive analysis was carried out for the observations of December 08, 2010 (NOAA 11131, 00:00-03:20 UT).
We used calibrated and centred images of the Sun (level 1.5) at various wavelengths. The observations were performed in the UV (1600 \AA) and EUV (304 \AA, 171 \AA) ranges with cadences of 24 sec and 12 sec, respectively. The pixel size was 0.6 \arcsec. The differential rotation of the investigated regions during the observations was removed using the Solar Software.
We built time-distance plots along the detected UF sources to search for a correlation between the background wavefront propagation process and the UF emergence. The precise values of the revealed oscillation periods were obtained with the Fourier method. For 2D wave processing and obtaining their time dynamics, we used the PWF technique. The spectral algorithm applied in this method enabled us to search for waves throughout the sunspot and to trace the direction of their propagation.
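As an illustration of how such a time-distance plot can be constructed from an image cube, a minimal Python sketch is given below; the cut endpoints and cube dimensions are arbitrary toy values, and the actual processing in this work relied on the Solar Software and the PWF technique.
\begin{verbatim}
import numpy as np

def time_distance(cube, p0, p1, nsamp=200):
    # Extract a time-distance plot from an image cube (t, y, x) along a
    # straight cut from pixel p0=(y0, x0) to p1=(y1, x1), nearest-pixel sampling.
    y = np.linspace(p0[0], p1[0], nsamp).round().astype(int)
    x = np.linspace(p0[1], p1[1], nsamp).round().astype(int)
    return cube[:, y, x].T           # shape (nsamp, ntime)

# Toy usage: 450 frames of 64x64 pixels (e.g. a 1600 A cube at 24 s cadence).
cube = np.random.default_rng(1).random((450, 64, 64))
td = time_distance(cube, (10, 10), (50, 40))
print(td.shape)                      # (200, 450): distance along cut vs time
\end{verbatim}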
Using the helioseismologic method to calculate the time lag of propagating three-minute wavefronts relative to each other \citep{2014A&A...569A..72S} enabled us to establish the height correspondence of the SDO/AIA temperature channels. The 1600 \AA ~channel of the ultraviolet range records the emission at the levels of the upper photosphere and transition region, with temperatures of 6000 K and 100000 K, respectively. However, the main sensitivity of the channel, and correspondingly the minimum wave lag for upward propagation, corresponds to the emission arriving from the lower atmosphere. This channel often shows dotted, fine-structure details brightening at the magnetic footpoint regions of field lines. The regions with a high concentration of field lines appear dark, particularly near sunspots and active regions. The 304 \AA ~(He II) channel shows bright regions at the level of the upper chromosphere and lower transition region, where the plasma has a high density. The characteristic temperature of the channel is about 50000 K. This channel is best suited to study various oscillation processes in the solar atmosphere, particularly in sunspots, where the power of three-minute oscillations reaches its maximum. To observe the coronal magnetic structures, we used observations at the 171 \AA ~(Fe IX) wavelength. The emission arrives from the quiet corona and from the upper transition region with a temperature of about 1\,000\,000 K.
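A minimal sketch of the lag measurement underlying this height ordering is given below: the delay of a three-minute wave between two channels is estimated from the peak of their cross-correlation. The cadence and the synthetic signals are illustrative only and do not reproduce the helioseismologic procedure of \cite{2014A&A...569A..72S} in detail.
\begin{verbatim}
import numpy as np

def wave_lag(sig_low, sig_high, cadence):
    # Delay (in seconds) of sig_high relative to sig_low from the peak of
    # their cross-correlation; positive lag means the upper channel lags.
    a = sig_low - sig_low.mean()
    b = sig_high - sig_high.mean()
    cc = np.correlate(b, a, mode="full")
    lag = np.argmax(cc) - (len(a) - 1)
    return lag * cadence

# Toy usage: a 3-min (180 s) wave sampled at 12 s cadence, delayed by 36 s.
t = np.arange(0, 3600, 12.0)
low = np.sin(2 * np.pi * t / 180.0)
high = np.sin(2 * np.pi * (t - 36.0) / 180.0)
print(wave_lag(low, high, 12.0))     # ~36 s
\end{verbatim}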
\section{Results}
We investigated the emergence of umbral short-time recurrent brightness flashes by using the unique observational capability of the SDO/AIA temperature channels to receive emission from different heights of the sunspot atmosphere. This allowed us to obtain, for the first time, information on the UF source distribution throughout an umbra and to understand their height location. To test the stability of the recurrent UF source locations and their visibility at different heights, we built variation maps for the different SDO/AIA temperature channels. These maps show the distribution of the signal variation relative to its mean value at each point of the images throughout the observational time.
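A variation map of this kind can be sketched in a few lines of Python; the normalization by the temporal mean and the logarithmic scaling follow the description above, while the cube dimensions are toy values.
\begin{verbatim}
import numpy as np

def variation_map(cube):
    # Per-pixel variation of an image cube (t, y, x): standard deviation of
    # the signal relative to its temporal mean, on a logarithmic scale.
    mean = cube.mean(axis=0)
    var = cube.std(axis=0) / np.clip(mean, 1e-12, None)
    return np.log10(np.clip(var, 1e-12, None))

cube = np.random.default_rng(2).random((900, 64, 64))  # e.g. 304 A, 12 s cadence
vmap = variation_map(cube)
print(vmap.shape)
\end{verbatim}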
\subsection{Spatial and heights location of UFs}
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig1.eps}
\end{center}
\caption{Upper panels: Snapshots of the UFs in sunspot active regions on January 26, 2015 (01:57:54 UT), January 10, 2016 (01:33:52.6 UT), and March 27, 2016 (01:49:28.6 UT) obtained by SDO/AIA (1600 \AA). The dashed black rectangles show the umbral regions. The arrows indicate the UF sources. Middle panels: The corresponding sunspot regions at 171 \AA. The original maps (contours) are overlaid on variation maps (colour background) of UV emission obtained during the observation. Asterisks denote the localization of the UF sources. Bottom panels: Scaled variation maps of the umbral regions at 1600 \AA. The small white rectangles show the UF sources.}
\label{1}
\end{figure}
Figure~\ref{1} presents a series of sunspot images and their variation maps during the emergence of separate bright UFs obtained by SDO/AIA at 1600 \AA ~and 171 \AA. The observational time was about three hours on each of the four days of observation. The number of images obtained for one day was 450 frames at a 24-sec temporal resolution. Similar images were also obtained in the 304 \AA ~and 171 \AA ~channels, where the temporal resolution was 12 seconds. The number of frames was 900. This observational material is adequate to compile, with confidence, statistics both on the UF number and on their location in the umbral area. The umbral regions are shown by the dashed squares. To increase the visibility of the weak umbral brightening sources, we used a logarithmic scale. This enabled us to record weak processes of the umbral background wavefront propagation and to study their association with the UF emergence. This procedure was applied to all the studied SDO/AIA temperature channels. It allowed us to obtain time cubes of images and to make films in which the dynamics of the umbral emission intensity are presented.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig2.eps}
\end{center}
\caption{Variation maps of umbral UV emission in the different SDO/AIA temperature channels (1600 \AA, 304 \AA~ and 171 \AA) obtained during the 00:00-03:20 UT observation of NOAA 11131 on December 08, 2010. Squares with numerals indicate the positions of the observed UF sources. The arrows show the scanning direction used to obtain the time-distance plots. The dashed circle outlines schematically the umbral boundary. The variation intensity is presented by colours on a logarithmic scale.}
\label{2}
\end{figure}
Watching and studying frame by frame the films obtained at a variety of ultraviolet wavelengths showed the presence of two dynamic components in the sunspots. The first is related to a continuous propagation of the background three-minute oscillations in the umbra and of longer periodicities in the penumbra. This component is visible with the naked eye in the form of wavefronts propagating towards the penumbra from a pulsing source located in the sunspot centre. This source agrees well with the centre of the spiral wavefront propagation described previously in \cite{2014A&A...569A..72S} for the December 08, 2010 event. The other component is related to short-time brightenings of separate parts of the propagating fronts and to the emergence of small-angular-size details as UF sources.
We can see on the variation maps at 1600 \AA ~(Fig.~\ref{1}, bottom panels) that the UF sources appear as local brightenings with different localizations, intensities, and shapes, located at the umbral periphery. There are both bright point sources and extended sources with different spatial orientations. Some are localized near the light bridge, for example on January 10, 2016. This type of intensity variation was described in \cite{2014ApJ...792...41Y}. Watching the obtained films showed that the fast processes of UF brightening mainly appear at the same umbral site. Also, they manifest themselves both as individual pulses and as series of modulated pulsations.
When we compare the spatial locations of the bright variation points inside the umbra at 1600 \AA ~and 171 \AA, we see that the UF sources coincide well with the footpoints of coronal loops anchored in the umbra of the sunspots (Fig.~\ref{1}, middle panels). The variation maps at coronal heights mainly show elongated details, which can be interpreted as magnetic loops along which waves propagate from the bottom layers of the sunspot atmosphere to the corona. The maxima of the wave variation are distributed along the loops as bright elongated details. The main behaviour of the oscillation sources at separate periods is determined by the cut-off frequency.
The UF source visibility varies depending on the height of the ultraviolet emission generation. We can observe some of the flashes at all heights; the others manifest themselves only lower, at the photospheric level. The angular size of the UF sources varies from flash to flash, revealing itself as a point or as an extended source.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig3.eps}
\end{center}
\caption{Snapshots of the narrowband maps of umbral region NOAA 11131 with 3-min periodicity on December 08, 2010. The left panel shows
the localization of the stable source of the local UFs at 1600 \AA ~(00:22:17 UT). The right panel shows the position of the bright sources at 304 \AA ~(00:22:32 UT), which ride the expanding 3-min spiral wave fronts as background UFs. The dashed circle outlines the umbral boundary. The arrows show the positions of the UF sources.}
\label{3}
\end{figure}
Figure~\ref{2} shows the variation maps obtained at the 1600 \AA, 304 \AA, and 171 \AA ~wavelengths on December 08, 2010. One can see that the brightness variation distribution shows an inhomogeneous structure in the umbra, whose value depends on the SDO/AIA recording channel. Below, at the upper photosphere level (1600 \AA), there is a well-defined umbra indicated by the dashed circle. These umbral features have a lower level of emission variation. Against this background, sources with both point and extended shapes stand out.
We found eight UF sources within the umbral boundary. The source size varies from 2 to 8 \arcsec. These sources are mainly located at the periphery, near the sunspot umbral boundary. When moving upwards to the transition region level (304 \AA), we observe the disappearance of the point UF sources (No. 1-4) and an increase in the brightness of the extended UF sources (No. 5-8). There is an increase in the emission variation, and accordingly the umbral brightness increases owing to the boost of the background three-minute oscillations. Higher, in the corona (171 \AA), we see that, along with the UF sources visible below, extended details appear that spatially coincide with the magnetic loops. Propagation of the background three-minute waves along these loops contributes mainly to the increase in the emission variation.
For the UF-type short-time processes, the maximal brightness is reached lower, at the photosphere level (1600 \AA). When comparing the emission variations of the three-minute background component within different SDO/AIA temperature channels, the maximal value is reached at the transition region level (304 \AA).
The obtained variation maps show the values of the signal variance of both the periodic and non-periodic components. To isolate only the periodic signal, we constructed a series of narrowband maps with 3-min signal periodicity in space and time using the PWF technique. Figure \ref{3} shows the obtained snapshots of narrowband oscillation maps (positive half-periods) in the SDO/AIA temperature channels at 1600 \AA, 00:22:17 UT and 304 \AA, 00:22:45 UT. These times correspond to the appearance of the maximum brightness in UF source N5. We see that at the 1600 \AA ~wavelength there is only one bright, local UF source associated with periodic oscillations in a limited spatial area. Its position almost does not change with time. At the transition region level (304 \AA), we see the wave fronts as an evolving spiral with the pulse source in the centre of the umbra. Similar dynamics of the wave fronts was discussed in \cite{2014A&A...569A..72S}. Contours highlight the details of the fronts whose brightness exceeds 50 \% of the maximum value in time. As the waves propagate from the umbral centre to its boundary, these details continuously appear and disappear, producing the short-term brightening of separate parts of the fronts as background UFs. On the variation maps, these changes are connected with the background brightening.
To understand how the UF sources are related to the umbral magnetic structures, we compared their spatial positions with the coronal loops seen in the UV emission (SDO/AIA, 171 \AA) and with the magnetic field structure of this active region previously described in \cite{2012ApJ...756...35R}. Because the considered sunspot is the leading one in the group, the magnetic field configuration shows a well-defined east-west asymmetry. The magnetic field lines anchored in the eastern part of the sunspot are much lower and more compact than the field lines anchored in the western part of the sunspot.
When considering the UF source positions (Fig.~\ref{2}, 1600 \AA), we notice that the detected point UF sources (numbered 1-4) are localized in the umbral western part, near the footpoints of large magnetic loops. The more extended sources (numbered 5-8) are related to the eastern part and are located near the footpoints of the compact loops connecting the sunspot with its tail part. The size of the extended UF sources is about 7-10 \arcsec, and that of the point UFs is about 2.5 \arcsec.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig4.eps}
\end{center}
\caption{Time-distance plots along the N5 UF source obtained by SDO/AIA in the 1600 \AA ~(left panel) and 304 \AA ~(right panel) temperature channels on December 08, 2010. The periodic brightness changes are the 3-minute oscillation wavefronts. The arrows show the UFs. The horizontal dashed lines indicate the umbra/penumbra border. The 1D spatial coordinates are in arcsec, the time in UT.}
\label{4}
\end{figure}
\subsection{Time dynamics of UFs on December 08, 2010}
More comprehensive analysis of the time dynamics for wave processes was performed for the sunspot active region NOAA 11131 on December 08, 2010. The wave processes inside the umbra were intensively studied by \cite{2012A&A...539A..23S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2014AstL...40..576Z}.
The compact sources of maximal variation detected in Fig.~\ref{2} were studied to reveal the existence of flash and/or oscillation activity. For this, we scanned each of the sources at 1600 \AA ~and 304 \AA ~and built the time-distance plots. The arrows show the UF source scan directions.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig5.eps}
\end{center}
\caption{Time dynamics of the UV emission for the N2 and N6 sources at 1600 \AA. The arrows show the maximum emission of the UFs. Time in UT.}
\label{5}
\end{figure}
Figure~\ref{4} presents an example of the obtained time-distance plots at 1600 \AA ~(left panel) and at 304 \AA ~(right panel) for the N5 extended source. We see that throughout the entire observational time there are broad background three-minute brightness variations in the umbra that smoothly transition into five-minute oscillations at the boundary of the umbra and penumbra, shown by the dashed line. This type of partial brightening of wave fronts during propagation in the umbra as UFs was described in \cite{2014ApJ...792...41Y}. Most clearly, these UFs are exhibited at the level of the transition region at 304 \AA ~(Fig.~\ref{4}, right panel). These oscillations also exist lower, at the level of the upper photosphere (1600 \AA). Against their background, we note a series of periodically recurrent local UFs of various power. The arrows in Fig.~\ref{4} indicate separate pulses (left panel). The spatial position of the flashes coincides with the maximal brightness of the N5 extended source. The fine spatio-temporal structure of the UF sources also coincides with the brightenings of the three-minute oscillation background wavefronts.
When comparing the flash peak values in the lower and upper sunspot atmosphere, we note that UFs have a shorter duration at the level of the photosphere than at the level of the transition region. Low-frequency modulation of the three-minute oscillations occurs. The brightness change at 304 \AA ~occurs smoothly, without well-defined peaks. During flashes, brightenings of the 3-minute wavefronts occur in the source. The brightness contrast decreases as the height of the UF observation increases. One may assume that UFs and the background three-minute oscillations have an identical nature in the form of an increase in wave activity within the magnetic loops, where their propagation occurs on different time and spatial scales.
To compare the time profiles of the brightness variation within different UF sources at one wavelength, we used cuts along the spatial coordinates with the maximal brightness on the time-distance plots (Fig.~\ref{4}). The profiles for each UF source were obtained. Fig.~\ref{5} shows an example of the brightness change for the N2 and N6 sources at the level of the upper photosphere (1600 \AA), where the UF visibility is maximal.
One can see that, along with the well-defined three-minute oscillations (Fig.~\ref{5}, left panel), there also exist pulse events as UFs. Their number and duration depend on the flash source. Thus, we only observed individual flashes during the three-hour interval of observations for the sources numbered 1 through 4. At the same time, on the profiles of sources 5-8, we note series of flashes with different amplitudes and durations (Fig.~\ref{5}, right panel).
Comparing the shapes of the revealed sources in Fig.~\ref{2} with the corresponding profiles in Fig.~\ref{5} showed that the emergence of rare individual UFs is common for the point sources. The extended UF sources are related to series of periodically recurring pulses of different amplitude, about 4-14 flashes during the observations. Comparing the peak amplitudes of the various UF sources revealed that the brightness change in the point sources is almost five times smaller than that of the extended ones.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig6.eps}
\end{center}
\caption{Time dynamics of the N5 UF source in various SDO/AIA channels: 1600 \AA ~(left panel) and 171 \AA ~(right panel). The blue lines show the brightness changes recorded during the flashes. The red lines show the time profiles of the filtered 3-minute oscillations. The numerals denote the oscillation train numbers. Bottom panels: Fourier spectra of the UF signals for the corresponding SDO/AIA channels.}
\label{6}
\end{figure}
\subsubsection{Relation between wave dynamics and UFs}
Based on the obtained 1D time-distance plots for each source (Fig.~\ref{4}), in which the relation between the oscillating 3-minute component and the UF emergence is well traced, we performed a spectral analysis of the time profiles using the fast Fourier transform (FFT) and the PWF technique. We applied the Fourier transform to provide a good spectral resolution, and the PWF technique to obtain the spatio-temporal structure of the wavefronts propagating in the UF sources.
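For illustration, a minimal Python sketch of this two-step analysis (an FFT power spectrum followed by a simple narrowband filter around the three-minute period) is shown below. It is a plain Fourier band-pass, not the PWF technique itself, and the period band and toy signal are assumptions.
\begin{verbatim}
import numpy as np

def band_filter(signal, cadence, p_lo=150.0, p_hi=210.0):
    # FFT power spectrum plus a narrowband filter keeping only periods
    # between p_lo and p_hi seconds (around the 3-minute band).
    n = len(signal)
    freq = np.fft.rfftfreq(n, d=cadence)
    spec = np.fft.rfft(signal - signal.mean())
    power = np.abs(spec) ** 2
    period = 1.0 / np.maximum(freq, 1e-12)
    keep = (freq > 0) & (period >= p_lo) & (period <= p_hi)
    filtered = np.fft.irfft(np.where(keep, spec, 0.0), n=n)
    return freq, power, filtered

# Toy usage: 3-min plus 13-min components sampled at 24 s (1600 A cadence).
t = np.arange(0, 2400, 24.0)
sig = np.sin(2 * np.pi * t / 180.0) + 0.5 * np.sin(2 * np.pi * t / 780.0)
freq, power, f3 = band_filter(sig, 24.0)
print(freq[np.argmax(power[1:]) + 1])   # dominant frequency, ~1/180 Hz
\end{verbatim}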
Figure~\ref{6} shows an example of the oscillations detected in the N5 extended source over the 00:10-00:50 UT observational period, when UFs emerged. We can see the profiles with sharp UFs at 1600 \AA. At the corona level, at 171 \AA, there are stable 3-min oscillations without spikes. This served as the main criterion for studying the spectral behaviour of the filtered 3-min oscillations at 171 \AA ~and for their comparison with the original signal at 1600 \AA. In this case the spectral power is not distorted by sharp jumps in the signals.
One can see that at the level of the upper photosphere (Fig.~\ref{6}a, 1600 \AA, blue lines) there exist periodic brightness changes in the UV emission. These changes take the shape of a UF series, where the UFs are exhibited as a sequence of low-frequency trains of higher frequency oscillations. Those higher frequency oscillations are particularly pronounced in the higher coronal layers of the sunspot atmosphere at 171 \AA ~(Fig.~\ref{6}b). The Fourier spectrum shows the existence of significant harmonics. These harmonics are related to an $\sim$ 3-5-minute periodicity and to the $\sim$ 13-min low-frequency oscillations (Fig.~\ref{6}c,d).
To trace the time dynamics of the detected periodicity, we performed a wavelet filtration of the series in the period band near three minutes. We found four trains of high-frequency oscillations, numbered in Fig.~\ref{6}a. If one compares the behaviour of the filtered three-minute signal (red lines) and the UF emergence (blue lines), it is apparent that the train maxima coincide with the UF brightness maxima. A complex UF time profile (in the form of a series of variable peaks) is related to the existence of oscillations with different amplitudes, phases, and lifetimes in the trains.
When comparing the oscillations in UFs, one can see (Fig.~\ref{6}) that the low-frequency trains are well visible in the lower atmosphere. Their power decreases in the upper atmosphere. This is well traced in the Fourier spectra of the signals for different height levels (Fig.~\ref{6}c,d). We note an inverse dependence between the harmonic powers. At the level of the upper photosphere, the low-frequency modulation is maximal at a low level of the 3-minute harmonic. In contrast, in the corona, there is a pronounced peak of 3-minute oscillations with a minimal value of the $\sim$ 13-minute component power.
Increasing oscillations in the source led to the formation of compact brightenings in the form of UFs on the time-distance plot (Fig.~\ref{4}, left panel). As the low-frequency oscillation power decreases, a smooth increase occurs at the corona level in the high-frequency three-minute component, in the form of brightenings of separate details of the wavefront (Fig.~\ref{4}, right panel). The mean UF duration for extended sources was $\sim$ 3.7 minutes. This value is close to one period of the three-minute oscillations at their maximal power.
To test the obtained association between UFs and oscillations, we calculated the correlation coefficients between the original signal and the three-minute filtered signal in various SDO/AIA channels. There is a direct correlation between the three-minute oscillation power and the UF power. The maximal value of the correlation coefficient is at 1600 \AA, and this value varies within the 0.65-0.85 range for different flash sources.
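This correlation check can be sketched as follows, assuming the original and filtered profiles have already been resampled onto a common time grid; the toy series below are placeholders.
\begin{verbatim}
import numpy as np

def pearson(x, y):
    # Pearson correlation coefficient between two equally sampled series.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

rng = np.random.default_rng(3)
orig = rng.random(450)                      # stand-in for the 1600 A profile
filt = 0.7 * orig + 0.3 * rng.random(450)   # stand-in for the filtered signal
print("r = %.2f" % pearson(orig, filt))
\end{verbatim}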
One may assume that the obtained association between the increase in the three-minute oscillations and the UF emergence is characteristic not only of the detected N5 source but is also present in all the detected sources. To test this statement, we calculated the narrowband three-minute oscillation power variations in the N7 and N8 sources above, at the corona level (171 \AA), and compared these variations with the UF emergence in the integral signal below, at the photosphere level (1600 \AA). The observational interval was 00:00-03:20 UT.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig7.eps}
\end{center}
\caption{Amplitude variations of the N7 and N8 extended sources of UFs at 1600 \AA ~and 171 \AA ~temperature channels. Blue lines show the profiles of the original signal at 1600 \AA. Red lines show the 3-min oscillation power at 171 \AA.}
\label{7}
\end{figure}
Figure~\ref{7} shows the time profiles of the signals in the N7 and N8 extended sources and the corresponding variation of the oscillation power in the corona. Apparently, in the sources at the upper photosphere level (blue lines, 1600 \AA), there are recurrent UFs of different amplitude. As in the case of the N5 source, the bulk of the UF peak values are accompanied by an increase in the three-minute oscillation low-frequency trains at the corona level (red lines, 171 \AA). There is a well-defined correlation between the signals. Thus, over 01:20-03:20 UT, the emergence of ``step-like'' signals at the photosphere level, with their gradual steepening and the emergence of UF pulses, is followed by a smoothly varying increase in the power of the three-minute oscillation trains in the corona.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig8.eps}
\end{center}
\caption{Amplitude variations of the point N1 and N3 UF sources. Green lines show the original signal at 1600 \AA; blue lines present the signal at 171 \AA. Red lines show the mean power of the 3-min oscillations.}
\label{8}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=14.0 cm]{Fig9.eps}
\end{center}
\caption{Snapshots of the spatial distribution of travelling wave fronts during the UF for the N5 extended source. The duration of the 3-min wave propagation along a magnetic waveguide is about one period. The observational wavelength is 1600 \AA. A continuous line represents a positive half-period of the propagating waves, and the dashed line outlines the negative half-period. The background image is the brightness distribution of the source at the time of the flash maximum. The minimum of the negative half-period is indicated in green. The time resolution is 24 sec.}
\label{9}
\end{figure*}
For the N1-N4 point sources, only single pulses with a low intensity level were observed. For these sources, we compared the mean-level variations of the coronal three-minute oscillation power with the moments of emergence of the single UF burst peaks at the photosphere level. Fig.~\ref{8} shows the original signal profiles at varying height levels (green lines for 1600 \AA, blue lines for 171 \AA) with the superposition of the three-minute oscillation mean power (red lines, 171 \AA). Apparently, the moments of the short flash emergence below the sunspot coincide with the three-minute oscillation power maxima above. Here, we note a sequence in the signal evolution similar to that of the extended sources. The difference is in the duration of the flashes. Thus, for the N1 source (02:36:15 UT), the UF duration was $\sim$ 1.5 minutes; for N2 (03:07:30 UT), about 1.1 minutes; for N3 (01:01:30 UT), about 1.0 minute; and for N4 (03:12:00 UT), about 1.1 minutes. The mean UF duration for the point sources was $\sim$ 1.2 minutes.
\subsubsection{Wave propagation in UF sources}
To study the narrowband wave propagation over the UF source space, we used the PWF technique. Fig.~\ref{9} shows the time sequence of the UV emission wavefront images (SDO/AIA, 1600 \AA) obtained for the N5 source during the second train of the three-minute oscillations (00:18:00 - 00:20:48 UT). The temporal resolution was 24 sec. The positive half-period of the oscillation is shown by the continuous contours, the negative one is outlined by the dashed contours. The basis is the source image at the instant of the UF maximum at 00:20 UT.
Comparing the obtained images (Fig.~\ref{9}) with the profile of the UF maximal brightness variation (Fig.~\ref{6}a), we can clearly see that the brightness increase is accompanied by the onset of wave propagation along the detected direction, which coincides in shape with the source of maximal UF emission. These motion directions towards the penumbra, in turn, coincide with the footpoint of the magnetic loop along which the waves propagate. There are recurrent instances when the fronts emerge at the same limited site of the umbra. The beginning of the N5 extended source coincides in space with the pulsing centre of the spirally expanding three-minute waves. One may assume that the wave source coincides with the footpoint of the magnetic bundle that diverges in the upper atmosphere. Separate spiral arms rotate anti-clockwise. These background waves were studied in \cite{2014A&A...569A..72S} for this active region.
Presumably, the propagation of spiral-shaped waves (Fig.~\ref{3}, 304 \AA) is the initiator of the wave increase in separate magnetic loops. In this case, the bulk of the bright fronts propagates towards the bright extended UF sources. The projected wave propagation velocities along the waveguide lie within the 20-30 km/s interval. These values agree with the slow magneto-acoustic wave propagation velocity in the sunspot.
For the different numbered low-frequency trains of the UFs in the N5 source (Fig.~\ref{6}a, 1600 \AA), the maximal brightness was located in various parts of the magnetic waveguide, and it varied with time. Each UF series with an $\sim$ 10-13 minute duration was accompanied by an increase in the low-frequency trains of the 3-minute waves. There are differences between the wave trains. One observes UFs when both propagating and standing waves are visible throughout one train. The wave velocity can vary from train to train. The waves mainly move from the sunspot centre towards its boundary.
The increase in the wave processes for the point UF sources occurs in the form of single pulses produced in an umbral site limited to several pixels. The emergence of so-called standing waves without apparent propagation is characteristic of these sources. The 2D time dynamics of the three-minute oscillation sources mainly agrees with the UF source dynamics.
\section{Discussion}
The results obtained from the SDO/AIA data showed that the investigated UF phenomenon is characteristic of all heights within the sunspot atmosphere. We see a response of the sunspot atmosphere both below, at the photosphere level, and above, at the corona level. This means that flashes represent a global process of energy release that encompasses all the layers of a sunspot umbra.
Usually, an umbra is considered a relatively quiet region compared with a penumbra. This is because the umbral magnetic field represents a vertical bundle of magnetic field lines diverging with height. The umbral field line inclination is minimal. Correspondingly, magnetic reconnection, responsible for the flash energy release, is unlikely in a homogeneous, vertical field. This conclusion indicates that there are other mechanisms for the emission increase during UFs.
A wave mechanism is an alternative explanation for this increase. It is based on the assumption that the observed brightenings in the form of UFs are short-time power increases of wave processes within separate umbral parts. This viewpoint appears plausible because the well-known three-minute umbral oscillations were revealed to propagate non-uniformly both over the sunspot area and in time \citep{2012A&A...539A..23S, 2014A&A...569A..72S}. The waves are mainly modulated by a low-frequency component in the form of $\sim$ 13-15 minute trains, and their power varies in time. The wave motion direction is determined by the spatial structure of the umbra-anchored magnetic field lines, along which slow magneto-acoustic waves propagate.
There are instances when a significant increase in the power of the three-minute oscillation trains occurs at separate footpoints of magnetic loops. These processes have an indefinite character, and the source of the next wave increase is impossible to predict. On the other hand, the magnetic loop footpoints are stable over the umbral area during a certain time period. This enables us to assume that the positions of the UF sources are probably directly related to the magnetic loop footpoints at which short-time increases in the three-minute waves are observed.
These assumptions agree well with the spatial localization of the UF sources at the umbral boundary (Fig.~\ref{1}) as well as with the difference in their shapes, i.e. extended and point-like. Umbral flash sources maintain their spatial stability for about three hours, producing UF series. On the other hand, \cite{2003A&A...403..277R} noted that some flashes are unstable both in space and in time.
In \cite{2014ApJ...792...41Y}, the authors showed that the UFs visible on time-distance plots occur at random locations without a well-established occurrence rate. It has been established that the appearance of new UF sources is associated with trains of three-minute oscillations of much larger amplitude in the sunspot umbra. The individual UFs ride the wave fronts of umbral oscillations. A possible explanation for this is the presence in the umbra of background oscillations in the form of expanding fronts of 3-min waves and their interaction with each other. A similar type of brightening was considered in \cite{2014A&A...569A..72S}. These authors noted that the individual parts of the wave fronts, which are shaped as rings or spirals, can, during propagation along magnetic loops with different spatial configurations and through interactions with each other, lead to the appearance of diffuse brightenings with spatial instability. Such short-lived background UFs are well visible on the time-distance diagrams, constantly appear in the umbra, and do not have stable shapes and localizations in space (Fig.~\ref{3}, 304 \AA). Basically, the pulse source of such wave fronts is located in the centre of the umbra and is possibly associated with the footpoint of the magnetic bundle whose loops expand with height.
In the case of background UFs, we observed the local traces of waves that propagate along loops with different inclinations relative to the solar normal and, correspondingly, different cut-off frequencies. This forms a brightening of wave tracks, which we observed as diffuse UFs during increases in the oscillation power in selected areas of the umbra. We can also obtain the same effect from interactions between wave fronts. With height, the visibility and spatial positions of these sources are shifted in the radial direction because of the upward wave propagation.
For the local UFs discussed in our work, the sources have a small angular size with a periodic 3-min component and a stable location, both over space and height (Fig.~\ref{3}, 1600 \AA). Their appearance is associated with the maximum power of the waves propagating near the footpoints of coronal loops outside the main magnetic bundle. These loops originate at the umbral periphery. Their inclination can differ from the configuration of the main magnetic bundle.
The existence of a UF fine structure was previously assumed in \cite{2000Sci...288.1396S} and \cite{2005ApJ...635..670C} based on spectroscopic observations. Improving the angular and spatial resolution of astronomical instruments enabled such changes in UF sources to be observed directly. Thus, \cite{2009ApJ...696.1683S} used HINODE data (CaII H line) to find an umbral fine structure in the form of filamentary structures that emerged during UFs. These details were present during an oscillation increase and formed a system of extended filaments, along which the brightness varied with time in the form of travelling bright and dark details. The calculated horizontal velocity varied within 30-100 km/s.
We can assume that in the UF sources we observe projected motions (at the footpoints of magnetic field lines) of the three-minute longitudinal wavefronts propagating upwards \citep{2003ApJ...599..626B}. Depending on the starting location of the field line (the umbral centre or near its boundary) and on its inclination to the solar normal, there is a projection effect in the wave visibility. Near the sunspot boundary, one observes extended UF sources, whereas closer to the centre point sources are observed. This statement is true if we assume a symmetry of the diverging field lines relative to the umbral central part. In reality, there is often an east-west asymmetry of an active group. This asymmetry is related to the presence of the head (sunspot) and tail (floccule) parts.
The wave path length and, accordingly, the wave visibility at certain oscillation periods are determined by the cut-off frequency \citep{1977A&A....55..239B, 1984A&A...133..333Z}. The path also varies as the cosine of the magnetic waveguide inclination angle. Point UF sources with a minimal angular size are related to the footpoints of vertical magnetic field lines. Large, extended UF sources are related to the footpoints of field lines with a large inclination to the solar normal.
Comparing the positions of the sources in NOAA 11131 at various heights showed a good correspondence of the UF sources underneath (1600 \AA) with the footpoints of coronal loops (171 \AA), which play the role of magnetic waveguides for the three-minute waves. For the low-lying loops in the eastern part of NOAA 11131 that connect the sunspot with the tail part, we see extended UF sources at their footpoints. For the western part, we see point sources.
The revealed interconnection between the UF emergence and the increase in the three-minute wave trains indicates that we can consider UFs as events in which peak increases of the oscillation trains at the footpoints of magnetic loops manifest themselves. There is a direct dependence between the oscillation power and the flash brightness at maximal correlation. The higher the amplitude of the three-minute waves, the more powerful the flash. This dependence concerns both extended and point UF sources. The UF emission maximum coincides with the maximum of the three-minute oscillations within one wave train.
The 2D spectral PWF analysis of the SDO/AIA image cube showed directly (Fig.~\ref{9}) that, during UFs, three-minute wave motions emerge along the detected magnetic loops towards the umbral boundary at the UF sources. The wave propagation starts at the footpoint and terminates at the level where the loop inclination angle corresponds to a cut-off frequency beyond the limits of the observed three-minute frequency band. The greater the loop inclination, the greater the projection of the UF source towards the observer, and the more wave trains (UF pulses) we can record. Correspondingly, we will observe extended, bright UF sources. In contrast to the propagating waves, so-called standing waves will be observed for point sources. An explanation for this is the projection of the waves propagating along vertical magnetic loops towards the observer. In this case, the spatial wavefronts will be observed within a spatially limited loop cross-section. Those fronts form UF sources with a small angular size.
The UF source lifetime will also be different. For point UFs, the source lifetime is about 1-2 minutes; for extended UFs, it is 3-15 minutes. The visibility of the point sources is restricted by the low integral power level of their UF emission and by the short observational time of the maximal oscillations (1-2 half-periods). For extended UF sources, we can observe a few travelling wave trains simultaneously, which intensifies their integral brightness and increases the observational time (lifetime).
\section{Conclusions}
We analysed the association between an increase of wave activity in sunspot active regions and the emergence of UFs. We used the observational data in the UV emission obtained in the various temperature channels of SDO/AIA with high spatial and temporal resolution. To detect the oscillations, we used time-distance plots and Fourier and wavelet transform spectral techniques. The results are as follows:
1) We revealed fast periodic disturbances related to the wave activity in the sunspot umbra during a three-hour observation. These disturbances correlate well with the continuous diffuse brightening of separate details of the propagating three-minute wavefronts, as described in \cite{2014ApJ...792...41Y}. Along with this, short-time emergences of small local sources having a periodic component and identified as UFs are observed.
2) We can divide the observed umbral brightenings into two types. The first type is background UFs associated with the random brightening of separate parts of wave fronts during their propagation. These UFs are observed all the time in the umbra as weak diffuse details that ride the wave fronts without stable shapes and localization in space. The second type is local UFs associated with the increase of wave activity near the footpoints of magnetic loops. These sources do not change their spatial position in time and show pronounced wave dynamics during UFs.
3) For the local UFs we revealed various types of spatial shapes of the sources. We suppose that the point sources are located at the footpoints of large magnetic loops. Their feature is activity with rare single pulses of low power and duration. The extended sources are related to the footpoints of low-lying magnetic loops with large inclinations. The feature of this source type is series of recurrent UF pulses related to propagating trains of three-minute waves. The flash power depends on the length of the wave path along which the emission is integrated. The wave path and, correspondingly, the UF source projection size are determined by the cut-off frequency.
4) The emergence of the main UF maximum is shown to coincide with the maximal value of the power of the three-minute oscillation trains in separate loops. This type of wave dynamics follows that described in \cite{2014ApJ...792...41Y} for background UFs but is localized in magnetic loops. There is a correlation between the UF emergence at the photosphere level and the increase in the power of the three-minute wave trains in the corona.
These results explicitly show the correlation between the sunspot three-minute oscillation processes and the UF emergence. These processes are a reflection of the slow magneto-acoustic wave propagation from the subphotospheric level into the corona along the inclined magnetic fields. The dynamics of the wave process power in separate magnetic loops determines the time and site of the UF source emergence. The main mechanism responsible for the observed UF parameters is the wave cut-off frequency. In the future we plan to study in more detail the relationship between the shape of the local UF sources and the inclination of the magnetic loops near whose footpoints the flashes are observed.
\begin{acknowledgements}
We are grateful to the referee for helpful and constructive comments and suggestions. The authors are grateful to the SDO/AIA teams for operating the instruments and performing the basic data reduction, and especially, for the open data policy. This work is partially supported by the Ministry of Education and Science of the Russian Federation, the Siberian Branch of the Russian Academy of Sciences
(Project II.16.3.2) and by the programme of basic research of the RAS Presidium No.28. The work is carried out as part of Goszadanie 2018, project No. 007-00163-18-00 of 12.01.2018 and supported by the Russian Foundation for Basic Research (RFBR), grants Nos. 14-0291157 and
17-52-80064 BRICS-a. The research was funded by Chinese Academy of Sciences President’s International Fellowship Initiative, Grant No. 2015VMA014.
\end{acknowledgements}
\bibliographystyle{aa}
|
\section{Introduction}
The process of $e^+e^-$ pair production by a high-energy electron in an atomic field is interesting both from the experimental and the theoretical points of view. It is important to know the cross section of this process with high accuracy for data analysis in detectors. Besides, this process gives a substantial contribution to the background in precision experiments devoted to the search for new physics. From the theoretical point of view, the cross section of electroproduction in the field of heavy atoms reveals very interesting properties of the Coulomb corrections, which are the difference between the cross section exact in the parameters of the field and that calculated in the lowest order of perturbation theory (the Born approximation).
The cross sections in the Born approximation, both differential and integrated, have been discussed in numerous papers
\cite{Bhabha2,Racah37,BKW54,MUT56,Johnson65,Brodsky66,BjCh67,Henry67,Homma74}. The Coulomb corrections to the differential cross section of high-energy electroproduction by an ultra-relativistic electron in the atomic field have been obtained only recently in our paper \cite{KM2016}. In that paper it is shown that the Coulomb corrections significantly modify the differential cross section of the process as compared with the Born result. It turns out that both effects, the exact account for the interaction of the incoming and outgoing electrons with the atomic field and the exact account for the interaction of the produced pair with the atomic field, are very important for the value of the differential cross section. On the other hand, there are many papers devoted to the calculation of $e^+e^-$ electroproduction by heavy particles (muons or nuclei) in an atomic field \cite{Nikishov82,IKSS1998,SW98,McL98,Gre99,LM2000}. In those papers, the interaction of the heavy particle with the atomic field has been neglected. In our recent paper \cite{KM2017} it has been shown that the cross section, differential over the angles of the heavy outgoing particle, changes significantly due to the exact account for the interaction of the heavy particle with the atomic field. However, the cross section integrated over these angles is not affected by this interaction. Such unusual properties of the cross section of electroproduction by a heavy particle stimulated us to perform a detailed investigation of the integrated cross section of electroproduction by an ultra-relativistic electron.
In the present paper we investigate the integrated cross section in detail, using the analytical result for the matrix element of the process obtained in our paper \cite{KM2016} with the exact account for the interaction of all charged particles with the atomic field. Our goal is to understand the relative importance of various contributions to the integrated cross section under consideration.
\section{General discussion}\label{general}
\begin{figure}[h]
\centering
\includegraphics[width=1.\linewidth]{diagrams.eps}
\caption{Diagrams $T$ (left) and $\widetilde{T}$ (right) for the contributions to the amplitude ${\cal T}$ of the process $e^-Z\to e^- e^+e^-Z$. Wavy line denotes the photon propagator, straight lines denote the wave functions in the atomic field.}
\label{fig:diagrams}
\end{figure}
The differential cross section of high-energy electroproduction by an unpolarized electron in the atomic field reads
\begin{equation}\label{eq:cs}
d\sigma=\frac{\alpha^2}{(2\pi)^8}\,d\varepsilon_3d\varepsilon_4\,d\bm p_{2\perp}\,d\bm p_{3\perp}d\bm p_{4\perp}\,\frac{1}{2}\sum_{\mu_i=\pm1}|{\cal T}_{\mu_1\mu_2\mu_3\mu_4}|^{2}\,,
\end{equation}
where $\bm p_1$ is the momentum of the initial electron, $\bm p_2$ and $\bm p_3$ are the final electron momenta, $\bm p_4$ is the positron momentum, $\mu_i=\pm 1$ corresponds to the helicity of the particle with the momentum $\bm p_i$, $\bar\mu_i=-\mu_i$, $\varepsilon_{1}=\varepsilon_{2}+\varepsilon_{3}+\varepsilon_{4}$ is the energy of the incoming electron, $\varepsilon_{i}=\sqrt{{p}_{i}^2+m^2}$, $m$ is the electron mass, and $\alpha$ is the fine-structure constant, $\hbar=c=1$. In Eq.~\eqref{eq:cs} the notation $\bm X_\perp=\bm X-(\bm X\cdot\bm \nu)\bm\nu$ for any vector $\bm X$ is used, $\bm\nu=\bm p_1/p_1$.
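To give an idea of how an integrated cross section can be obtained numerically from Eq.~\eqref{eq:cs}, the following Python sketch performs a crude uniform-sampling Monte Carlo integration over $\varepsilon_3$, $\varepsilon_4$, and the three transverse momenta. The function \texttt{toy\_amp2} is a smooth placeholder and is not the matrix element given in the Appendix; all cut-offs and sample sizes are illustrative assumptions.
\begin{verbatim}
import numpy as np

ALPHA = 1.0 / 137.035999           # fine-structure constant

def mc_cross_section(amp2, eps1, m=1.0, pt_max=5.0, nsamples=200_000, seed=0):
    # Uniform-sampling Monte Carlo estimate of the integral of Eq. (1):
    # eps3, eps4 and the three 2D transverse momenta are sampled, eps2 is
    # fixed by energy conservation, unphysical points get zero weight.
    # With m = 1 the result is in units of 1/m^2.
    rng = np.random.default_rng(seed)
    eps = rng.uniform(m, eps1 - 2.0 * m, size=(nsamples, 2))   # eps3, eps4
    pts = rng.uniform(-pt_max, pt_max, size=(nsamples, 6))     # p2t, p3t, p4t
    eps2 = eps1 - eps[:, 0] - eps[:, 1]
    ok = eps2 > m
    weights = np.where(ok, amp2(eps[:, 0], eps[:, 1], pts), 0.0)
    volume = (eps1 - 3.0 * m) ** 2 * (2.0 * pt_max) ** 6
    return ALPHA**2 / (2.0 * np.pi) ** 8 * volume * weights.mean()

def toy_amp2(eps3, eps4, pts):
    # Placeholder for (1/2) * sum over helicities of |T|^2: a smooth
    # function mimicking the suppression at large transverse momenta.
    return 1.0 / (1.0 + (pts**2).sum(axis=1)) ** 2

print(mc_cross_section(toy_amp2, eps1=100.0))
\end{verbatim}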
We have
\begin{equation}\label{TTT}
{\cal T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_2\mu_3\mu_4}-\widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}\,,
\quad \widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_3\mu_2\mu_4}(\bm p_2\leftrightarrow \bm p_3)\,,
\end{equation}
where the contributions $T$ and $\widetilde{T}$ correspond, respectively, to the left and right diagrams in Fig.~\ref{fig:diagrams}.
The amplitude $T$ has been derived in Ref.~\cite{KM2016} by means of the quasiclassical approximation \cite{KLM2016}. Its explicit form is given in the Appendix with one modification. Namely, we have introduced the parameter $\lambda$, which is equal to unity if the interaction with the atomic field of the electrons having the momenta $\bm p_1$, $\bm p_2$ in the term $T$ and $\bm p_1$, $\bm p_3$ in the term $\widetilde T$ is taken into account. The parameter $\lambda$ equals zero if one neglects this interaction. The insertion of this parameter allows us to investigate the importance of various contributions to the cross section.
First of all we note that the term $T$ is a sum of two contributions, see the Appendix,
$$T=T^{(0)}+T^{(1)}\,,$$
where $T^{(0)}$ is the contribution to the amplitude in which the produced $e^+e^-$ pair does not interact with the atomic field, while the contribution $T^{(1)}$ contains such interaction.
In other words, the term $T^{(0)}$ corresponds to bremsstrahlung of the virtual photon decaying into a free $e^+e^-$ pair. In the contribution $T^{(1)}$, the electrons with the momenta $\bm p_1$ and
$\bm p_2$ may or may not interact with the atomic field. The latter case is given by the amplitude $T^{(1)}$ at $\lambda=0$. Below we refer to the result of accounting for such interaction in the term $T^{(1)}$ as the Coulomb corrections to scattering. Note that the contribution $T^{(0)}$ vanishes at $\lambda=0$.
In the present work we are going to elucidate the following points: the relative contribution of the term $T^{(0)}$ to the cross section, the importance of the Coulomb corrections to scattering, and the importance of the interference between the amplitudes $T$ and $\widetilde{T}$ in the cross section.
We begin our analysis with the case of the differential cross section. Let us consider the quantity $S$,
\begin{equation}\label{S}
S=\sum_{\mu_i=\pm1}\Bigg|\frac{\varepsilon_1 m^4 {\cal T}_{\mu_1\mu_2\mu_3\mu_4}}{\eta (2\pi)^2}\Bigg|^2 \,,
\end{equation}
where $\eta=Z\alpha$ and $Z$ is the atomic charge number. In Fig.~\ref{dif} the dependence of $S$
on the positron transverse momentum $p_{4\perp}$ is shown for gold ($Z=79$) at some values of $\varepsilon_i$, $\bm p_{2\perp}$, and $\bm p_{3\perp}$. The solid curve is the exact result, the long-dashed curve corresponds to $\lambda=0$, the dashed curve is the result obtained without account for the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, the dash-dotted curve is the result obtained without account for the interference between $T$ and $\widetilde{T}$, and the dotted curve is the Born result (in the Born approximation $S$ is independent of $\eta$). One can see that, for the case considered, the Born result differs significantly from the exact one and that account for the interference is also very important. The contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ are noticeable but not large, and the Coulomb corrections to the contributions $T^{(1)}$ and $\widetilde{T}^{(1)}$ are essential.
The effect of screening is unimportant for the values of the parameters considered in Fig.~\ref{dif}. Note that the relative importance of the different effects under discussion for the differential cross section depends strongly on the values of $\bm p_{i}$. However, in all cases the deviation of the Born result from the exact one is substantial even for moderate values of $Z$.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{plotdif.eps}
\caption{The quantity $S$, see Eq. \eqref{S}, as a function of $p_{4\perp}/m$ for $Z=79$, $\varepsilon_1=100m$, $\varepsilon_2/\varepsilon_1=0.28$, $\varepsilon_3/\varepsilon_1=0.42$, $\varepsilon_4/\varepsilon_1=0.3$, $p_{2\perp}=1.3 m$, $p_{3\perp}=0.5 m$, $\bm p_{3\perp}$ parallel to $\bm p_{4\perp}$, and the angle between $\bm p_{2\perp}$ and $\bm p_{4\perp}$ being $\pi/2$; solid curve is the exact result, dotted curve is the Born result, dash-dotted curve is that obtained without account for the interference between $T$ and $\widetilde{T}$, the result for $\lambda=0$ is given by the long-dashed curve, and the dashed curve corresponds to the result obtained by neglecting the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif}
\end{figure}
Let us consider the cross section $d\sigma/dp_{2\perp}$, i.e., the cross section differential over the electron transverse momentum $p_{2\perp}$. This cross section for $Z=79$ and $\varepsilon_1=100 m$ is shown in the left panel of Fig.~\ref{dif2}. In this figure the solid curve is the exact result, the dotted curve is the Born result, and the long-dashed curve corresponds to $\lambda=0$.
It is seen that the exact result differs significantly from the Born one, and account for the Coulomb corrections to scattering is also essential. The importance of account for the interference between $T$ and $\widetilde{T}$, as well as for the contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$, is demonstrated in the right panel of Fig.~\ref{dif2}. There the quantity $\delta$, which is the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section, is shown. The dash-dotted curve is obtained without account for the interference between $T$ and $\widetilde{T}$, and the dashed curve is obtained without the contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$. It is seen that both effects are noticeable.
Our results are obtained under the condition $\varepsilon_i\gg m$, so the question of the limits of integration over the energies arises in the numerical calculation of $d\sigma/dp_{2\perp}$. We have examined this question and found that varying the limits of integration in the vicinity of the threshold changes the result of integration only slightly.
In any case, such a variation does not change the interplay of the various contributions to the cross sections, and we present the results obtained by integration over the whole allowed kinematical region.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{dp2new.eps}
\includegraphics[width=0.45\linewidth]{dp2_difnew.eps}
\caption{Left panel: the dependence of $d\sigma/dp_{2\perp}$ on $p_{2\perp}/m$ in units $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; solid curve is the exact result, dotted curve is the Born result, and long-dashed curve corresponds to $\lambda=0$. Right panel: the quantity $\delta$ as a function of $p_{2\perp}/m$, where $\delta$ is the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section. Dash-dotted curve is obtained without account for the interference between $T$ and $\widetilde{T}$, dashed curve is obtained without contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif2}
\end{figure}
It follows from Fig.~\ref{dif2} that the deviation of the results obtained for $\lambda=1$ from those obtained for $\lambda=0$ is
noticeable and negative in the vicinity of the peak, and small and positive in the wide region outside the peak. However, these two deviations (positive and negative) strongly compensate each other in the cross section integrated over both electron transverse momenta $\bm p_{2\perp}$ and $\bm p_{3\perp}$. This statement is illustrated in Fig.~\ref{dif4}, where
the cross section differential over the positron transverse momentum, $d\sigma/dp_{4\perp}$, is shown for $Z=79$ and $\varepsilon_1=100 m$.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{dp4new.eps}
\includegraphics[width=0.45\linewidth]{dp4_difnew.eps}
\caption{Left panel: the dependence of $d\sigma/dp_{4\perp}$ on $p_{4\perp}/m$ in units $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; solid curve is the exact result and dotted curve is the Born result. Right panel: the quantity $\delta_1$ as a function of $p_{4\perp}/m$, where $\delta_1$ is the deviation of the approximate result for $d\sigma/dp_{4\perp}$ from the exact one in units of the exact cross section. Dash-dotted curve is obtained without account for the interference between $T$ and $\widetilde{T}$, dashed curve is obtained without contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$, and long-dashed curve corresponds to $\lambda=0$.}
\label{dif4}
\end{figure}
Again, the Born result differs significantly from the exact one. It is seen that all relative deviations $\delta_1$ depicted in the right panel are noticeable. Then,
the results obtained for $\lambda=0$ and those obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ are very close to each other. This means that account for the Coulomb corrections to scattering leads to a very small shift of the integrated cross section $d\sigma/dp_{4\perp}$, in contrast to the cross section $d\sigma/dp_{2\perp}$. Such suppression is similar to that found in our recent paper \cite{KM2017} in the consideration of $e^+e^-$ pair electroproduction by a heavy charged particle in the atomic field.
Finally, let us consider the total cross section $\sigma$ of the process under consideration. The cross section $\sigma$ for $Z=79$ as a function of $\varepsilon_1/m$ is shown in the left panel of Fig.~\ref{tot}. In this figure the solid curve is the exact result, the dotted curve is the Born result, and the dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Note that a small deviation of our Born result at relatively small energies from the asymptotics of the Born result is due, first, to the uncertainty of our result related to the choice of the lower limit of integration over the energies of the produced particles and, second, to the neglect of the identity of the final electrons in Ref.~\cite{Racah37}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{total_cs.eps}
\includegraphics[width=0.45\linewidth]{total_cs_dif.eps}
\caption{Left panel: the total cross section $\sigma$ as a function of $\varepsilon_1/m$ in units $\sigma_0=\alpha^2\eta^2/m^2$ for $Z=79$; solid curve is the exact result, dotted curve is the Born result, and dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Right panel: the quantity $\delta_2$ as a function of $\varepsilon_1/m$, where $\delta_2$ is the deviation of the approximate result for $\sigma$ from the exact one in units of the exact cross section. Dash-dotted curve is obtained without account for the interference between $T$ and $\widetilde{T}$, dashed curve is obtained without contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$, and long-dashed curve corresponds to $\lambda=0$.}
\label{tot}
\end{figure}
It is seen that the exact result differs significantly from the Born one. In the right panel of Fig.~\ref{tot} we show the relative deviation $\delta_2$ of the approximate result for $\sigma$ from the exact one. The dash-dotted curve is obtained without account for the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$. The corrections to the total cross section due to the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ and to the Coulomb corrections to scattering are small even at moderate energies $\varepsilon_1$. The effect of the interference is more important at moderate energies and less important at high energies.
In our recent paper \cite{KM2016} the differential cross section of electroproduction by a relativistic electron has been derived. For the differential cross section, we have pointed out that the Coulomb corrections to scattering are most noticeable in the region $p_{2\perp}\sim \omega/\gamma$. On the basis of this statement, we have evaluated in the leading logarithmic approximation the Coulomb corrections to the total cross section, see Eq.~(33) of Ref.~\cite{KM2016}. However, as shown in the present paper, for the total cross section the contribution of the Coulomb corrections to scattering in the region $p_{2\perp}\sim \omega/\gamma$ is strongly compensated by the contribution of the Coulomb corrections to scattering in the wide region outside $p_{2\perp}\sim \omega/\gamma$. As a result, the Coulomb corrections to the total cross section derived in the leading logarithmic approximation are not affected by the account of the Coulomb corrections to scattering. This means that the coefficient in Eq.~(33) of Ref.~\cite{KM2016} should be two times smaller and equal to that in the Coulomb corrections to the total cross section of $e^+e^-$ electroproduction by a relativistic heavy particle calculated in the leading logarithmic approximation. Note that the accuracy of the result obtained for the Coulomb corrections to the total cross section is very low because in electroproduction there is a strong compensation between the leading and next-to-leading terms in the Coulomb corrections, see Ref.~\cite{LM2009}.
\section{Conclusion}
By tabulating the formula for the differential cross section of $e^+e^-$ pair electroproduction by a relativistic electron in the atomic field \cite{KM2016}, we have elucidated the importance of various contributions to the integrated cross sections of the process. It is shown that the Coulomb corrections are very important both for the differential cross section and for the integrated cross sections even for moderate values of the atomic charge number. This effect is mainly related to the Coulomb corrections to the amplitudes $T^{(1)}$ and ${\widetilde T}^{(1)}$ due to the exact account of the interaction of the produced $e^+e^-$ pair with the atomic field (the Coulomb corrections to the amplitude of $e^+e^-$ pair photoproduction by a virtual photon). There are also some other effects. For the cross section differential over the electron transverse momentum, $d\sigma/dp_{2\perp}$, the account for the interference of the amplitudes and the contribution of virtual bremsstrahlung (the contribution of the amplitudes $T^{(0)}$ and ${\widetilde T}^{(0)}$)
is noticeable. The Coulomb corrections to scattering are larger than these two effects but essentially smaller than the Coulomb corrections to the amplitude of pair photoproduction by a virtual photon. However, in the cross section differential over the positron transverse momentum, $d\sigma/dp_{4\perp}$, the interference of the amplitudes and the contribution of virtual bremsstrahlung lead to the same corrections as the effect of the Coulomb corrections to scattering, and they are of the same order as in the case of $d\sigma/dp_{2\perp}$. This means that there is a strong suppression of the effect of the Coulomb corrections to scattering in the cross section $d\sigma/dp_{4\perp}$. The relative importance of the various effects for the total cross section is the same as in the case of the cross section $d\sigma/dp_{4\perp}$.
\section*{Acknowledgement}
This work has been supported by the Russian Science Foundation (Project No. 14-50-00080). It has also been supported in part by
RFBR (Grant No. 16-02-00103).
\section*{Appendix}\label{app}
Here we present the explicit expression for the amplitude $T$, derived in Ref.~\cite{KM2016}, with one modification. Namely, since we are going to investigate the importance of the interaction with the atomic field of the electrons having the momenta $\bm p_1$ and $\bm p_2$, we introduce the parameter $\lambda$, which is equal to unity if this interaction is taken into account and equal to zero if this interaction is neglected. We write the amplitude $T$ in the form
$$T=T^{(0)}+T^{(1)}\,,\quad T^{(0)}=T^{(0)}_\parallel+T^{(0)}_\perp\,,\quad T^{(1)}=T^{(1)}_\parallel+T^{(1)}_\perp\,,$$
where the helicity amplitudes $T^{(0)}_{\perp\parallel}$ read
\begin{align}\label{T0}
&T_\perp^{(0)}=\frac{8\pi A(\bm\Delta_0)}{\omega(m^2+\zeta^2)} \Big\{\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}
\Big[\frac{\varepsilon_3}{\omega^2}(\bm s_{\mu_3}^*\cdot \bm X)(\bm s_{\mu_3}\cdot\bm\zeta)(\varepsilon_1\delta_{\mu_1\mu_3}+\varepsilon_2\delta_{\mu_1\mu_4})\nonumber\\
&-\frac{\varepsilon_4}{\omega^2}(\bm s_{\mu_4}^*\cdot \bm X)(\bm s_{\mu_4}\cdot\bm\zeta) (\varepsilon_1\delta_{\mu_1\mu_4}+\varepsilon_2\delta_{\mu_1\mu_3})\Big]\nonumber\\
&-\frac{m\mu_1}{\sqrt{2}\varepsilon_1\varepsilon_2}R\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}
(\bm s_{\mu_1}\cdot\bm\zeta)(-\varepsilon_3\delta_{\mu_1\mu_3}+\varepsilon_4\delta_{\mu_1\mu_4})\nonumber\\
&+\frac{m\mu_3}{\sqrt{2}\varepsilon_3\varepsilon_4}\delta_{\mu_1\mu_2}\delta_{\mu_3\mu_4}(\bm s_{\mu_3}^*\cdot\bm X)(\varepsilon_1\delta_{\mu_3\mu_1}+\varepsilon_2\delta_{\mu_3\bar\mu_1})
+\frac{m^2\omega^2}{2\varepsilon_1\varepsilon_2\varepsilon_3\varepsilon_4}R\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\mu_4}\delta_{\mu_1\mu_3}\Big\}\,,\nonumber\\
&T_\parallel^{(0)}=-\frac{8\pi }{\omega^2}A(\bm\Delta_0)R\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}\,.
\end{align}
Here $\mu_i=\pm 1$ corresponds to the helicity of the particle with the momentum $\bm p_i$, $\bar\mu_i=-\mu_i$, and
\begin{align}\label{T0not}
&A(\bm\Delta)=-\frac{i\lambda}{\Delta_{\perp}^2}\int d\bm r\,\exp[-i\bm\Delta\cdot\bm r-i\chi(\rho)]\bm\Delta_{\perp}\cdot\bm\nabla_\perp V(r)\,,\nonumber\\
&\chi(\rho)=\lambda\int_{-\infty}^\infty dz\,V(\sqrt{z^2+\rho^2})\,,\quad\bm\rho=\bm r_\perp\,,\quad\bm\zeta=\frac{\varepsilon_3\varepsilon_4}{\omega}\bm\theta_{34}\,,\nonumber\\
&\omega=\varepsilon_3+\varepsilon_4\,, \quad \bm\Delta_{0\perp}=\varepsilon_2\bm\theta_{21}+\varepsilon_3\bm\theta_{31}+\varepsilon_4\bm\theta_{41}\,,\nonumber\\
&\Delta_{0\parallel}=-\frac{1}{2}\left[m^2\omega\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\frac{p_{2\perp}^2}{\varepsilon_2}+ \frac{p_{3\perp}^2}{\varepsilon_3}+\frac{p_{4\perp}^2}{\varepsilon_4}\right]\,,\nonumber\\
&R=\frac{1}{d_1d_2}[\Delta^2_{0\perp} (\varepsilon_1+\varepsilon_2)+2\varepsilon_1\varepsilon_2(\bm\theta_{12}\cdot\bm\Delta_{0\perp})]\,,\nonumber\\
&\bm X=\frac{1}{d_1}(\varepsilon_3\bm\theta_{23}+\varepsilon_4\bm\theta_{24})-\frac{1}{d_2}(\varepsilon_3\bm\theta_{13}+\varepsilon_4\bm\theta_{14})\,,\nonumber\\
&d_1=m^2\omega\varepsilon_1\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\varepsilon_2\varepsilon_3\theta_{23}^2
+\varepsilon_2\varepsilon_4\theta_{24}^2+\varepsilon_3\varepsilon_4\theta_{34}^2\,,\nonumber\\
&d_2=m^2\omega\varepsilon_2\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\varepsilon_2\varepsilon_3\theta_{31}^2
+\varepsilon_2\varepsilon_4\theta_{41}^2+(\varepsilon_3\bm\theta_{31}+\varepsilon_4\bm\theta_{41})^2\,,\nonumber\\
&\bm\theta_i=\bm p_{i\perp}/p_i \,,\quad \bm\theta_{ij}=\bm\theta_{i}-\bm\theta_{j}\,,
\end{align}
with $V(r)$ being the electron potential energy in the atomic field. In the amplitude $T^{(0)}$ the interaction of the produced $e^+e^-$ pair with the atomic field is neglected, so that $T^{(0)}$ depends on the atomic potential in the same way as the bremsstrahlung amplitude, see, e.g., Ref.~\cite{LMSS2005}.
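For the reader's convenience we also give a minimal numerical sketch, in Python, of the kinematic quantities entering Eq.~\eqref{T0not}; the function name, the choice of units ($\hbar=c=1$, energies and momenta measured in units of $m$), and the assumption $\bm p_{1\perp}=0$ (the $z$ axis directed along $\bm p_1$) are ours and purely illustrative. The factor $A(\bm\Delta_0)$ is not evaluated here, since it requires the atomic potential $V(r)$.
\begin{verbatim}
import numpy as np

def kinematic_factors(eps, p_perp, m=1.0):
    # eps    : [eps1, eps2, eps3, eps4], with eps1 = eps2 + eps3 + eps4
    # p_perp : (4, 2) array of transverse momenta; p_perp[0] = 0 by the
    #          choice of the z axis along the initial electron momentum
    eps = np.asarray(eps, dtype=float)
    p_perp = np.asarray(p_perp, dtype=float)
    e1, e2, e3, e4 = eps
    omega = e3 + e4
    p = np.sqrt(eps**2 - m**2)                 # momentum magnitudes
    theta = p_perp / p[:, None]                # theta_i = p_{i,perp}/p_i
    t = lambda i, j: theta[i-1] - theta[j-1]   # theta_{ij}
    zeta = e3*e4/omega * t(3, 4)
    Delta0_perp = e2*t(2, 1) + e3*t(3, 1) + e4*t(4, 1)
    Delta0_par = -0.5*(m**2*omega*(1/(e1*e2) + 1/(e3*e4))
                       + p_perp[1] @ p_perp[1]/e2
                       + p_perp[2] @ p_perp[2]/e3
                       + p_perp[3] @ p_perp[3]/e4)
    d1 = (m**2*omega*e1*(1/(e1*e2) + 1/(e3*e4))
          + e2*e3*(t(2, 3) @ t(2, 3)) + e2*e4*(t(2, 4) @ t(2, 4))
          + e3*e4*(t(3, 4) @ t(3, 4)))
    d2 = (m**2*omega*e2*(1/(e1*e2) + 1/(e3*e4))
          + e2*e3*(t(3, 1) @ t(3, 1)) + e2*e4*(t(4, 1) @ t(4, 1))
          + (e3*t(3, 1) + e4*t(4, 1)) @ (e3*t(3, 1) + e4*t(4, 1)))
    R = ((Delta0_perp @ Delta0_perp)*(e1 + e2)
         + 2*e1*e2*(t(1, 2) @ Delta0_perp))/(d1*d2)
    X = (e3*t(2, 3) + e4*t(2, 4))/d1 - (e3*t(1, 3) + e4*t(1, 4))/d2
    return dict(omega=omega, zeta=zeta, Delta0_perp=Delta0_perp,
                Delta0_par=Delta0_par, d1=d1, d2=d2, R=R, X=X)
\end{verbatim}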
The amplitudes $T^{(1)}_{\perp\parallel}$ have the following form
\begin{align}\label{T1C}
&T_\perp^{(1)}=\frac{8i\eta}{\omega \varepsilon_1}|\Gamma(1-i\eta)|^2 \int\frac{d\bm\Delta_\perp\, A(\bm\Delta_\perp+\bm p_{2\perp})F_a(Q^2)}{Q^2 M^2\,(m^2\omega^2/\varepsilon_1^2+\Delta_\perp^2)}\left(\frac{\xi_2}{\xi_1}\right)^{i\eta}
{\cal M}\,, \nonumber\\
&{\cal M}=-\frac{\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}}{\omega} \big[ \varepsilon_1(\varepsilon_3 \delta_{\mu_1\mu_3}-\varepsilon_4 \delta_{\mu_1\mu_4})
(\bm s_{\mu_1}^*\cdot\bm \Delta_\perp)(\bm s_{\mu_1}\cdot\bm I_1)\,\nonumber\\
&+\varepsilon_2(\varepsilon_3 \delta_{\mu_1\bar\mu_3}-\varepsilon_4 \delta_{\mu_1\bar\mu_4})(\bm s_{\mu_1}\cdot\bm \Delta_\perp)(\bm s_{\mu_1}^*\cdot\bm I_1) \big]+\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}\frac{m\omega\mu_1}{\sqrt{2}\varepsilon_1 }(\varepsilon_3 \delta_{\mu_1\mu_3}-\varepsilon_4 \delta_{\mu_1\mu_4})(\bm s_{\mu_1}
\cdot\bm I_1)\nonumber\\
&+\delta_{\mu_1\mu_2}\delta_{\mu_3\mu_4}\frac{m\mu_3}{\sqrt{2}}(\varepsilon_1 \delta_{\mu_1\mu_3}+\varepsilon_2 \delta_{\mu_1\bar\mu_3})(\bm s_{\mu_3}^*\cdot\bm \Delta_\perp)I_0
-\frac{m^2\omega^2}{2\varepsilon_1}\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\mu_4}\delta_{\mu_1\mu_3}I_0\,,\nonumber\\
&T_\parallel^{(1)}=-\frac{8i\eta\varepsilon_3\varepsilon_4}{\omega^3}|\Gamma(1-i\eta)|^2 \int \frac{d\bm\Delta_\perp\, A(\bm\Delta_\perp+\bm p_{2\perp})F_a(Q^2)}{Q^2 M^2}\left(\frac{\xi_2}{\xi_1}\right)^{i\eta}\,I_0
\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}\,,
\end{align}
where $F_a(Q^2)$ is the atomic form factor,
and the following notations are used
\begin{align}\label{T1Cnot}
&M^2=m^2\Big(1+\frac{\varepsilon_3\varepsilon_4}{\varepsilon_1\varepsilon_2}\Big)
+\frac{\varepsilon_1\varepsilon_3\varepsilon_4}{\varepsilon_2\omega^2} \Delta_\perp^2\,,\quad
\bm Q_\perp=\bm \Delta_\perp-\bm p_{3\perp}-\bm p_{4\perp}\,, \nonumber\\
&Q^2= Q_\perp^2+\Delta_{0\parallel}^2\,,\quad
\bm q_1=\frac{\varepsilon_3}{\omega}\bm \Delta_\perp- \bm p_{3\perp}\,,\quad \bm q_2=
\frac{\varepsilon_4}{\omega}\bm \Delta_\perp- \bm p_{4\perp} \,,\nonumber\\
&I_0=(\xi_1-\xi_2)F(x)+(\xi_1+\xi_2-1)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\bm I_1=(\xi_1\bm q_1+\xi_2\bm q_2)F(x)+(\xi_1\bm q_1-\xi_2\bm q_2)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\xi_1=\frac{M^2}{M^2+q_1^2}\,,\quad \xi_2=\frac{M^2}{M^2+q_2^2}\,,\quad x=1-\frac{Q_\perp^2\xi_1\xi_2}{M^2}\,,\nonumber\\
&F(x)=F(i\eta,-i\eta, 1,x)\,,\quad F'(x)=\frac{\partial}{\partial x}F(x)\,,\quad \eta=Z\alpha\,.
\end{align}
Note that the parameter $\lambda$ is contained solely in the function $A(\bm\Delta)$, Eq.~\eqref{T0not}.
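The hypergeometric functions $F(x)$ and $F'(x)$ in Eq.~\eqref{T1Cnot} can be evaluated with standard libraries; the following Python sketch (the function names and the example values of $\xi_1$, $\xi_2$, $x$ are ours and purely illustrative) uses the identity $F'(x)=\eta^2\,F(1+i\eta,1-i\eta,2,x)$ and the arbitrary-precision routine {\tt mpmath.hyp2f1}.
\begin{verbatim}
import mpmath as mp

def F_and_derivative(x, eta):
    # F(x) = 2F1(i*eta, -i*eta; 1; x) and its derivative F'(x)
    a, b = 1j*eta, -1j*eta
    F = mp.hyp2f1(a, b, 1, x)
    # d/dx 2F1(a,b;c;x) = (a*b/c) 2F1(a+1,b+1;c+1;x)
    Fp = a*b*mp.hyp2f1(a + 1, b + 1, 2, x)
    return F, Fp

def I0(xi1, xi2, x, eta):
    # the quantity I0 of Eq. (T1Cnot)
    F, Fp = F_and_derivative(x, eta)
    return (xi1 - xi2)*F + (xi1 + xi2 - 1)*(1 - x)*Fp/(1j*eta)

eta = 79/137.036            # eta = Z*alpha for gold, Z = 79
print(I0(0.6, 0.4, 0.3, eta))
\end{verbatim}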
\section{Introduction}
\label{sec:introduction}
\subsection{Motivation: Road and Terrain Mapping}
\label{subsec:terrain}
There has been a steep rise of interest in the last decade among researchers in academia and the commercial sector in autonomous vehicles and self-driving cars. Although adaptive estimation has been studied for some time, applications such as terrain or road mapping continue to challenge researchers to further develop the underlying theory and algorithms in this field. These vehicles are required to sense the environment and navigate the surrounding terrain without any human intervention. The environmental sensing capability of such vehicles must enable them to navigate off-road conditions or to respond to other agents in urban settings. As a key ingredient to achieve these goals, it can be critical to have good {\em a priori} knowledge of the surrounding environment as well as of the position and orientation of the vehicle in the environment.
To collect this data for the construction of terrain maps, mobile vehicles equipped with multiple high bandwidth, high resolution imaging sensors are deployed. The mapping sensors retrieve the terrain data relative to the vehicle and navigation sensors provide georeferencing relative to a fixed coordinate system. The geospatial data, which can include the digital terrain maps acquired from these mobile mapping systems, find applications in emergency response planning and road surface monitoring. Further, to improve the ride and handling characteristic of an autonomous vehicle, it might be necessary that these digital terrain maps have accuracy on a sub-centimeter scale.
One of the main areas of improvement in current state-of-the-art terrain modeling technologies is localization. Since localization relies heavily on the quality of GPS/GNSS and IMU data, it is important to develop novel approaches that fuse the data from multiple sensors to generate the best possible estimate of the environment. Contemporary data acquisition systems used to map the environment generate scattered data sets in time and space. These data sets must be either post-processed or processed online for the construction of three dimensional terrain maps.
Figures~\ref{fig:vehicle1} and \ref{fig:vehicle2} depict a map building vehicle and trailer developed by some of the authors at Virginia Tech. The system generates experimental observations in the form of data that is scattered in time and space. These data sets have extremely high dimensionality.
Roughly 180 million scattered data points are collected per minute of data acquisition, which corresponds to a data file of roughly $\mathcal{O}(1GB)$ in size. Current algorithms and software developed in-house post-process the scattered data to generate road and terrain maps. This offline batch computing problem can take many days of computing time to complete. It remains a challenging task to derive a theory and associated algorithms that would enable adaptive or online estimation of terrain maps from such high dimensional, scattered measurements.
This paper introduces a novel theory and associated algorithms that are amenable to observations that take the form of scattered data. The key attribute of the approach is that the unknown function representing the terrain is viewed as an element of a reproducing kernel Hilbert space (RKHS). The RKHS is constructed in terms of a kernel function $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$ where $\Omega \subseteq \mathbb{R}^d$ is the domain over which scattered measurements are made.
The kernel $k$ can often be used to define a collection of radial basis functions (RBFs) $k_x(\cdot):=k(x,\cdot)$, each of which is said to be centered at some point $x\in \Omega$. For example, these RBFs might be exponentials, wavelets, or thin plate splines \cite{wendland}.
By embedding the unknown function that represents the terrain in a RKHS, the new formulation generates a system that constitutes a distributed parameter system. The unknown function, representing map terrain, is the infinite dimensional distributed parameter.
Although the study of infinite dimensional distributed parameter systems can be substantially more difficult than the study of ODEs, a key result is that stability and convergence of the approach can be established succinctly in many cases.
Much of the complexity \cite{bsdr1997,bdrr1998} associated with the construction of Gelfand triples or the analysis of infinitesimal generators and semigroups that define a distributed parameter system (DPS) can be avoided for many examples of the systems in this paper.
The kernel $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$ that defines the RKHS provides a natural collection of bases for approximate estimates of the solution that are based directly on some subset of scattered measurements $\{ x_i \}_{i=1}^n \subset \mathbb{R}^d$.
It is typical in applications to select the centers $\{x_i\}_{i=1}^n$ that locate the basis functions from some sub-sample of the locations at which the scattered data is measured. Thus, while we do not study the nuances of such methods, in this paper the formulation provides a natural framework to pose so-called ``basis adaptive methods'' such as in~\cite{dzcf2012} and the references therein.
While our formulation is motivated by this particular application, it is a general construction for framing and generalizing some conventional approaches for online adaptive estimation. This framework introduces sufficient conditions that guarantee convergence of estimates in the spatial domain $\Omega$ to the unknown function $f$. In contrast, nearly all conventional strategies consider stability and convergence in time alone for evolutions in some fixed finite dimensional space $\mathbb{R}^d \times \mathbb{R}^n$, with $n$ the number of parameters used to represent the estimate. The remainder of this paper studies the existence and uniqueness of solutions, stability, and convergence of approximate solutions for the infinite dimensional adaptive estimation problem defined over an RKHS. The paper concludes with an example of an RKHS adaptive estimation problem for a simple model of map building from vehicles. The numerical example demonstrates the rate of convergence for finite dimensional models constructed from RBF bases that are centered at a subset of scattered observations.
\begin{figure}
\centering
\includegraphics[scale=0.75]{Picture1.png}
\caption{Vehicle Terrain Measurement System, Virginia Tech}
\label{fig:vehicle1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.75]{Picture2.png}
\caption{Experimental Setup with LMI 3D GO-Locator Lasers}
\label{fig:vehicle2}
\end{figure}
\subsection{Related Research}
\label{sec:related_research}
The general theory derived in this paper has been motivated in part by the terrain mapping application discussed in Section \ref{sec:introduction}, but also by recent research in a number of fields related to estimation of nonlinear functions. In this section we briefly review some of the recent research in probabilistic or Bayesian mapping methods, nonlinear approximation and learning theory, statistics, and nonlinear regression.
\subsubsection{Bayesian and Probabilistic Mapping}
Many popular techniques adopt a probabilistic approach towards solving the localization and mapping problem in robotics. The algorithms used to solve this problem fundamentally rely on Bayesian estimation techniques like particle filters, Kalman filters, and other variants of these methods \cite{Thrun2005Probabilistic, Whyte2006SLAM1, Whyte2006SLAM2}. The computational effort required to implement these algorithms can be substantial since they involve constructing and updating maps while simultaneously tracking the relative locations of agents with respect to the environment. Over the last three decades significant progress has been made on various frontiers in terms of high-end sensing capabilities, faster data processing hardware, and robust and efficient computational algorithms \cite{Dissanayake2011Review, Dissanayake2000Computational}. However, the usual Kalman filter based approaches implemented in these applications often are required to address the inconsistency problem in estimation that arises from uncertainties in state estimates \cite{Huang2007Convergence,Julier2001Counter}. Furthermore, it is well acknowledged in the community that these methods suffer from the major drawback of `{\em closing the loop}', which refers to the ability to adaptively update the map when a region is revisited: such a capability demands large amounts of memory to store the high resolution and high bandwidth data. Moreover, it is highly nontrivial to guarantee that the uncertainties in the estimates converge to a lower bound at suboptimal rates, since matching these rates and bounds significantly constrains the evolution of the states along infeasible trajectories. While probabilistic methods, and in particular Bayesian estimation techniques, for the construction of terrain maps have flourished over the past few decades, relatively few approaches for establishing deterministic theoretical error bounds in the spatial domain of the unknown function representing the terrain have appeared.
\subsubsection{Approximation and Learning Theory}
Approximation theory has a long history, but the subtopics of most relevance to this paper include recent studies in multiresolution analysis (MRA), radial basis function (RBF) approximation and learning theory. The study of MRA techniques became popular in the late 1980's and early 1990's, and it has flourished since that time. We use only a small part of the general theory of MRAs in this paper, and we urge the interested reader to consult one of the excellent treatises on this topic for a full account. References \cite{Meyer,mallat,daubechies, dl1993} are good examples of such detailed treatments. We briefly summarize the pertinent aspects of MRA here and in Section \ref{sec:MRA}. A multiresolution analysis defines a family of nested approximation spaces $\seq{H_j}_{j\in \mathbb{N}}\subseteq H$ of an abstract space $H$ in terms of a single function $\phi$, the scaling function. The approximation space $H_j$ is defined in terms of bases that are constructed from dilates and translates $\seq{\phi_{j,k}}_{k\in \mathbb{Z}^d}$ with $\phi_{j,k}(x):=2^{jd/2}\phi(2^jx-k)$ for $x\in \mathbb{R}^d$ of this single function $\phi$. It is for this reason that these spaces are sometimes referred to as shift invariant spaces. While the MRA is ordinarily defined only in terms of the scaling functions, the theory provides a rich set of tools to derive bases $\seq{\psi_{j,k}}_{k\in \mathbb{Z}}$, or wavelets,
for the complement spaces $W_j:=V_{j+1}- V_{j}$. Our interest in multiresolution analysis arises since these methods can be used to develop multiscale kernels for RKHSs, as summarized in \cite{opfer1,opfer2}. We only consider approximation spaces defined in terms of the scaling functions in this paper. Specifically, with a parameter $s \in \mathbb{R}^+$ measuring smoothness, we use $s$-regular MRAs to define admissible reproducing kernels that embody the online and adaptive estimation strategies in this paper.
When the MRA bases are smooth enough, the RKHS kernels derived from a MRA can be shown to be equivalent to a scale of Sobolev spaces having well documented approximation properties.
The B-spline bases in the numerical examples yield RKHS embeddings with good condition numbers. The details of the RKHS embedding strategy given in terms of wavelet bases associated with an MRA are treated in a forthcoming paper.
\subsubsection{Learning Theory and Nonlinear Regression}
The methodology defined in this paper for online adaptive estimation can be viewed as similar in philosophy to recent efforts that synthesize learning theory and approximation theory \cite{dkpt2006,kt2007,cdkp2001,t2008}. In these references, independent and identically distributed observations of some unknown function are collected, and they are used to define an estimator of that unknown function. Sharp estimates of error, guaranteed to hold in probability spaces, are possible using tools familiar from learning theory and thresholding in approximation spaces. The approximation spaces are usually defined in terms of subspaces of an MRA. However, there are a few key differences between these efforts in nonlinear regression and learning theory and this paper. The learning theory approaches to estimation of the unknown function depend on observations of the function itself. In contrast, the adaptive online estimation framework here assumes that observations are made of the estimator states, not directly of the unknown function itself. The learning theory methods also assume a discrete measurement process, instead of the continuous measurement process that characterizes online adaptive estimation. On the other hand, the methods based on learning theory derive sharp function space rates of convergence of the estimates of the unknown function. Such estimates are not available in conventional online adaptive estimation methods. Typically, convergence in adaptive estimation strategies is guaranteed in time in a fixed finite dimensional space. One of the significant contributions of this paper is to construct sharp convergence rates in function spaces, similar to approaches in learning theory, of the unknown function using online adaptive estimation.
\subsubsection{Online Adaptive Estimation and Control}
Since the approach in this paper generalizes a standard strategy in online adaptive estimation and control theory, we review this class of methods in some detail. This summary will be crucial in understanding the nuances of the proposed technique and in contrasting the sharp estimates of error available in the new strategy to those in the conventional approach.
Many popular textbooks study online or adaptive estimation within the context of adaptive control theory for systems governed by ordinary differential equations \cite{sb2012,IaSu,PoFar}. The theory has been extended in several directions, each with its subtle assumptions and associated analyses.
Adaptive estimation and control theory has been refined for decades, and significant progress has been made in deriving convergent estimation and stable control strategies that are robust with respect to some classes of uncertainty.
The efforts in \cite{bsdr1997,bdrr1998} are particularly relevant to this paper; there the authors generalize some of the adaptive estimation and model reference adaptive control (MRAC) strategies for ODEs so that they apply to deterministic infinite dimensional evolution systems. In addition, \cite{dmp1994,dp1988,dpg1991,p1992} also investigate adaptive control and estimation problems under various assumptions for classes of stochastic and infinite dimensional systems.
Recent developments in $\mathcal{L}^1$ control theory as presented in \cite{HC}, for example, utilize adaptive estimation and control strategies in obtaining stability and convergence for systems generated by collections of nonlinear ODEs.
To motivate this paper, we consider a model problem in which the plant dynamics are generated by the nonlinear ordinary differential equations
\begin{align}
\dot{x}(t)&= A x(t) + Bf(x(t)), \quad \quad x(0)=x_0
\label{eq:simple_plant}
\end{align}
with state $x(t)\in \mathbb{R}^d$, the known Hurwitz system matrix $ A \in \mathbb{R}^{d\times d}$, the known control influence matrix $B\in \mathbb{R}^d$, and the unknown function $f:\mathbb{R}^d \rightarrow \mathbb{R}$.
Although this model problem is an exceedingly simple prototypical example studied in adaptive estimation and control of ODEs \cite{sb2012,IaSu,PoFar}, it has proven to be an effective case study in motivating alternative formulations such as in \cite{HC} and will suffice to motivate the current approach.
Of course, much more general plants are treated in standard methods \cite{sb2012,IaSu,PoFar,naranna} and can be attacked using the strategy that follows. This structurally simple problem is chosen so as to clearly illustrate the essential constructions of RKHS embedding method while omitting the nuances associated with general plants. A typical adaptive estimation problem can often be formulated in terms of an estimator equation and a learning law. One of the simplest estimators for this model problem takes the form
\begin{align}
\dot{\hat{x}}(t)&= A \hat{x}(t) + B\hat{f}(t,x(t)),
\quad \quad
\hat{x}(0)=x_0
\label{eq:sim_estimator}
\end{align}
where $\hat{x}(t)$ is an estimate of the state $x(t)$ and $\hat{f}(t,x(t))$ is a time-varying estimate of the unknown function $f$ that depends on measurement of the state $x(t)$ of the plant at time $t$. When the state error $\tilde{x}:=x-\hat{x}$ and function estimate error $\tilde{f}:=f-\hat{f}$ are defined, the state error equation is simply
\begin{align}
\dot{\tilde{x}}(t)&= A \tilde{x}(t) + B\tilde{f}(t,x(t)), \quad \quad
\tilde{x}(0)=\tilde{x}_0.
\label{eq:sim_error}
\end{align}
The goal of adaptive or online estimation is to determine a learning law that governs the evolution of the function estimate $\hat{f}$ and guarantees that the state estimate $\hat{x}$ converges to the true state $x$,
$
\tilde{x}(t)= x(t)-\hat{x}(t) \to
0 \text{ as } t\to \infty
$.
Perhaps additionally, it is hoped that the function estimates $\hat{f}$ converge to the unknown function $f$,
$
\tilde{f}(t)= f(t) -\hat{f}(t) \to
0 \text{ as } t \to \infty.
$
The choice of the learning law for the update of the adaptive estimate $\hat{f}$ depends intrinsically on what specific information is available about the unknown function $f$.
It is most often the case for ODEs that the estimate $\hat{f}$ depends on a finite set of unknown parameters $\hat{\alpha}_1,\ldots,\hat{\alpha}_n$. The learning law is then expressed as an evolution law for the parameters $\hat{\alpha}_i$, $i=1,\ldots,n$. The discussion that follows emphasizes that this is a very specific underlying assumption regarding the information available about unknown function $f$. Much more general prior assumptions are possible.
\subsubsection{Classes of Uncertainty in Adaptive Estimation}
The adaptive estimation task seeks to construct a learning law based on the knowledge that is available regarding the function $f$.
Different methods for solving this problem have been developed depending on the type of information available about the unknown function $f$.
The uncertainty about $f$ is often described as forming a continuum between structured and unstructured uncertainty.
In the most general case, we might know that $f$ lies in some compact set $\mathcal{C}$ of a particular Hilbert space of functions $H$ over a subset $\Omega \subseteq \mathbb{R}^d$.
This case, that reflects in some sense the least information regarding the unknown function, can be expressed as the condition that
$
f \in \{g \in \mathcal{C} | \mathcal{C}\subset {H} \},
$
for some compact set of functions $\mathcal{C}$ in a Hilbert space of functions $H$.
In approximation theory, learning theory, or non-parametric estimation problems this information is sometimes referred to as the {\em prior}, and the choice of $H$ is commonly known as the hypothesis space. The selection of the hypothesis space $H$ and set $\mathcal{C}$ often reflect the approximation, smoothness, or compactness properties of the unknown function \cite{dkpt2006}.
This example may in some sense utilize only limited or minimal information regarding the unknown function $f$, and we may refer to the uncertainty as unstructured. Numerous variants of conventional adaptive estimation admit additional knowledge about the unknown function.
In most conventional cases the unknown function $f$ is assumed to be given in terms of some fixed set of parameters.
This situation is similar in philosophy to problems of parametric estimation which restrict approximants to classes of functions that admit representation in terms of a specific set of parameters.
Suppose the finite dimensional basis $\left \{ \phi_k\right \}_{k=1,\ldots, n}$ is known for a particular finite dimensional subspace $H_n \subseteq H$ in which the function lies, and further that the uncertainty is expressed as the condition that there is a unique set of unknown coefficients $\left \{\alpha_i^*\right\}_{i=1,\ldots,n} $ such that $f:=f^*=\sum_{i=1,\ldots,n} \alpha_i^* \phi_i \in H_n$. Consequently, conventional approaches may restrict the adaptive estimation technique to construct an estimate with knowledge that $f$ lies in the set
\begin{align}
\label{eq:e2}
f \in \biggl \{ g \in H_n \subseteq H \biggl |
&g = \sum_{i=1,\ldots,n} \alpha_i \phi_i
\text{ with } \\
\notag &\alpha_i \in [a_i,b_i]
\subset \mathbb{R} \text{ for } i=1,\ldots,n
\biggr \}
\end{align}
\noindent This is an example where the uncertainty in the estimation problem may be said to be structured. The unknown function is parameterized by the collection of coefficients $\{\alpha_i^*\}_{i=1,\ldots,n}$.
In this case the compact set $\mathcal{C}$ is a subset of $H_n$. As we discuss in Sections~\ref{subsec:Lit}, \ref{sec:RKHS}, and~\ref{sec:existence}, the RKHS embedding approach is characterized by the fact that the uncertainty is more general and even unstructured, in contrast to conventional methods.
\subsubsection{Adaptive Estimation in $\mathbb{R}^d \times \mathbb{R}^n$}
\label{subsec:adapt1}
The development of adaptive estimation strategies when the uncertainty takes the form in Equation~\ref{eq:e2} represents, in some sense, an iconic approach in the adaptive estimation and control community.
Entire volumes \cite{sb2012,IaSu,PoFar,NarPar199D} contain numerous variants of strategies that can be applied to solve adaptive estimation problems in which the uncertainty takes the form in Equation~\ref{eq:e2}.
One canonical approach to such an adaptive estimation problem is governed by three coupled equations: the plant dynamics in Equation~\ref{eq:f}, the estimator in Equation~\ref{eq:a2}, and the learning rule in Equation~\ref{eq:a3}.
We organize the basis functions as $\phi:=[\phi_1,\dots,\phi_n]^T$ and the parameters as $\alpha^{*^T}=[\alpha^*_1,\ldots,\alpha^*_n]$,
$\hat{\alpha}^T=[\hat{\alpha}_1,\ldots,\hat{\alpha}_n]$. A common gradient based learning law yields the governing equations that incorporate the plant dynamics, estimator equation, and the learning rule.
\begin{align}
\label{eq:f}
\dot{x}(t) &= Ax(t) + B \alpha^{*^T} \phi(x(t)),\\
\label{eq:a2}
\dot{\hat{x}}(t) &
=A \hat{x}(t) + B \hat{\alpha}^T(t) \phi(x(t)), \\
\label{eq:a3}
\dot{\hat{\alpha}}(t) &= \Gamma^{-1}\phi B^T P(x-\hat{x}),
\end{align}
where $\Gamma\in \mathbb{R}^{n\times n}$ is symmetric and positive definite. The symmetric positive definite matrix $P\in\mathbb{R}^{d\times d}$ is the unique solution of Lyapunov's equation $A^T P + PA = -Q$, for some selected symmetric positive definite $Q \in \mathbb{R}^{d\times d}$.
\noindent Usually the above equations are summarized in terms of the two error equations
\begin{align}
\label{eq:a4}
\dot{\tilde{x}}(t) &= A \tilde{x} + B \phi^{T}(x(t))\tilde{\alpha}(t)\\
\label{eq:a5}
\dot{\tilde{\alpha}}(t) &= -\Gamma^{-1} \phi(x(t)) B^T P\tilde{x}.
\end{align}
with $\tilde{\alpha}:=\alpha^*-\hat{\alpha}$ and $\tilde{x}:=x-\hat{x}$.
Equations~\ref{eq:a4} and \ref{eq:a5} can also be written as
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{\alpha}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B \phi^T (x(t))\\
-\Gamma^{-1} \phi(x(t)) B ^T P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{\alpha}(t)
\end{Bmatrix}.
\label{eq:error_conv}
\end{align}
This equation defines an evolution on $\mathbb{R}^d \times \mathbb{R}^n$
and has been studied in great detail in ~\cite{naranna,narkud,mornar}.
Standard texts such as ~\cite{sb2012,IaSu,PoFar,NarPar199D} outline numerous other variants for the online adaptive estimation problem using projection, least squares methods and other popular approaches.
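As an illustration, a minimal numerical sketch of Equations~\ref{eq:f}--\ref{eq:a3} in the scalar case $d=1$ is given below (Python); the basis functions, the ``true'' parameters $\alpha^*$, and the gains are illustrative choices of ours and are not taken from any particular reference.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

A, B, Q = -1.0, 1.0, 1.0
P = Q/(2.0*(-A))                           # solves A*P + P*A = -Q for d = 1
alpha_true = np.array([0.5, -0.3, 0.2])    # "unknown" parameters alpha^*
Gamma_inv = np.eye(3)                      # learning gain Gamma = I

def phi(x):                                # chosen basis functions
    return np.array([np.sin(x), np.cos(x), x])

def rhs(t, z):
    x, xhat, ahat = z[0], z[1], z[2:]
    dx    = A*x    + B*(alpha_true @ phi(x))        # plant, Eq. (eq:f)
    dxhat = A*xhat + B*(ahat @ phi(x))              # estimator, Eq. (eq:a2)
    dahat = (Gamma_inv @ phi(x))*B*P*(x - xhat)     # learning rule, Eq. (eq:a3)
    return np.concatenate(([dx, dxhat], dahat))

z0 = np.concatenate(([1.0, 0.0], np.zeros(3)))
sol = solve_ivp(rhs, (0.0, 50.0), z0, max_step=0.01)
print("final state error:", sol.y[0, -1] - sol.y[1, -1])
\end{verbatim}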
\subsection{Overview of Our Results}
\label{subsec:Lit}
\subsubsection{Adaptive Estimation in $\mathbb{R}^d \times H$}
\label{subsec:adapt2}
In this paper, we study the method of RKHS embedding that interprets the unknown function $f$ as an element of the RKHS $H$, without any {\em a priori} selection of the particular finite dimensional subspace used for estimation of the unknown function. The counterparts to Equations~\ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3} are the plant, estimator, and learning laws
\begin{align}
\dot{x}(t) &= Ax(t) + BE_{x(t)}f,\\
\dot{\hat{x}}(t) &= A\hat{x}(t) + BE_{x(t)}\hat{f}(t), \label{eq:rkhs_plant}\\
\dot{\hat{f}}(t) &= \Gamma^{-1}(BE_{x(t)})^*P(x(t) - \hat{x}(t)),
\end{align}
where as before $x,\hat{x}\in \mathbb{R}^d$, but $f$ and $\hat{f}(t)\in H$, $E_{\xi}: H \to \mathbb{R}$ is the evaluation functional given by $E_{\xi}: f \mapsto f(\xi)$ for all $\xi\in \mathbb{R}^d$ and $f \in H$, and $\Gamma\in \mathcal{L}(H,H)$ is a self adjoint, positive definite linear operator. The error equation analogous to Equation~\ref{eq:error_conv} is then given by
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{f}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B E_{x(t)}\\
-\Gamma^{-1}(B E_{x(t)})^*P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\label{eq:eom_rkhs}
\end{align}
which defines an evolution on $\mathbb{R}^d \times H$, instead of on $\mathbb{R}^d \times \mathbb{R}^n$.
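For concreteness, we note that the learning law above can be realized in coordinates once $\hat{f}$ is restricted to the span of kernel functions located at a fixed set of centers: since $(BE_{x})^*P\tilde{x}=(B^TP\tilde{x})\,k_{x}\in H$, projecting $k_{x}$ onto that span gives an ordinary differential equation for the coefficients. The Python sketch below (with $\Gamma=I$, a Gaussian kernel, and centers of our own choosing) only illustrates this idea; the finite dimensional approximations actually analyzed in this paper are those of Equations \ref{eq:approx_on_est1} and \ref{eq:approx_on_est2}.
\begin{verbatim}
import numpy as np

def k(u, v, sigma=0.5):                    # illustrative Gaussian kernel
    return np.exp(-(u - v)**2/sigma**2)

centers = np.linspace(-1.0, 1.0, 9)        # fixed centers c_j
K = k(centers[:, None], centers[None, :])  # Gram matrix K_ij = k(c_i, c_j)

def fhat(alpha, x):
    # estimate fhat = sum_j alpha_j k_{c_j} evaluated at x
    return alpha @ k(centers, x)

def learning_step(alpha, x, xtilde, B, P, dt):
    # one Euler step of dfhat/dt = (B E_x)^* P xtilde with Gamma = I:
    # k_x is replaced by its coordinates K^{-1} k(c, x) in span{k_{c_j}}
    beta = np.linalg.solve(K, k(centers, x))
    return alpha + dt*(B*P*xtilde)*beta
\end{verbatim}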
\subsubsection{Existence, Stability, and Convergence Rates}
We briefly summarize and compare the conclusions that can be reached for the conventional and RKHS embedding approaches. Let $(\hat{x}, \hat{f})$ be estimates of $(x,f)$ that evolve according to the state, estimator, and learning law of RKHS embedding. Define the state and distributed parameter error as $\tilde{x}:=x-\hat{x}$ and $\tilde{f}:=f-\hat{f}$, respectively. Under the assumptions outlined in Theorems \ref{th:unique}, \ref{th:stability}, and \ref{th:PE}, for each $T>0$ there is a unique mild solution for the error $(\tilde{x},\tilde{f})\in C([0,T];\mathbb{R}^d\times H)$ to the DPS described by Equation \ref{eq:eom_rkhs}. Moreover, the error in state estimates $\tilde{x}(t)$ converges to zero,
$\lim_{t \rightarrow \infty} \| \tilde{x}(t)\|=0$. If all the evolutions with initial conditions in an open ball containing the origin exist in $C([0,\infty);\mathbb{R}^d\times H)$, the equilibrium at the origin $(\tilde{x},\tilde{f})=(0,0)$ is stable. The results so far are therefore entirely analogous to those of conventional estimation methods, but are cast in the infinite dimensional RKHS $H$. See the standard texts~\cite{sb2012,IaSu,PoFar,NarPar199D} for proofs of existence and convergence of the conventional methods. It must be emphasized again that the conventional results are stated for evolutions in $\mathbb{R}^d\times\mathbb{R}^n$, and the RKHS results hold for evolutions in $\mathbb{R}^d\times H$. Considerably more can be said about the convergence of finite dimensional approximations. For the RKHS embedding approach, state and finite dimensional approximations $(\hat{x}_j,\hat{f}_j)$ of the infinite dimensional estimates $(\hat{x},\hat{f})$ on a grid that has resolution level $j$ are governed by Equations \ref{eq:approx_on_est1} and \ref{eq:approx_on_est2}. The finite dimensional estimates $(\hat{x}_j,\hat{f}_j)$ converge to the infinite dimensional estimates $(\hat{x},\hat{f})$ at a rate that depends on $\|I-\Gamma\Pi_j^*\Gamma_j^{-1} \Pi_j\|$ and $\|I - \Pi_j\|$, where $\Pi_j : H \to H_j$ is the $H$-orthogonal projection.
The remainder of this paper studies the existence and uniqueness of solutions, stability, and convergence of approximate solutions for infinite dimensional, online or adaptive estimation problems. The analysis is based on a study of distributed parameter systems (DPS) that contains the RKHS $H$. The paper concludes with an example of an RKHS adaptive estimation problem for a simple model of map building from vehicles. The numerical example demonstrates the rate of convergence for finite dimensional models constructed from radial basis function (RBF) bases that are centered at a subset of scattered observations.
The discussion focuses on a comparison and contrast of the analysis for the ODE system and the distributed parameter system.
Prior to these discussions, however, we present a brief review of the fundamental properties of reproducing kernel Hilbert spaces in the next section.
\section{Reproducing Kernel Hilbert Space}
\label{sec:RKHS}
Estimation techniques for distributed parameter systems have been previously studied in \cite{bk1989}, and further developed to incorporate adaptive estimation of parameters in certain infinite dimensional systems by \cite{bsdr1997} and the references therein. These works also presented the necessary conditions required to achieve parameter convergence during online estimation. Both approaches, however, rely on a delicate analysis of semigroups and evolution operators, or on Gelfand triples. The approach herein is much simpler and amenable to a wide class of applications; it appears to be a simpler, practical approach to generalize conventional methods. This paper considers estimation problems that are cast in terms of the unknown function $f:\Omega \subseteq \mathbb{R}^d \to \mathbb{R}$, and our approximations will assume that this function is an element of a reproducing kernel Hilbert space. One way to define a reproducing kernel Hilbert space relies on demonstrating the boundedness of evaluation functionals, but we briefly summarize a constructive approach that is helpful in applications and in understanding computations such as those in our numerical examples.
In this paper $\mathbb{R}$ denotes the real numbers, $\mathbb{N}$ the positive integers, $\mathbb{N}_0$ the non-negative integers, and $\mathbb{Z}$ the integers. We follow the convention that $a \gtrsim b$ means that there is a constant $c$, independent of $a$ or $b$, such that $b \leq ca$. When $a\gtrsim b $ and $b\gtrsim a$, we write $a \approx b $. Several function spaces are used in this paper. The $p$-integrable Lebesgue spaces are denoted $L^p(\Omega)$ for $1\leq p \leq \infty$, and $C^s (\Omega)$ is the space of continuous functions on $\Omega$ all of whose derivatives less than or equal to $s$ are continuous. The space $C_b^s (\Omega)$ is the normed vector subspace of $C^s (\Omega)$ and consists of all $f\in C^s (\Omega)$ whose derivatives of order less than or equal to $s$ are bounded. The space $C^{s,\lambda} (\Omega)\subseteq C_b^s (\Omega) \subseteq C^s (\Omega)$ is the collection of functions with derivatives $\frac{\partial^{|\alpha|}f}{\partial x^{|\alpha|}}$ that are $\lambda$-Holder continuous,
\begin{align*}
|f(x)-f(y)| \leq C\|x - y\|^{\lambda}\,,\quad x, y \in \Omega.
\end{align*}
The Sobolev space of functions that have weak derivatives of the order less than equal to $r$ that lie in $L^p(\Omega)$ is denoted $H^r_p(\Omega)$.
A reproducing kernel Hilbert space is constructed in terms of a symmetric, continuous, and positive definite function $k:\Omega \times \Omega \to \mathbb{R}$, where positive definiteness requires that for any finite collection of points
$\{x_i\}_{i=1}^n \subseteq \Omega $
$$\sum_{i,j=1}^{n}k(x_i , x_j ) \alpha_i \alpha_j \gtrsim \|\alpha\|^{2}_{\mathbb{R}^n}
$$
for all $\alpha = \{\alpha_1,\hdots, \alpha_n \}^T \in \mathbb{R}^n$. For each $x\in \Omega$, we denote the function $k_x := k_x (\cdot) = k(x,\cdot)$ and refer to $k_x$ as the kernel function centered at $x$. In many typical examples~\cite{wendland}, $k_x$ can be interpreted literally as a radial basis function centered at $x\in \Omega$. For any kernel functions $k_x$ and $k_y$ centered at $x,y \in \Omega$, we define the inner product $(k_x,k_y):= k(x,y)$.
The RKHS $H$ is then defined as the completion of the finite linear span of the set $\{k_x \mid x \in \Omega\}$.
It is well known that this construction guarantees the boundedness of the evaluation functionals $E_x : H \to \mathbb{R}$. In other words for each $x\in \Omega$ we have a constant $c_x$ such that
$$ |E_x f | = |f(x)| \leq c_x \|f\|_H$$
for all $f\in H$. The reproducing property of the RKHS $H$ plays a crucial role in the analysis here, and it states that,
$$E_xf = f(x) = (k_x , f)_H$$
for $x \in \Omega$ and $f\in H$. We will also require the adjoint $E_x^* :\mathbb{R}\to H $ in this paper, which can be calculated directly by noting that
$$ (E_x f,\alpha )_\mathbb{R} = (f,\alpha k_x)_H = (f,E_x^* \alpha)_H $$
for $\alpha \in \mathbb{R}$ , $x\in \Omega$ and $f\in H$. Hence, $E_x^* : \alpha \mapsto \alpha k_x \in H$.
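These definitions are easy to verify numerically. The short Python sketch below (the kernel width, centers, and coefficients are arbitrary illustrative values) builds the Gram matrix for a Gaussian kernel, evaluates $f=\sum_i\alpha_i k_{x_i}$ at a point, and checks the bound $|E_xf|\le\|k_x\|_H\|f\|_H$ implied by the reproducing property.
\begin{verbatim}
import numpy as np

sigma = 0.7
k = lambda x, y: np.exp(-(x - y)**2/sigma**2)    # Gaussian kernel

centers = np.array([-0.8, -0.1, 0.4, 1.2])
alpha   = np.array([ 0.5, -1.0, 0.7, 0.2])       # f = sum_i alpha_i k_{x_i}

K = k(centers[:, None], centers[None, :])        # Gram matrix
print("positive definite:", np.all(np.linalg.eigvalsh(K) > 0))

x = 0.3
f_x     = alpha @ k(centers, x)                  # E_x f = (k_x, f)_H
norm_f  = np.sqrt(alpha @ K @ alpha)             # ||f||_H
norm_kx = np.sqrt(k(x, x))                       # ||k_x||_H
print("|f(x)| <= ||k_x|| ||f||_H :", abs(f_x) <= norm_kx*norm_f)
\end{verbatim}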
Finally, we will be interested in the specific case in which it is possible to show that the RKHS $H$ is a subset of $C(\Omega)$, and furthermore, that the associated injection $i:H \rightarrow C(\Omega)$ is uniformly bounded.
This uniform embedding is possible, for example, provided that the kernel is bounded by a constant $\tilde{C}^2$,
$
\sup_{x\in \Omega} k(x,x) \leq \tilde{C}^2.
$
This fact follows by first noting that by the reproducing kernel property of the RKHS,
we can write
\begin{equation}
|f(x)|=|E_x f |= |(k_x, f)_H | \leq \|k_x \|_H \|f\|_H.
\end{equation}
From the definition of the inner product on $H$, we have
$
\|k_x \|^2=|(k_x, k_x)_H |=|k(x,x)| \leq \tilde{C}^2.
$
It follows that $\|if\|_{C(\Omega)}:= \|f\|_{C(\Omega)} \leq {\tilde{C}} \|f\|_H$ and thereby that $\|i\|\leq {\tilde{C}}$. We next give two examples that will be studied in this paper.
\subsection*{Example: The Exponential Kernel}
A popular example of an RKHS, one that will be used in the numerical examples, is constructed from the family of exponentials $\kappa(x,y):=e^{-\| x-y\|^2/\sigma^2}$ where $\sigma>0$.
Suppose that $\tilde{C} = \sqrt{\sup_{x\in\Omega}\kappa(x,x)}<\infty$. Smale and Zhou in \cite{sz2007} argue that
$$
|f(x)|=|E_x(f)|=|(\kappa_x,f)_H|\leq
\|\kappa_x\|_H \|f\|_H
$$
for all $x\in \Omega$ and $f\in H$, and since
$\|\kappa_x\|^2=|\kappa(x,x)|\leq \tilde{C}^2$, it follows that the embedding $i:H \rightarrow L^\infty(\Omega)$ is bounded,
$$
\|f\|_{L^\infty(\Omega)}:=\|i(f)\|_{L^\infty(\Omega)}\leq \tilde{C} \|f\|_H.
$$
For the exponential kernel above, $\tilde{C}=1$.
Let $C^s(\Omega)$ denote the space of functions on $\Omega$ all of whose partial derivatives of order less than or equal to $s$ are continuous. The space $C^s_b(\Omega)$ is endowed with the norm
$$
\|f\|_{C^s_b(\Omega)}:= \max_{|\alpha|\leq s}
\left \|
\frac{\partial^{|\alpha|}f}{\partial x^\alpha}
\right \|_{L^\infty(\Omega)},
$$
with the maximum taken over multi-indices $\alpha:=\left \{ \alpha_1, \ldots,\alpha_d \right \}\in \mathbb{N}_0^d$, $\partial x^{\alpha}:=\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}$, and $|\alpha|=\sum_{i=1,\ldots,d} \alpha_i$.
Observe that the continuous functions in $C^s(\Omega)$ need not be bounded even if $\Omega$ is a bounded open domain. The space $C^s_b(\Omega)$ is the subspace consisting of functions $f\in C^s(\Omega)$ for which all derivatives of order less than or equal to $s$ are bounded.
The space $C^{s,\lambda}(\Omega)$ is the subspace of functions $f$ in $C^{s}(\Omega)$
for which all of the partial derivatives $\frac{\partial f^{|\alpha|}}{\partial x^\alpha}$ with $|\alpha|\le s$ are
$\lambda$-Holder continuous. The norm of $C^{s,\lambda}(\Omega)$ for $0 < \lambda \leq 1$ is given by
$$
\|f\|_{C^{s,\lambda}(\Omega)} = \|f\|_{C^s(\Omega)}+ \max_{|\alpha| \leq s} \sup_{\substack{x,y\in \Omega \\x\ne y}}\frac{\left| \frac{\partial^{|\alpha|} f}{\partial x^{\alpha}}(x) -\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}(y) \right|}{|x-y|^\lambda}.
$$
Also, reference \cite{sz2007} notes that if $\kappa(\cdot,\cdot)\in C^{2s,\lambda}_b(\Omega \times \Omega)$ with $0<\lambda<2$ and $\Omega$ is a closed domain, then the inclusion $H\rightarrow C^{s,\lambda/2}_b(\Omega)$ is well defined and continuous. That is, the mapping $i:H \rightarrow C^{s,\lambda/2}_b(\Omega)$ defined via $f\mapsto i(f):=f$ satisfies
$$
\| f\|_{C^{s,\lambda/2}_b(\Omega)}\lesssim \|f\|_H.
$$
In fact reference \cite{sz2007} shows that
$$
\|f \|_{C^s_b(\Omega)} \leq 4^s \|\kappa\|_{{C^{2s}_b}(\Omega\times \Omega)}^{1/2} \|f\|_H.
$$
The important overall conclusion to draw from the summary above is that there are many conditions that guarantee that the embedding $H\hookrightarrow C_b(\Omega)$ is continuous. This condition will play a central role in devising simple conditions for the existence of solutions of the RKHS embedding technique.
\subsection{Multiscale Kernels Induced by $s$-Regular Scaling Functions}
\label{sec:MRA}
The characterization of the norm of the Sobolev space $H^{r}_2:=H^{r}_2(\mathbb{R}^d)$ has appeared in many monographs that discuss multiresolution analysis \cite{Meyer,mallat,devore1998}. It is also possible to define the Sobolev space $H^{r}_2(\mathbb{R}^d)$ as the Hilbert space constructed from a reproducing kernel $\kappa(\cdot,\cdot):\mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ that is defined in terms of an $s$-regular scaling function $\phi$ of a multiresolution analysis (MRA) \cite{Meyer,devore1998}. Given an $s$-regular scaling function $\phi$ and a smoothness parameter $r$ with $\frac{d}{2}<r<s$, we define the kernel
\begin{align*}
\kappa(u,v):&=\sum_{j=0}^\infty 2^{j(d-2r)}\sum_{k\in \mathbb{Z}^d}\phi(2^ju-k)\phi(2^jv-k)\\
&=\sum_{j=0}^\infty 2^{-2rj}\sum_{k\in \mathbb{Z}^d}\phi_{j,k}(u)\phi_{j,k}(v)
.\end{align*}
It should be noted that the requirement $d/2<r$ implies that the coefficient $2^{j(d-2r)}$ above is decreasing as $j\rightarrow \infty$, which ensures that the summation converges. As discussed in Section \ref{sec:RKHS} and in references \cite{opfer1,opfer2}, the RKHS is constructed as the closure of the finite linear span of the set of functions $\left\{\kappa_u\right\}_{u\in \Omega}$ with $\kappa_u(\cdot):=\kappa(u,\cdot)$. Under the assumption that $\frac{d}{2}<r<s$, the Sobolev space $H^r_2(\mathbb{R}^d)$
can also be related to the Hilbert space $H_\kappa^r(\mathbb{R}^d)$
defined as
\begin{align*}
H_{\kappa}^r(\mathbb{R}^d):=\left\{ f:\mathbb{R}^d\rightarrow\mathbb{R} \mid (f,f)_{\kappa,r}^\frac{1}{2}=\|f\|_{\kappa,r}<\infty\right\}
\end{align*}
with the inner product $(\cdot,\cdot)_{\kappa,r}$ on $H_{\kappa}^r(\mathbb{R}^d)$ defined as
\begin{align*}
(f,f)_{\kappa,r}&:=\|f\|_{\kappa,r}^2:=
\inf \biggl\{ \sum_{j=0}^\infty 2^{j(2r-d)}\|f_j\|_{V_j}^2\biggl|
f_j\in V_j, f=\sum_{j=0}^\infty f_j\biggr\}
\end{align*}
with $\|f\|^2_{V_j}=\sum_{k \in \mathbb{Z}^d} c_{j,k}^2 $ for $f_j(u)=\sum_{k \in \mathbb{Z}^d}c_{j,k}\phi(2^ju-k)$ and $j\in \mathbb{N}_0$. Note that the characterization above of $H_{\kappa}^r(\mathbb{R}^d)$ is expressed only in terms of the scaling functions $\phi_{j,k}$ for $j\in \mathbb{N}_0$ and $k\in \mathbb{Z}^d$. The scaling function $\phi$ and the associated wavelet $\psi$ need not define an orthonormal multiresolution in this characterization, and the bases $\psi_{j,k}$ for the complement spaces $W_j$ are not used. We discuss the use of wavelet bases $\psi_{j,k}$ for the definition of the kernel in a forthcoming paper. References \cite{opfer1,opfer2} show that when $d/2< r<s$, we have the norm equivalence
\begin{align}
H_{\kappa}^r(\mathbb{R}^d)\approx H^{r}_2(\mathbb{R}^d).
\label{eq:norm_equiv}
\end{align}
Finally, from Sobolev's Embedding Theorem \cite{af2003}, whenever $r>d/2$ we have the embedding
$$
H^r_2 \hookrightarrow C_b^{r-d/2} \subset C^{r-d/2}
$$
where $C_b^r$ is the subspace of functions $f$ in $C^r$ all of whose derivatives up through order $r$ are bounded. In fact, by choosing the $s$-regular MRA with $s$ and $r$ large enough, we have the imbedding
$H^r_2(\Omega) \hookrightarrow C(\Omega)$ when $\Omega \subseteq \mathbb{R}^d$ \cite{af2003}.
One of the simplest examples that meets the conditions of this section is given by the normalized B-splines of order $r>0$. We denote by $N^r$ the normalized B-spline of order $r$ with integer knots and define its translated dilates by $N^r_{j,k}(x):=2^{jd/2}N^r(2^{j} x - k)$ for $k\in \mathbb{Z}^d$ and $j\in \mathbb{N}_0$. In this case the kernel is written in the form
$$
\kappa(u,v):=\sum_{j=0}^\infty 2^{-2rj}\sum_{k\in \mathbb{Z}^d}N^r_{j,k}(u)N^r_{j,k}(v).
$$
Figure \ref{fig:nbsplines} depicts the translated dilates of the normalized B-splines of order $1$ and $2$ respectively.
\begin{center}
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{nbsplines_N1}
&
\includegraphics[width=.4\textwidth]{nbsplines_N2}\\
{ B-splines $N^1$}
&
{ B-splines $N^2$}
\end{tabular}
\caption{Translated Dilates of Normalized B-Splines}
\label{fig:nbsplines}
\end{figure}
\end{center}
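A truncated version of this multiscale kernel is straightforward to evaluate numerically. The sketch below computes a finite approximation of $\kappa(u,v)=\sum_{j\geq 0} 2^{-2rj}\sum_{k}N^r_{j,k}(u)N^r_{j,k}(v)$ in dimension $d=1$; the use of the linear hat function in place of $N^r$, the number of levels, and the range of shifts are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

# Truncated evaluation of kappa(u, v) = sum_j 2^{-2 r j} sum_k N_{j,k}(u) N_{j,k}(v)
# with N_{j,k}(x) = 2^{j d / 2} N(2^j x - k) and d = 1.  The linear "hat"
# function stands in for the B-spline scaling function (an assumption).
def hat(x):
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))   # linear B-spline supported on [0, 2]

def multiscale_kernel(u, v, r=1.0, d=1, J=8):
    total = 0.0
    for j in range(J):
        amp2 = 2.0 ** (j * d)                       # (2^{j d / 2})^2
        for k in range(-2, 2 ** j + 1):             # shifts whose support meets [0, 1]
            total += (2.0 ** (-2.0 * r * j) * amp2
                      * hat(2.0 ** j * u - k) * hat(2.0 ** j * v - k))
    return total

us = np.linspace(0.0, 1.0, 5)
G = np.array([[multiscale_kernel(u, v) for v in us] for u in us])
print(np.round(G, 3))                               # small symmetric kernel matrix
\end{verbatim}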
\section{Existence, Uniqueness, and Stability}
\label{sec:existence}
In the adaptive estimation problem that is cast in terms of a RKHS $H$, we seek a solution $X = (\tilde{x},\tilde{f}) \in \mathbb{R}^d \times H \equiv \mathbb{X}$ that satisfies Equation \ref{eq:eom_rkhs}.
In general $\mathbb{X}$ is an infinite dimensional state space for this estimation problem, which can in principle substantially complicate the analysis in comparison to conventional ODE methods.
We first establish that the adaptive estimation problem in Equation \ref{eq:eom_rkhs} is well-posed.
The result derived below is not the most general possible, but it is emphasized here because its conditions are simple and easily verifiable in many applications.
\begin{theorem}
\label{th:unique}
Suppose that $x \in C([0,T];\mathbb{R}^d)$ and that the embedding $i:H \hookrightarrow C(\Omega)$ is uniform in the sense that there is a constant $C>0$ such that for any $f \in H$,
\begin{equation}
\label{6}
\|f\|_{C(\Omega)}\equiv \|if\|_{C(\Omega)} \leq C\|f\|_H.
\end{equation}
For any $T>0$ there is a unique mild solution $(\tilde{x},\tilde{f}) \in C([0,T],\mathbb{X})$ to Equation \ref{eq:eom_rkhs} and the map $X_0 \equiv (\tilde{x}_0,\tilde{f}_0) \mapsto (\tilde{x},\tilde{f}) $ is Lipschitz continuous from $\mathbb{X}$ to $C([0,T],\mathbb{X})$.
\end{theorem}
\begin{proof}
We can split the governing Equation \ref{eq:eom_rkhs} into the form
\begin{align}
\begin{split}
\begin{Bmatrix}
\dot{\tilde{x}}(t)\\
\dot{{\tilde{f}}}(t)
\end{Bmatrix}
=
&\begin{bmatrix}
A & 0\\
0 & A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix}+
\begin{bmatrix}
0 & B E_{x(t)}\\
-\Gamma^{-1} (B E_{x(t)})^* P & -A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\end{split}
\end{align}
and write it more concisely as
\begin{equation}
\dot{\tilde{X}} = \mathbb{A}\tilde{X}(t) + \mathbb{F}(t,\tilde{X}(t))
\end{equation}
where the operator $A_0 \in \mathcal{L}(H,H)$ is arbitrary. It is immediately clear that $\mathbb{A}$ is the infinitesimal generator of a $C_0$ semigroup on $\mathbb{X}\equiv \mathbb{R}^d\times H$ since $\mathbb{A}$ is bounded on $\mathbb{X}$. In addition, we see the following:
\begin{enumerate}
\item The function $\mathbb{F}: \mathbb{R}^+ \times \mathbb{X} \to \mathbb{X}$ is uniformly globally Lipschitz continuous: there is a constant $L>0$ such that
$$
\|\mathbb{F}(t,X)-\mathbb{F}(t,Y)\| \leq L\|X-Y\|
$$
for all $ X,Y \in \mathbb{X}$ and $t\in [0,T]$.
\item The map $t \mapsto \mathbb{F}(t,X)$ is continuous on $[0,T]$ for each fixed $X\in \mathbb{X}$.
\end{enumerate}
By Theorem 1.2, p.184, in reference \cite{pazy}, there is a unique mild solution
$$\tilde{X} = \{\tilde{x},\tilde{f}\}^T \in C([0,T];\mathbb{X})\equiv C([0,T];\mathbb{R}^d\times H). $$
In fact, the map $\tilde{X}_0 \mapsto \tilde{X}$ is Lipschitz continuous from $\mathbb{X}\to C([0,T];\mathbb{X})$.
\end{proof}
The proof of stability of the equilibrium at the origin of the RKHS
Equation \ref{eq:eom_rkhs} closely resembles the Lyapunov analysis of Equation \ref{eq:error_conv}, except that the argument must be extended to the infinite dimensional state space $\mathbb{X}$.
It is useful to carry out this analysis in some detail to see how the adjoint $E_x^* :\mathbb{R}\to H $ of the evaluation functional $E_x : H \to \mathbb{R}$ plays a central and indispensable role in the study of the stability of evolution equations on the RKHS.
\begin{theorem}
\label{th:stability}
Suppose that the RKHS Equations \ref{eq:eom_rkhs} have a unique solution in $C([0,\infty);\mathbb{X})$ for every initial condition $X_0$ in some open ball $B_r (0) \subseteq \mathbb{X}$. Then the equilibrium at the origin is Lyapunov stable. Moreover, the state error $\tilde{x}(t) \rightarrow 0$ as $t \rightarrow \infty$.
\end{theorem}
\begin{proof}
Define the Lyapunov function $V:\mathbb{X} \to \mathbb{R}$ as
$$ V \begin{Bmatrix}
\tilde{x}\\
\tilde{f}
\end{Bmatrix}
= \frac{1}{2}\tilde{x}^T P\tilde{x} + \frac{1}{2}(\Gamma \tilde{f},\tilde{f})_H.
$$
This function is norm continuous and positive definite on any neighborhood of the origin since $ V(X) \gtrsim \|X\|^2_{\mathbb{X}}$ for all $X \in \mathbb{X}$. For any $X$, and in particular over the open set $B_r(0)$, the derivative of the Lyapunov function $V$ along trajectories of the system is given as
\begin{align*}
\dot{V} &= \frac{1}{2}(\dot{\tilde{x}}^T P\tilde{x}+\tilde{x}^TP\dot{\tilde{x}})+(\Gamma \tilde{f},\dot{\tilde{f}})_H\\
&= -\frac{1}{2}\tilde{x}^T Q\tilde{x}+(\tilde{f},E_x^*B^*P\tilde{x}+\Gamma\dot{\tilde{f}})_{H}= -\frac{1}{2}\tilde{x}^T Q\tilde{x},
\end{align*}
since $(\tilde{f},E_x^*B^*P\tilde{x}+\Gamma\dot{\tilde{f}})_{H}=0$.
Let $\epsilon$ be some constant such that $0 < \epsilon < r$. Define $\gamma (\epsilon)$ and $\Omega_\gamma$ according to
$$\gamma(\epsilon) = \inf_{\|X\|_\mathbb{X}=\epsilon} V(X),$$
$$\Omega_\gamma = \{X \in \mathbb{X}|V(X)<\gamma \}.$$
We can picture these quantities as shown in Fig. \ref{fig:lyapfun} and Fig. \ref{fig:kernels}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{fig1Lyap_2}
\caption{Lyapunov function, $V(x)$}
\label{fig:lyapfun}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.55]{fig2Stability_2}
\caption{Stability of the equilibrium}
\label{fig:kernels}
\end{figure}
But $\Omega_\gamma=\{X\in \mathbb{X}|V(X)<\gamma\}$ is an open set since it is the inverse image of the open set $(-\infty,\gamma) \subset \mathbb{R}$ under the continuous mapping $V:\mathbb{X} \to \mathbb{R}$. The set $\Omega_\gamma$ therefore contains an open neighborhood of each of its elements. Let $\delta>0$ be the radius of such an open ball containing the origin with $B_\delta(0) \subset \Omega_\gamma$.
Since $\overline{\Omega}_\gamma:=\{X\in \mathbb{X}|V(X)\leq \gamma\}$ is a sublevel set of $V$ and $V$ is non-increasing along trajectories, it is a positive invariant set. Given any initial condition $X_0 \in B_\delta(0) \subseteq \Omega_\gamma$, we know that the trajectory $X(t)$ starting at $X_0$ satisfies
$X(t) \in \overline{\Omega}_\gamma \subseteq \overline{B_\epsilon(0)} \subseteq B_r(0)$ for all $t\in [0,\infty)$.
The equilibrium at the origin is stable.
The convergence of the state estimation error $\tilde{x}(t) \rightarrow 0$ as $t\rightarrow \infty$ can be based on Barbalat's lemma by modifying the conventional arguments for ODE systems. Since $\frac{d}{dt}(V(X(t))) = - \frac{1}{2} \tilde{x}^T(t) Q \tilde{x}\leq 0$, $V(X(t))$ is non-increasing and bounded below by zero. There is a constant $V_\infty:=\lim_{t \rightarrow \infty}V(X(t))$, and we have
$$
V(X_0)-V_\infty = \frac{1}{2}\int_0^\infty \tilde{x}^T(\tau)Q\tilde{x}(\tau)\, d\tau \gtrsim \|\tilde{x}\|^2_{L^2((0,\infty);\mathbb{R}^d)}.
$$
Since $V(X(t)) \leq V(X_0)$, we likewise have $\|\tilde{x}\|^2_{L^\infty((0,\infty);\mathbb{R}^d)}\lesssim V(X_0)$ and $\|\tilde{f}\|^2_{L^\infty((0,\infty);H)}\lesssim V(X_0)$. The equation of motion enables a uniform bound on $\dot{\tilde{x}}$ since
\begin{align}
&\|\dot{\tilde{x}}(t)\|_{\mathbb{R}^d}
\leq \|A\| \| \tilde{x}(t)\|_{\mathbb{R}^d}
+ \|B\| \|E_{x(t)} \tilde{f}(t)\|_{\mathbb{R}^d}, \notag \\
&\leq \|A\| \| \tilde{x}(t)\|_{\mathbb{R}^d}
+ \tilde{C} \|B\| \| \tilde{f}(t) \|_{H},\\
& \leq \|A\| \|\tilde{x}\|_{L^\infty((0,\infty);\mathbb{R}^d)}
+ \tilde{C} \|B\| \| \tilde{f} \|_{L^\infty((0,\infty),H)}. \notag
\end{align}
Since $\tilde{x}\in L^\infty((0,\infty);\mathbb{R}^d) \cap L^2((0,\infty);\mathbb{R}^d)$ and $\dot{\tilde{x}} \in L^\infty((0,\infty);\mathbb{R}^d)$, we conclude by generalizations of Barbalat's lemma \cite{Farkas2016Variations} that $\tilde{x}(t) \rightarrow 0$ as $t \to \infty$.
\end{proof}
It is evident that Theorem \ref{th:stability} yields stability over the RKHS and convergence of the state estimate error to zero, results that are analogous to typical results for conventional ODE systems. As expected, conclusions about the convergence of the function estimates $\hat{f}$ to $f$ are more difficult to obtain, and they rely on {\em persistency of excitation} conditions that are suitably extended to the RKHS framework.
\begin{mydef}
We say that the plant in the RKHS Equation ~\ref{eq:rkhs_plant} is {\em strongly persistently exciting} if there exist constants $\Delta>0$, $\gamma>0$, and $T>0$ such that for all $f\in H$ with $\|f\|_H=1$ and all $t>T$,
$$
\int_{t}^{t+\Delta}
\left(E^*_{x(\tau)}E_{x(\tau)}f,f\right)_H d\tau \geq \gamma.
$$
\end{mydef}
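In a finite dimensional subspace spanned by kernel sections $\{\kappa_{x_i}\}_{i=1}^n$ this condition can be examined numerically. Writing $f=\sum_i a_i \kappa_{x_i}$, the reproducing property gives $\left(E^*_{x}E_{x}f,f\right)_H=f(x)^2=(K(x)^Ta)^2$ with $K(x)=[\kappa(x,x_1),\ldots,\kappa(x,x_n)]^T$, so the condition amounts to a positive lower bound on the smallest generalized eigenvalue of $M=\int_t^{t+\Delta}K(x(\tau))K(x(\tau))^T\,d\tau$ with respect to the Gram matrix $G$. The sketch below estimates this constant by quadrature; the trajectory, kernel, and centers are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Estimate the persistence-of-excitation constant over a window [t, t+Delta]
# for f restricted to span{kappa_{x_1}, ..., kappa_{x_n}}.  The kernel width,
# the centers, and the trajectory s(tau) are assumptions for illustration.
sigma = 0.2
kernel = lambda s, t: np.exp(-(s - t) ** 2 / (2 * sigma ** 2))
centers = np.linspace(0.0, 1.0, 10)
G = kernel(centers[:, None], centers[None, :])      # Gram matrix

taus = np.linspace(0.0, 5.0, 2001)                  # quadrature nodes on the window
s_traj = 0.5 + 0.5 * np.sin(2 * np.pi * taus / 5.0) # assumed trajectory s(tau)
M = np.zeros((10, 10))
for s in s_traj:
    K = kernel(s, centers)
    M += np.outer(K, K) * (taus[1] - taus[0])       # quadrature for int K K^T dtau

gamma = eigh(M, G, eigvals_only=True)[0]            # smallest generalized eigenvalue
print("estimated PE constant:", gamma)
\end{verbatim}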
As in the consideration of ODE systems, persistency of excitation is sufficient to guarantee convergence of the function parameter estimates to the true function.
\begin{theorem}
\label{th:PE}
Suppose that the plant in Equation \ref{eq:rkhs_plant} is strongly persistently exciting and that either (i) the function $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$, or (ii) the matrix $-A$ is coercive in the sense that $(-Av,v)\geq c\|v\|^2$ for all $v\in\mathbb{R}^d$ and $\Gamma =P=I_d$. Then the parameter function error $\tilde{f}$ converges strongly to zero,
$$
\lim_{t\rightarrow \infty} \| f-\hat{f}(t) \|_H = 0.
$$
\end{theorem}
\begin{proof}
We begin by assuming that $(i)$ holds.
In the proof of Theorem \ref{th:stability} it is shown that $V$ is bounded below and non-increasing, and therefore approaches a limit
$$
\lim_{t\rightarrow \infty} V(t)=V_\infty< \infty.
$$
Since $\tilde{x}(t) \rightarrow 0$ as $t\rightarrow \infty$, we can conclude that
$$
\lim_{t\rightarrow \infty} \| \tilde{f}(t) \|^2_H \lesssim V_\infty.
$$
Suppose that $V_\infty \not = 0.$ Then there exists a positive, increasing sequence of times $\left\{ t_k\right \}_{k\in \mathbb{N}}$ with $\lim_{k\rightarrow \infty} t_k = \infty$ and some constant $\delta>0$
such that
$$
\| \tilde{f}(t_k)\|^2_H \ge \delta
$$
for all $k\in\mathbb{N}$.
Since the RKHS is persistently exciting, we can write
\begin{align*}
\int^{t_k+\Delta}_{t_k} \left(E^{*}_{x(\tau)}E_{x(\tau)}\tilde{f}(t_k),\tilde{f}(t_k)\right)_Hd\tau \gtrsim \gamma \| \tilde{f}{(t_k)}\|_{H}^{2} \geq \gamma \delta
\end{align*}
for each $k\in \mathbb{N}$. By the reproducing property of the RKHS, we can then see that
\begin{align*}
\gamma \delta \leq \gamma \| \tilde{f}(t_k) \|_H^2 &\lesssim \int_{t_k}^{t_k + \Delta} \left ( \kappa_{x(\tau)}, \tilde{f}(t_k) \right )_H^2 d\tau\\
&\leq \|\tilde{f}(t_k)\|_H^2 \int_{t_k}^{t_k + \Delta} \|\kappa_{x(\tau)} \|_H^2 d\tau \\
&= \| \tilde{f}(t_k) \|_H^2
\int_{t_k}^{t_k+\Delta} \left (\kappa_{x(\tau)},\kappa_{x(\tau)}\right )_H d\tau \\
& = \| \tilde{f}(t_k) \|_H^2
\int_{t_k}^{t_k+\Delta} \kappa(x(\tau),x(\tau)) d\tau.
\end{align*}
Since $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$ by assumption, the integral $\int_{t_k}^{t_k+\Delta} \kappa(x(\tau),x(\tau))\, d\tau \rightarrow 0$ as $k\rightarrow \infty$, and we obtain the contradiction $0<\gamma\delta \leq 0$. We conclude therefore that $V_\infty=0$ and $\lim_{t\rightarrow \infty} \|\tilde{f}(t)\|_H = 0$.
We outline the proof when (ii) holds, which is based on slight modifications of arguments that appear in \cite{d1993,bsdr1997,dr1994,dr1994pe,bdrr1998,kr1994} that treat a different class of infinite dimensional nonlinear systems whose state space is cast in terms of a Gelfand triple.
Perhaps the simplest analysis follows from \cite{bsdr1997} for this case. Our hypothesis that $\Gamma=P=I_d$ reduces Equations \ref{eq:eom_rkhs} to the form of Equations 2.20 in \cite{bsdr1997}. The assumption that $-A$ is coercive in our theorem implies the coercivity assumption (A4) in \cite{bsdr1997} holds. If we define $\mathbb{X}=\mathbb{Y}:=\mathbb{R}^n \times H$, then it is clear that the imbeddings $\mathbb{Y} \rightarrow \mathbb{X} \rightarrow \mathbb{Y}$ are continuous and dense, so that they define a Gelfand triple. Because of the trivial form of the Gelfand triple in this case, it is immediate that the G\aa rding inequality holds in Equation 2.17 in \cite{bsdr1997}.
We identify $BE_{x(t)}$ as the control influence operator $\mathcal{B}^*(\overline{u}(t))$ in \cite{bsdr1997}.
Under these conditions, Theorem ~\ref{th:PE} follows from Theorem 3.4 in \cite{bsdr1997} as a special case.
\end{proof}
\section{Finite Dimensional Approximations}
\label{sec:finite}
\subsection{Convergence of Finite Dimensional Approximations}
The governing system in Equations \ref{eq:eom_rkhs} constitutes a distributed parameter system since the functions $\tilde{f}(t)$ evolve in the infinite dimensional space $H$. In practice these equations must be approximated by some finite dimensional system. Let $\{H_j\}_{j\in\mathbb{N}_0} \subseteq H$ be a nested sequence of subspaces. Let $\Pi_j$ be a collection of approximation operators $
\Pi_j:{H}\rightarrow {H}_j$ such that $\lim_{j\to \infty}\Pi_j f = f$ for all $f\in H$ and $\sup_{j\in \mathbb{N}_0} \|\Pi_j\| \leq C $ for a constant $C > 0$. Perhaps the most evident example of such a collection chooses $\Pi_j$ as the $H$-orthogonal projection onto $H_j$ for a dense collection of subspaces $\{H_j\}$. It is also common to choose $\Pi_j$ as a uniformly bounded family of quasi-interpolants \cite{devore1998}. We next construct finite dimensional approximations $\hat{x}_j$ and $\hat{f}_j$ of the online estimation equations:
\begin{align}
\dot{\hat{x}}_j(t) & = A\hat{x}_j(t) +
B E_{x(t)} \Pi^*_j \hat{f}_j(t), \label{eq:approx_on_est1} \\
\dot{\hat{f}}_j(t) & = \Gamma_j^{-1}\left ( B E_{x(t)} \Pi^*_j \right)^* P\tilde{x}_j(t)
\label{eq:approx_on_est2}
\end{align}
with $\tilde{x}_j:=x-\hat{x}_j$.
It is important to note that in the above equations $
\Pi_j:{H}\rightarrow {H}_j$ and $\Pi_j^*:{H}_j\rightarrow {H}$.
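For kernel subspaces $H_j=\operatorname{span}\{\kappa_{x_1},\ldots,\kappa_{x_n}\}$ the $H$-orthogonal projection has a simple concrete form: since $(\kappa_{x_i},f)_H=f(x_i)$ by the reproducing property, the coefficients of $\Pi_j f$ solve the Gram system $Gc=[f(x_1),\ldots,f(x_n)]^T$. The brief sketch below illustrates this; the Gaussian kernel, the centers, and the test function are assumptions made for illustration.
\begin{verbatim}
import numpy as np

# H-orthogonal projection onto span{kappa_{x_1}, ..., kappa_{x_n}}: the
# coefficients solve G c = [f(x_1), ..., f(x_n)]^T by the reproducing property.
def project(f, centers, kernel):
    G = kernel(centers[:, None], centers[None, :])   # Gram matrix G_il = kappa(x_i, x_l)
    c = np.linalg.solve(G, f(centers))               # coefficients of Pi_j f
    return lambda s: c @ kernel(centers, s)          # Pi_j f evaluated pointwise

gauss = lambda s, t, sig=0.15: np.exp(-(s - t) ** 2 / (2 * sig ** 2))
centers = np.linspace(0.0, 1.0, 10)
Pf = project(np.sin, centers, gauss)
print(abs(Pf(0.3) - np.sin(0.3)))                    # small pointwise projection error
\end{verbatim}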
\begin{theorem}
Suppose that $x \in C([0,T],\mathbb{R}^d)$ and that the embedding $i:H \to C(\Omega)$ is uniform in the sense that
\begin{equation}
\label{eq:uniform_embedding_j}
\|f\|_{C(\Omega)}\equiv \|if\|_{C(\Omega)} \leq C\|f\|_H.
\end{equation}
Then for any $T>0$,
\begin{align*}
\| \hat{x} - \hat{x}_j\|_{C([0,T];\mathbb{R}^d)} &\rightarrow 0,\\
\|\hat{f} - \hat{f}_j\|_{C([0,T];H)} &\rightarrow 0,
\end{align*}
as $j\rightarrow \infty$.
\end{theorem}
\begin{proof}
Define the operators $\Lambda(t):= B E_{x(t)}:H\rightarrow \mathbb{R}^d$ and for each $t\geq 0$, introduce the measures of state estimation error $\overline{x}_j:=\hat{x}-\hat{x}_j$, and define the function estimation error $\overline{f}_j
=\hat{f}-\hat{f}_j$.
Note that $\tilde{x}_j:=x-\hat{x}_j=x-\hat{x} + \hat{x}-\hat{x}_j=\tilde{x}+ \overline{x}_j$.
The time derivative of the error induced by approximation of the estimates can be expanded as follows:
\begin{align*}
&\frac{1}{2} \frac{d}{dt}\left (
( {\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + ({\overline{f}}_j,{\overline{f}}_j )_H
\right ) =
( \dot{\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + (\dot{\overline{f}}_j,{\overline{f}}_j )_H
\\
&= (A\overline{x}_j + \Lambda \overline{f}_j , \overline{x}_j)_{\mathbb{R}^d} +
\left (
\left (\Gamma^{-1}-\Pi_j^*\Gamma_j^{-1}\Pi_j \right )
\Lambda^*P \tilde{x}, \overline{f}_j
\right )_H
-\left (\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P \overline{x}_j,\overline{f}_j \right)_H
\\
&\leq C_A \| \overline{x}_j \|^2_{\mathbb{R}^d} + \|\Lambda\| \| \overline{f}_j \|_{H} \| \overline{x}_j \|_{\mathbb{R}^d} \\
&\quad \quad
+ \| \Gamma^{-1}
(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|_{H} \|\overline{f}_j \|_H
+\left \|
\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P
\right \| \|\overline{x}_j\|
\|\overline{f}_j \|
\\
& \leq
C_A \| \overline{x}_j \|_{\mathbb{R}^d}^2 + \frac{1}{2}
\|\Lambda\| \left (
\| \overline{f}_j \|_{H}^2
+ \| \overline{x}_j \|_{\mathbb{R}^d}^2
\right )
+ \frac{1}{2}\|\Pi^*_j \Gamma_j^{-1} \Pi_j\|
\| \Lambda^*\| \|P\| \left ( \|\overline{x}_j\|^2_{\mathbb{R}^d} + \| \overline{f}_j\|_H^2 \right )
\\
&\quad \quad
+ \frac{1}{2} \left (
\|\Gamma^{-1}
(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|^2_{H}
+
\|\overline{f}_j \|^2_H
\right) \\
& \leq
\frac{1}{2} \|\Gamma^{-1} \|^2 \| \Lambda^*\|^2 \|P\|^2
\| I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j \|^2\|\tilde{x}\|^2_{\mathbb{R}^d}
\\
&\quad \quad
+\left (C_A + \frac{1}{2} \|\Lambda\|
+ \frac{1}{2} C_B \|\Lambda^*\| \|P\|
\right ) \|\overline{x}_j\|^{2}_{\mathbb{R}^d}
+
\frac{1}{2} \left ( \|\Lambda\| + 1
+ \frac{1}{2} C_B \|\Lambda^*\| \|P\|\right) \|\overline{f}_j\|^{2}_H
\end{align*}
We know that $\|\Lambda(t)\|=\|\Lambda^*(t)\|$ is bounded uniformly in time from the assumption that $H$ is uniformly embedded in $C(\Omega)$.
We next consider the operator error that manifests in the term $(\Gamma^{-1} - \Pi^*_j \Gamma_j^{-1} \Pi_j)$. For any $g\in H$ we have
\begin{align*}
\| (\Gamma^{-1} - \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H & =
\| \Gamma^{-1}( I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H \\
&\leq
\| \Gamma^{-1} \|
\|\left (\Pi_j + (I-\Pi_j)\right )( I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H \\
&\lesssim \| I-\Pi_j \| \|g\|_H.
\end{align*}
This final inequality follows since $\Pi_j(I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)=0$ and
$\Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j\equiv\Gamma \Pi^*_j \left (\Pi_j \Gamma \Pi_j^* \right)^{-1} \Pi_j $ is uniformly bounded.
We then can write
\begin{align*}
\frac{d}{dt}\left (
\|\overline{x}_j\|^2_{\mathbb{R}^d} + \|\overline{f}_j\|^2_H
\right )
&\leq C_1 \| I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j \|^2 \\
&\quad \quad+ C_2 \left (\|\overline{x}_j\|^2_{\mathbb{R}^d} + \|\overline{f}_j\|^2_H \right )
\end{align*}
where $C_1,C_2>0$. We integrate this inequality from $0$ to $t \leq T$, bound the resulting integral by the integral over $[0,T]$, and obtain
\begin{align*}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq
\|\overline{x}_j(0)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(0)\|^2_H \\
&
+ C_1T \| I-\Gamma\Pi_j^*\Gamma_j^{-1}\Pi_j \|^2 \\
&+ C_2\int_0^T \left (
\|\overline{x}_j(\tau)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(\tau)\|^2_H
\right ) d\tau
\end{align*}
We can always choose $\hat{x}(0) = \hat{x}_j(0)$, so that $\overline{x}_j(0) = 0$. If we choose $\hat{f}_j(0):=\Pi_j\hat{f}(0)$ then,
\begin{align*}
\|\overline{f}_j(0)\| &= \|\hat{f}(0)-\Pi_j\hat{f}(0)\|_H\\
&\leq \|I-\Pi_j\|_H \|\hat{f}(0)\|_H.
\end{align*}
The term that is constant in time can be bounded as $C_1T \| I-\Gamma\Pi_j^* \Gamma_j^{-1} \Pi_j \|^2 \leq C_3 \|I-\Pi_j\|^2_H$, so that
\begin{align}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq C_4\|I-\Pi_j\|^2_H+ C_2\int_0^T \left (
\|\overline{x}_j(\tau)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(\tau)\|^2_H
\right ) d\tau
\label{eq:gron_last}
\end{align}
Setting $\alpha(t):=C_4\|I-\Pi_j\|^2_H$ and applying Gronwall's inequality to Equation \ref{eq:gron_last}, we get
\begin{align}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq \alpha(t) e^{C_2 T}
\end{align}
As $j\to \infty$ we get $\alpha(t) \to 0$, which implies $\overline{x}_j(t)\to 0$ and $\overline{f}_j(t)\to 0$.
Therefore the finite dimensional approximations converge to the infinite dimensional states in $\mathbb{R}^d \times H$.
\end{proof}
\section{Numerical Simulations}
\label{sec:numerical}
\begin{figure}
\centering
\includegraphics[scale=0.3]{Figure1parta}
\hspace{1cm}
\includegraphics[scale=0.3]{Figure1partb}
\captionsetup{justification=justified,margin=1cm}
\caption{Experimental setup and definition of basis functions}
\label{fig:Model}
\end{figure}
A schematic representation of a quarter car model consisting of a chassis, suspension, and road measuring device is shown in Fig.~\ref{fig:Model}. In this simple model the displacements of the car suspension and chassis are $x_1$ and $x_2$, respectively. The arc length $s$ measures the distance along the track that the vehicle follows. The equation of motion for the two DOF model has the form
\begin{equation}
M\ddot{x}(t)+C\dot{x}(t)+Kx(t)=Bf(s(t))
\end{equation}
with the mass matrix $M \in \mathbb{R}^{2\times2}$, the stiffness matrix $K \in \mathbb{R}^{2\times2}$, the damping matrix $C \in \mathbb{R}^{2\times2}$, and the control influence vector $B \in \mathbb{R}^{2\times 1}$ in this example. The road profile is denoted by the unknown function $f:\mathbb{R} \to \mathbb{R}$. For simulation purposes, the car is assumed to traverse a circular path of radius $R$, so that we restrict attention to periodic road profiles $f : [0,R]\to \mathbb{R}$. To illustrate the methodology, we first assume that the unknown function $f$ is restricted to the class of uncertainty mentioned in Equation~\ref{eq:e2} and therefore can be approximated as
\begin{equation}
f(\cdot)=\sum_{i=1}^n{\alpha_i^*k_{x_i}(\cdot)}
\end{equation}
where $n$ is the number of basis functions, $\alpha_i^*$ are the true unknown coefficients to be estimated, and $k_{x_i}(\cdot)$ are kernel basis functions over the circular domain.
Hence the state space equation can be written in the form
\begin{equation}
\dot{x}(t)=Ax(t)+B\sum_{i=1}^n{\alpha_i^*k_{x_i}(s(t))}.
\label{eq:num_sim}
\end{equation}
where the state vector $x = [\dot{x}_1,x_1,\dot{x}_2,x_2]^T$, the system matrix $A\in \mathbb{R}^{4 \times 4}$, and the control influence matrix $B \in \mathbb{R}^{4 \times 1}$.
For the quarter car model shown in Fig. \ref{fig:Model} we derive the matrices,
$$
A=\begin{bmatrix}
\frac{-c_2}{m_1} &\frac{-(k_1+k_2)}{m_1} &\frac{c_2}{m_1} &\frac{k_2}{m_1}\\
1 &0 &0 &0\\
\frac{-c_2}{m_2} &\frac{k_2}{m_2} &\frac{-c_2}{m_2} &\frac{-k_2}{m_2}\\
0 &0 &1 &0
\end{bmatrix}
\quad \text{and} \quad
B=\begin{bmatrix}
\frac{k_1}{m_1}\\
0\\
0\\
0
\end{bmatrix}.
$$
Note that if we augment the state to be $\{x_1,x_2,x_3,x_4,s\}$ and append an ODE that specifies $\dot{s}(t)$ for $t\in \mathbb{R}^+$, Equations~\ref{eq:num_sim} can be written in the form of Equations~\ref{eq:simple_plant}. Then the finite dimensional set of coupled ODEs for the adaptive estimation problem can be written in terms of the plant dynamics, the estimator equation, and the learning law, which are of the form shown in Equations \ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3}, respectively.
\subsection{Synthetic Road Profile}
The constants in the equation are initialized as follows: $m_1=0.5$ kg, $m_2=0.5$ kg, $k_1=50000$ N/m, $k_2=30000$ N/m and $c_2=200$ Ns/m, $\Gamma=0.001$.
The radius of the path traversed $R=4$ m, the road profile to be estimated is assumed to have the shape $f(\cdot)= \kappa\sin(2\pi \nu (\cdot))$ where $\nu =0.04$ Hz and $\kappa=2$.
Thus our adaptive estimation problem is formulated for a synthetic road profile in the RKHS $H = \overline{\operatorname{span}\{k_x(\cdot)\mid x\in \Omega\}}$ with $k_x(\cdot)=e^\frac{-\|x-{\cdot} \|^2}{2\sigma^2 }$.
The radial basis functions, each with standard deviation $\sigma=50$, span over the range of $25^\circ$ with their centers $s_i$ evenly separated along the arc length. It is important to note that we have chosen a scattered basis that can be located at any collection of centers $\{s_i\}_{i=1}^{n}\subseteq \Omega$, but uniformly spaced centers are selected to illustrate the convergence rates.
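A minimal end-to-end sketch of the resulting finite dimensional estimator is given below. The plant and estimator equations and the gradient learning law follow the structure of Equations \ref{eq:approx_on_est1}--\ref{eq:approx_on_est2}, with the RKHS Gram weighting folded into the scalar gain; the kernel width, the arc-length rate $\dot{s}$, the weight $Q=I$ in the Lyapunov equation, the number of centers, and the time horizon are assumptions chosen only for illustration and do not reproduce the figures below.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import solve_ivp

# Quarter-car constants from the text; state ordering [x1dot, x1, x2dot, x2].
m1, m2, k1, k2, c2, gamma = 0.5, 0.5, 5.0e4, 3.0e4, 200.0, 0.001
R, amp, nu = 4.0, 2.0, 0.04
A = np.array([[-c2/m1, -(k1 + k2)/m1,  c2/m1,  k2/m1],
              [  1.0,    0.0,           0.0,    0.0 ],
              [-c2/m2,   k2/m2,        -c2/m2, -k2/m2],
              [  0.0,    0.0,           1.0,    0.0 ]])
B = np.array([k1/m1, 0.0, 0.0, 0.0])
P = solve_continuous_lyapunov(A.T, -np.eye(4))       # A^T P + P A = -Q, with Q = I (assumed)

n, sigma = 40, 1.0                                   # number of centers, kernel width (assumed)
centers = np.linspace(0.0, 2.0*np.pi*R, n, endpoint=False)
kvec = lambda s: np.exp(-(s - centers)**2 / (2.0*sigma**2))
road = lambda s: amp*np.sin(2.0*np.pi*nu*s)          # synthetic road profile f

speed = 1.0                                          # arc-length rate ds/dt (assumed)
def rhs(t, z):
    x, xhat, alpha = z[:4], z[4:8], z[8:]
    s = (speed*t) % (2.0*np.pi*R)
    K = kvec(s)
    dx     = A @ x    + B*road(s)                    # plant
    dxhat  = A @ xhat + B*(alpha @ K)                # estimator
    dalpha = (1.0/gamma) * K * (B @ P @ (x - xhat))  # gradient learning law
    return np.concatenate([dx, dxhat, dalpha])

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(8 + n), method="LSODA", max_step=1e-3)
alpha_T = sol.y[8:, -1]
print(max(abs(alpha_T @ kvec(s) - road(s)) for s in centers))   # road estimate error
\end{verbatim}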
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{rbf_road}
\caption{Road surface estimates for $n=\{10,20,\cdots,100\}$}
\label{fig:Sine Road}
\end{figure}
Fig.~\ref{fig:Sine Road} shows the finite dimensional estimates $\hat{f}$ of the road and the true road surface $f$ for different numbers of basis kernels $n\in\{10,20,\ldots,100\}$.
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=.5\textwidth]{L2_example}
&
\includegraphics[width=.5\textwidth]{C_error_example}\\
\end{tabular}
\caption{Convergence rates using Gaussian kernel for synthetic data}
\label{fig:logsup}
\end{figure}
The plots in Fig.~\ref{fig:logsup} show the rate of convergence of the $L^2$ error and the $C(\Omega)$ error with respect to the number of basis functions. The {\em log} along the axes in the figures refers to the natural logarithm unless explicitly specified.
\subsection{Experimental Road Profile Data}
The road profile to be estimated in this subsection is based on the experimental data obtained from the Vehicle Terrain Measurement System shown in Fig.~\ref{fig:circle}. The constants in the estimation problem are initialized to the same numerical values as in previous subsection.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{Road_Run1}
&
\includegraphics[width=0.4\textwidth]{Circle}\\
{Longitudinal Elevation Profile.}
&
{Circular Path followed by VTMS.}
\end{tabular}
\caption{Experimental Data From VTMS.}
\label{fig:circle}
\end{figure}
In the first study in this section the adaptive estimation problem is formulated in the RKHS $H = \overline{\operatorname{span}\{k_x(\cdot)\mid x\in \Omega\}}$ with $k_x(\cdot)=e^\frac{-\|x-{\cdot}\|^2}{2\sigma^2 }$. The radial basis functions, each with standard deviation $\sigma=50$, have their centers $\{s_i\}_{i=1}^{n}\subseteq \Omega$ evenly separated along the arc length. This is repeated for kernels defined using B-splines of first order and second order, respectively.
Fig.~\ref{fig:Kernels} shows the finite dimensional estimates of the road and the true road surface $f$ for data representing a single lap around the circular track. The finite dimensional estimates $\hat{f}_n$ are plotted for different numbers of basis kernels $n\in\{35,50,\ldots,140\}$ using the Gaussian kernel as well as the second order B-splines.
The finite dimensional estimates $\hat{f}_n$ of the road profile and the true road profile $f$ for data representing multiple laps around the circular track are plotted for the first order B-splines in Fig.~\ref{fig:Lsplines Road}. The plots in Fig.~\ref{fig:sup_error_compare} show the rate of convergence of the $L^2$ error and the $C(\Omega)$ error with respect to the number of basis functions.
It is seen that the rate of convergence for the second order B-splines is better than that for the other kernels used in these examples. This is consistent with the expectation that smoother kernels yield better convergence rates.
Also, the condition number of the Grammian matrix varies with $n$, as illustrated in Table~\ref{table:1} and Fig.~\ref{fig:conditionnumber}. This is an important factor to consider when choosing a specific kernel for the RKHS embedding technique, since the sensitivity of numerical solutions of linear systems is controlled by the condition number, and the implementation of the RKHS embedding method requires such a solution, which depends on the Grammian matrix of the kernel bases, at each time step. We see that the condition number of Grammian matrices for exponentials is $\mathcal{O}(10^{16})$ greater than the corresponding matrices for splines. It is therefore expected that the use of exponentials could suffer from a severe loss of accuracy as the dimensionality increases. The development of preconditioning techniques for Grammian matrices constructed from radial basis functions to address this problem is an area of active research.
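The qualitative trend in Table~\ref{table:1} can be reproduced with a short computation of the Gram matrix condition number on evenly spaced centers; in the sketch below the domain length, the Gaussian width, and the use of a linear hat section in place of the B-spline kernels are assumptions, so the numbers will not match the table exactly.
\begin{verbatim}
import numpy as np

# Condition number of the Gram matrix G_ij = k(s_i, s_j) for a Gaussian kernel
# and a linear ("hat") section on evenly spaced centers.  Domain length, sigma,
# and the hat support width are illustrative assumptions.
gauss = lambda s, t, sigma=50.0: np.exp(-(s - t)**2 / (2.0*sigma**2))
def hat(s, t, h):
    return np.maximum(0.0, 1.0 - np.abs(s - t)/h)    # support width 2h around t

L = 25.0
for n in range(10, 101, 10):
    centers = np.linspace(0.0, L, n)
    G_gauss = gauss(centers[:, None], centers[None, :])
    G_hat = hat(centers[:, None], centers[None, :], h=2.0*L/(n - 1))
    print(n, np.linalg.cond(G_gauss), np.linalg.cond(G_hat))
\end{verbatim}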
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.4 \textwidth]{Exp_RBF_Road}
&
\includegraphics[width = 0.4 \textwidth]{Bsplines_Road}\\
{Road surface estimates for Gaussian kernels}
&
{Road surface estimate for second-order B-splines}
\end{tabular}
\caption{Road surface estimates for single lap}
\label{fig:Kernels}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 0.4 \textwidth]{LSpline_Road}
\caption{Road surface estimate using first-order B-splines}
\label{fig:Lsplines Road}
\end{figure}
\begin{center}
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.5\textwidth]{Compare_L2_Error}
&
\includegraphics[width = 0.5\textwidth]{Compare_C_Error}\\
\end{tabular}
\caption{Convergence rates for different kernels}
\label{fig:sup_error_compare}
\end{figure}
\end{center}
\begin{center}
\centering
\begin{table}[H]
\centering
\begin{tabular}{|p{1cm}|p{2.2cm}|p{2.2cm}|p{2.2cm}|}
\hline
No. of Basis Functions & Condition No. (First order B-Splines) $\times 10^3$ & Condition No.(Second order B-Splines) $\times 10^4$ & Condition No.(Gaussian Kernels) $\times 10^{20}$\\
\hline \hline
10 & 0.6646 & 0.3882 & 0.0001 \\
20 & 1.0396 & 0.9336 & 0.0017 \\
30 & 1.4077 & 1.5045 & 0.0029 \\
40 & 1.7737 & 2.0784 & 0.0074 \\
50 & 2.1388 & 2.6535 & 0.0167\\
60 & 2.5035 & 3.2293 & 0.0102\\
70 & 2.8678 & 3.8054& 0.0542\\
80 & 3.2321 & 4.3818& 0.0571\\
90 & 3.5962 & 4.9583& 0.7624\\
100 & 3.9602 & 5.5350& 1.3630\\
\hline
\end{tabular}
\caption{Condition number of Grammian Matrix vs Number of Basis Functions}
\label{table:1}
\end{table}
\end{center}
\begin{figure}[H]
\centering
\includegraphics[height=0.3\textheight,width=0.65\textwidth]{Conditon_Number}
\caption{Condition Number of Grammian Matrix vs Number of Basis Functions}
\label{fig:conditionnumber}
\end{figure}
\vspace{-1cm}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we introduced a novel framework based on the use of RKHS embedding to study online adaptive estimation problems. The applicability of this framework to estimation problems that involve high dimensional scattered data approximation provides the motivation for the theory and algorithms described in this paper. A brief overview of the background theory on RKHS enables rigorous derivation of the results in Sections \ref{sec:existence} and \ref{sec:finite}. In this paper we derive (1) sufficient conditions for the existence and uniqueness of solutions to the RKHS embedding problem, (2) the stability of the equilibrium and convergence of the state estimation error, and (3) the convergence of the finite dimensional approximate solutions to the solution of the infinite dimensional equations. To illustrate the utility of this approach, a simplified numerical example of adaptive estimation of a road profile is studied and the results are critically analyzed. It would be of further interest to see the ramifications of using multiscale kernels to achieve semi-optimal convergence rates for functions in a scale of Sobolev spaces. It would likewise be important to extend this framework to adaptive control problems and examine the consequences of {\em persistency of excitation} conditions in the RKHS setting, and further to extend the approach to adaptively generate bases over the state space.
\section{Introduction}
Given a closed Riemannian manifold $(M,g)$ we consider the
conformal class of the metric $g$, $[g]$. The Yamabe
constant of $[g]$, $Y(M,[g])$, is the
infimum of the normalized total scalar curvature functional on
the conformal class. Namely,
$$Y(M,[g])= \inf_{h\in [g]}
\frac{\int {\bf s}_h \ dvol(h)}{(Vol(M,h))^{\frac{n-2}{n}}},$$
\noindent
where ${\bf s}_h$ denotes the scalar curvature of the metric $h$
and $dvol(h)$ its volume element.
If one writes metrics conformal to $g$ as $h=f^{4/(n-2)} \ g$,
one obtains the expression
$$Y(M,[g])= \inf_{f\in C^{\infty} (M)}
\frac{\int ( \ a_n {\| \nabla f \|}_g^2 + f^2 {\bf s}_g \ )
\ dvol(g)}{{\| f\|}_{p_n}^2},$$
\noindent
where $a_n =4(n-1)/(n-2) $ and $p_n =2n/(n-2)$. It is a fundamental
result
on the subject that the infimum is actually achieved
(\cite{Yamabe, Trudinger, Aubin, Schoen}). The functions $f$ achieving
the infimum are called {\it Yamabe functions} and the corresponding metrics
$f^{4/(n-2)} \ g$ are called {\it Yamabe metrics}. Since the critical points
of the total scalar curvature functional restricted to a conformal
class of metrics are precisely the metrics of constant scalar
curvature in the conformal class, Yamabe metrics are metrics of
constant scalar curvature.
It is well known that by considering functions supported in a small
normal neighborhood of a point one can prove that
$Y(M^n,[g]) \leq Y(S^n ,[g_0 ])$, where $g_0$ is the round metric
of radius one on the sphere and $(M^n ,g)$ is any closed n-dimensional
Riemannian manifold (\cite{Aubin}).
We will use the notation $Y_n = Y(S^n ,[g_0 ])$ and
$V_n =Vol(S^n ,g_0 )$. Therefore $Y_n =n(n-1)V_n^{\frac{2}{n}}$.
Then one
defines the {\it Yamabe invariant} of a closed manifold $M$
\cite{Kobayashi, Schoen2} as
$$Y(M)=\sup_g Y(M,[g]) \leq Y_n .$$
It follows that $Y(M)$ is positive if and only if $M$ admits a
metric of positive scalar curvature. Moreover, the sign of
$Y(M)$ determines the technical difficulties in understanding
the invariant. When the Yamabe constant of a conformal class
is non-positive there is a unique metric (up to multiplication
by a positive constant) of constant scalar curvature in the
conformal class and if $g$ is any metric in the conformal
class, the Yamabe constant is bounded from below by
$(\inf_M {\bf s}_g ) \ (Vol(M,g))^{2/n}$. This can be used for instance
to study the behavior of the invariant under surgery and so to
obtain information using cobordism theory \cite{Yun, Petean, Botvinnik}.
Note also that in the non-positive case the Yamabe invariant
coincides with Perelman's invariant \cite{Ishida}.
The previous estimate is no longer true
in the positive case, but one does get a lower bound in the case of
positive Ricci curvature by a theorem of S. Ilias:
if $Ricci(g)\geq \lambda g $
($\lambda >0$) then $Y(M,[g]) \geq n \lambda (Vol(M,g))^{2/n}$
(\cite{Ilias}). Then in order to use this inequality to
find lower bounds on the Yamabe invariant of a closed
manifold $M$ one would try to maximize the volume of the manifold
under some positive lower bound of the Ricci curvature.
Namely, if one denotes ${\bf Rv} (M)= \sup \{ Vol(M,g): Ricci(g)\geq
(n-1) g \} $ then one gets $Y(M) \geq n(n-1) ({\bf Rv} (M))^{2/n}$
(one should define ${\bf Rv} (M) =0$ if $M$ does not admit
a metric of positive Ricci curvature). Very little is known
about the invariant ${\bf Rv} (M)$. Of course, Bishop's inequality
tells us that for any n-dimensional closed manifold
${\bf Rv} (M^n) \leq {\bf Rv} (S^n )$
(which is of course attained by the volume
of the metric of constant sectional curvature 1). Moreover,
G. Perelman \cite{Perelman} proved that there is a
constant $\delta =\delta_n >0$ such that if ${\bf Rv} (M) \geq
{\bf Rv} (S^n ) -\delta_n $ then
$M$ is homeomorphic to $S^n$. Beyond this, results on
${\bf Rv} (M)$ have been obtained by computing Yamabe invariants, so
for instance ${\bf Rv} ({\bf CP}^2 )= 2 \pi^2 $
(achieved by the Fubini-Study
metric as shown by C. LeBrun \cite{Lebrun} and M. Gursky and C.
LeBrun \cite{Gursky}) and ${\bf Rv} ({\bf RP}^3) = \pi^2$ (achieved by the
metric of constant sectional curvature as shown by H. Bray and
A. Neves \cite{Bray}).
Of course, there is no hope to apply the previous comments directly
when the fundamental group of $M$ is infinite. Nevertheless it
seems that even in this case the Yamabe invariant is
realized by conformal classes of metrics which maximize volume
with a fixed positive lower bound on the Ricci curvature
``in a certain sense''. The standard example is $S^{n} \times
S^1$. The fact that $Y(S^n \times S^1 ) =Y_{n+1}$ is one
of the first things we learned about the Yamabe invariant
\cite{Kobayashi, Schoen2}. One way to see this is as follows:
first one notes that $\lim_{T\rightarrow \infty}
Y(S^n \times S^1 ,[g_0 + T^2 dt^2 ])=
Y(S^n \times {\mathbb R}, [g_0 + dt^2 ])$ \cite{Akutagawa}
(the Yamabe constant for a non-compact Riemannian manifold
is computed as the infimum of the Yamabe functional over
compactly supported functions).
But the Yamabe
function for $g_0 + dt^2$ is precisely the conformal factor
between $S^n \times {\mathbb R}$ and $S^{n+1} -\{ S, N \}$. Therefore
one can think of $Y(S^n \times S^1 ) =Y_{n+1}$ as realized
by the positive
Einstein metric on $S^{n+1} -\{ S, N \} $. We will see in this
article that a similar situation occurs for any closed positive
Einstein manifold $(M,g)$ (although we only get the lower
bound for the invariant).
\vspace{.3cm}
Let $(N,h)$ be a closed Riemannian manifold. An
{\it isoperimetric region}
is an open subset $U$ with boundary $\partial U$ such that
$\partial U$ minimizes area among hypersurfaces bounding a
region of volume $Vol(U)$. Given any positive number $s$,
$s<Vol(N,h)$, there exists an isoperimetric region of
volume $s$. Its boundary is a stable constant mean curvature
hypersurface with some singularities of codimension at least 7.
Of course one does not need a closed Riemannian manifold
to consider isoperimetric regions; a priori one only
needs to be able to compute volumes of open subsets and areas
of hypersurfaces. One defines the {\it isoperimetric function}
of $(N,h)$ as $I_h :(0,1) \rightarrow {\mathbb R}_{>0}$ by
$$I_h (\beta) =\inf \{ Vol(\partial U)/Vol(N,h) :
Vol(U,h) = \beta Vol(N,h) \},$$
\noindent
where $Vol(\partial U)$ is measured with the Riemannian metric
induced by $h$ (on the non-singular part of $\partial U$).
Given a closed Riemannian manifold $(M,g)$ we will call
the {\it spherical cone} on $M$ the space $X$ obtained collapsing
$M \times \{0 \} $ and $M\times \{ \pi \}$ in
$M\times [0,\pi ]$ to points $S$ and $N$ (the vertices)
with the metric ${\bf g} =\sin^2 (t)g + dt^2$
(which is a Riemannian metric on $X-\{ S,N \}$). Now if
$Ricci(g) \geq (n-1) g$ one can see that $Ricci({\bf g})
\geq n{\bf g}$. One should compare this with the Euclidean cones
considered by F. Morgan and M. Ritor\'{e} in \cite{Morgan}:
$\hat{g} =t^2 g + dt^2$ for which $Ricci(g) \geq (n-1)g $
implies that $Ricci(\hat{g}) \geq 0$. The importance of these
spherical cones for the study of Yamabe constants is that
if one takes out the vertices the corresponding (non-complete)
Riemannian manifold is conformal to
$M\times {\mathbb R}$. But using the (warped product version) of the
Ros Product Theorem \cite[Proposition 3.6]{Ros} (see
\cite[Section 3]{Morgan2}) and the Levy-Gromov isoperimetric
inequality \cite{Gromov} one can understand isoperimetric
regions in these spherical cones. Namely,
\begin{Theorem} Let $(M^n,g)$ be a compact manifold with
Ricci curvature $Ricci(g) \geq (n-1)g$. Let $(X,{\bf g})$ be
its spherical cone. Then geodesic balls around any of the
vertices are isoperimetric.
\end{Theorem}
But now, since the spherical cone over $(M,g)$
is conformal to $(M\times {\mathbb R} ,
g+ dt^2 )$ we can use the previous result
and symmetrization of a function with respect to the
geodesic balls centered at a vertex to prove:
\begin{Theorem} Let $(M,g)$ be a closed Riemannian manifold of
positive Ricci curvature, $Ricci(g) \geq (n-1)g$ and volume $V$.
Then
$$Y(M\times {\mathbb R} ,[g+dt^2 ]) \geq
(V/V_n )^{\frac{2}{n+1}} \ Y_{n+1} .$$
\end{Theorem}
\vspace{.2cm}
As we mentioned before one of the differences between the positive
and non-positive cases in the study of the Yamabe constant is
the non-uniqueness of constant scalar curvature metrics on
a conformal class with positive Yamabe constant. And the simplest
family of examples of non-uniqueness comes from Riemannian
products. If $(M,g)$ and $(N^n ,h)$ are closed Riemannian manifolds
of constant scalar curvature and ${\bf s}_g$ is positive then
for small $\delta >0$, $\delta g + h$ is a constant scalar
curvature metric on $M \times N$ which cannot be a Yamabe
metric. If $(M,g)$ is Einstein and $Y(M)=Y(M,[g])$ it seems
reasonable that $Y(M\times N)= \lim_{\delta \rightarrow 0}
Y(M\times N ,[ \delta g + h ])$.
Moreover as it is shown in \cite{Akutagawa}
$$ \lim Y(M\times N , [\delta g + h ]) =Y(M\times {\mathbb R}^n,[ g+ dt^2 ]).$$
The only case which is well understood is when $M=S^n$ and $N=S^1$.
Here every Yamabe function is a function of the $S^1$-factor
\cite{Schoen2} and the Yamabe function for $(S^n \times {\mathbb R} , g_0 +
dt^2 )$ is the factor which makes $S^n\times {\mathbb R}$ conformal to
$S^{n+1} -\{ S, N \}$. It seems possible that under
certain conditions on $(M,g)$ the Yamabe functions of
$(M \times {\mathbb R}^n , g+dt^2 )$ depend only on the second
variable. The best case scenario would be that this is true
if $g$ is a Yamabe metric but it seems more attainable the
case when $g$ is Einstein. It is a corollary to the previous
theorem that this is actually true in the case $n=1$. Namely,
using the notation (as in \cite{Akutagawa})
$Y_N (M\times N , g +h)$
to denote the infimum of the $(g+h)$-Yamabe functional restricted
to functions of the $N$-factor we have:
\begin{Corollary} Let $(M^n,g)$ be a closed positive Einstein manifold
with Ricci curvature $Ricci(g)=(n-1)g$ and volume $V$. Then
$$Y(M\times {\mathbb R} , [g+ dt^2 ])=Y_{{\mathbb R}}(M\times {\mathbb R} , g+ dt^2 )=
{\left( \frac{V}{V_n} \right) }^{\frac{2}{n+1}} \ Y_{n+1}.$$
\end{Corollary}
\vspace{.3cm}
As $Y(M\times {\mathbb R} , [g+ dt^2 ]) = \lim_{T\rightarrow \infty }
Y(M\times S^1 ,[g+T dt^2 ])$ it also follows from Theorem 1.2
that:
\begin{Corollary} If $(M^n ,g)$ is a closed Einstein manifold
with $Ricci(g) = (n-1)g$ and volume $V$ then
$$Y(M\times S^1) \geq (V/V_n )^{\frac{2}{n+1}} \ Y_{n+1} .$$
\end{Corollary}
\vspace{.3cm}
So for example using the product metric we get
$$Y(S^2 \times S^2 \times S^1 )\geq {\left(\frac{2}{3}
\right)}^{(2/5)} \ Y_5 $$
\noindent
and using the Fubini-Study metric we get
$$Y({\bf CP}^2 \times S^1 ) \geq {\left(\frac{3}{4} \right)}^{(2/5)}
\ Y_5 .$$
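For the first of these bounds the volume ratio can be verified directly; assuming each $S^2$ factor carries the round metric of radius $1/\sqrt{3}$, so that $Ricci = 3g$ as required for $n=4$, one has
$$Vol\left(S^2 (1/\sqrt{3})\times S^2 (1/\sqrt{3})\right)=\left(\frac{4\pi}{3}\right)^2=\frac{16\pi^2}{9},
\qquad V_4 =\frac{8\pi^2}{3}, \qquad \frac{16\pi^2 /9}{8\pi^2 /3}=\frac{2}{3},$$
\noindent
which gives the factor $(2/3)^{2/5}$ above since $2/(n+1)=2/5$.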
\vspace{.4cm}
{\it Acknowledgements:} The author would like to thank
Manuel Ritor\'{e}, Kazuo Akutagawa and Frank Morgan
for several useful comments on the first drafts of
this manuscript.
\section{Isoperimetric regions in spherical cones}
As we mentioned in the introduction, the isoperimetric
problem for spherical cones (over manifolds with
Ricci curvature $\geq n-1$) is understood using
the Levy-Gromov isoperimetric inequality
(to compare the isoperimetric functions of
$M$ and of $S^n$) and the Ros Product Theorem for warped products
(to compare then the isoperimetric functions of
the spherical cone over $M$ to the isoperimetric function
of $S^{n+1}$).
See for example section 3 of \cite{Morgan2}
(in particular {\bf 3.2} and the remark after it). For the
reader familiar with isoperimetric problems, this should be
enough to understand Theorem 1.1. In this section, for the
convenience of the reader, we will
give a brief outline on these issues. We will mostly
discuss and follow section 3 of \cite{Ros} and ideas
in \cite{Morgan, Montiel} which we think might be useful in
dealing with other problems arising from the study of Yamabe
constants.
Let $(M^n ,g)$ be a closed Riemannian manifold
of volume $V$ and Ricci curvature $Ricci(g) \geq (n-1)g$.
We will
consider $(X^{n+1}, \bf{g}) $ where as a topological space $X$ is the
suspension of $M$ ($X=M\times [0,\pi ]$ with $M\times \{ 0 \}$ and
$M\times \{ \pi \}$ identified to points $S$ and $N$)
and ${\bf g} =\sin^2 (t) \, g +
dt^2$. Of course $X$ is not a manifold (except when $M$ is $S^n$) and
$\bf{g} $ is a Riemannian metric only on $X-\{ S,N \}$.
The following is a standard result in geometric measure theory.
$\bf{Theorem:}$ For any positive number $r< Vol(X)$ there exists
an isoperimetric open subset $U$ of $X$ of volume $r$. Moreover
$\partial U$ is a smooth stable constant mean curvature
hypersurface of $X$ except for a singular piece $\partial_1 U$
which consists of (possibly)
$S$, $N$, and a subset of codimension at least 7.
Let us call $\partial_0 U$ the regular part of $\partial U$,
$\partial_0 U= \partial U - \partial_1 U$. Let
$X_t$, $t\in (-\varepsilon ,\varepsilon )$,
be a variation of $\partial_0 U$ such that the
volume of the enclosed region $U_t$ remains constant.
Let $\lambda (t)$ be the area of $X_t$. Then $\lambda '(0) =0$
and $\lambda ''(0) \geq 0$. The first condition is satisfied
by hypersurfaces of constant mean curvature and the ones
satisfying the second condition are called ${\it stable}$.
If $N$ denotes a normal
vector field to the hypersurface then variations are obtained
by picking a function $h$ with compact support on $\partial_0 U$ and
moving $\partial_0 U$ in the direction of $h \ N$. Then
we have that if the mean of $h$ on $\partial_0 U$
is 0 then $\lambda_h '(0) =0$ and
$\lambda_h ''(0) \geq 0$. This last condition is written as
$$Q(h,h)=-\int_{\partial_0 U} h(\Delta h + (Ricci (N,N) +
\sigma^2 )h ) dvol(\partial_0 U) \geq 0.$$
\noindent
Here we consider $\partial_0 U$ as a Riemannian manifold
(with the induced metric) and use the corresponding Laplacian
and volume element. $\sigma^2$ is the square of the norm of the second
fundamental form.
This was worked out by J. L. Barbosa, M. do Carmo and
J. Eschenburg in \cite{Barbosa,
doCarmo}. As we said before, the function $h$
should a priori have compact support
in $\partial_0 U$ but as shown by F. Morgan and M. Ritor\'{e}
\cite[Lemma 3.3]{Morgan} it is enough that $h$ is bounded
and $h\in L^2 (\partial_0 U)$. This is important in order to study
stable constant mean curvature surfaces on a space like $X$ because
$X$ admits what is called a ${\it conformal}$ vector field $V=
\sin (t) \partial /\partial t$ and the function $h$ one wants to
consider is $h=div (V-{\bf g}(V,N) \ N )$ where $N$ is the unit
normal to the hypersurface (and then $h$ is the divergence of
the tangential part of $V$). This has been used for instance in
\cite{Montiel,Morgan} to classify stable constant mean curvature
hypersurfaces in Riemannian manifolds with a conformal vector field.
When the hypersurface is smooth this function $h$ has mean 0 by
the divergence theorem and one can apply the stability condition.
But when the hypersurface has singularities one would a priori need
the function $h$ to have compact support on the regular part. This
was done by F. Morgan and M. Ritor\'{e} in
\cite[Lemma 3.3]{Morgan}.
We want to prove that the geodesic balls around $S$ are
isoperimetric. One could try to apply the techniques of
Morgan and Ritor\'{e} in \cite{Morgan} and see that they are
the only stable constant mean
curvature hypersurfaces in $X$. This should be possible, and
actually it might be necessary to deal with isoperimetric regions
of more general singular spaces that appear naturally in the study of
Yamabe constants of Riemannian products.
But in this case we will instead
take a more direct approach using the Levy-Gromov
isoperimetric inequality \cite{Gromov} and Ros Product Theorem
\cite{Ros}.
\vspace{.3cm}
The sketch of the proof is as follows: First one has to note that
geodesic balls centered at the vertices {\it produce} the same
isoperimetric function as the one of the round sphere. Therefore
to prove that geodesic balls around the vertices are isoperimetric
is equivalent to prove that the isoperimetric function of ${\bf g}$
is bounded from below by the isoperimetric function of $g_0$. To
do this, given any open subset $U$ of $X$ one considers
its symmetrization
$U^s \subset S^{n+1}$, so that the {\it slices} of $U^s$ are geodesic
balls with the same normalized volumes as the slices of $U$. Then
by the Levy-Gromov isoperimetric inequality we can compare the
normalized areas of the boundaries of the slices. We have to
prove that the normalized area of $\partial U^s$ is at most the
normalized area of $\partial U$.
This follows from
the warped product version of \cite[Proposition 3.6]{Ros}. We will
give an outline following Ros' proof for the Riemannian product case.
We will use the notion of Minkowski
content. This is the bulk of the proof and we will divide it into
Lemma 2.1, Lemma 2.2 and Lemma 2.3.
\vspace{.3cm}
{\it Proof of Theorem 1.1 :}
Let $U\subset X$ be a closed subset.
For any $t\in (0,\pi )$ let
$$U_t =U \cap (M\times \{ t \} ) .$$
Fix any point $E\in S^n$ and let $(U^s )_t$ be the geodesic ball
centered at $E$ with volume
$$Vol((U^s )_t , g_0 ) = \frac{V_n}{V} \ Vol(U_t ,g).$$
\noindent
(recall that $V=Vol(M,g)$ and $V_n = Vol(S^n ,g_0 )$).
Let $U^s
\subset S^{n+1}$ be the corresponding subset (i.e. we consider
$S^{n+1} -\{ S,N \}$ as $S^n \times (0,\pi )$ and $U^s$ is
such that $U^s \cap (S^n \times \{ t \}) $ =$(U^s )_t$.
One might add
$S$ and/or $N$ to make $U^s$ closed and connected). Note
that one can write $(U^s )_t = (U_t )^s = U_t^s$ as long as there
is no confusion (or no difference) on whether we are considering
it as a subset of $S^n$ or as a subset of $S^{n+1}$.
Now
$$Vol(U)=\int_0^{\pi} \sin^n (t) \ Vol(U_t ,g) \ dt $$
$$= \frac{V}{V_n} \int_0^{\pi} \sin^n (t) \ Vol((U^s )_t ,g_0 ) \ dt
= \frac{V}{V_n} Vol(U^s ,g_0 ).$$
Also if $B(r) =M\times [0,r]$ (the geodesic ball of radius
$r$ centered at the vertex at 0) then
$$Vol(B(r))=\int_0^r \sin^n (t) V dt = \frac{V}{V_n}
\int_0^r \sin^n (t) V_n dt = \frac{V}{V_n} Vol (B_0 (r)) \ \ (1)$$
\noindent
where $B_0 (r)$ is the geodesic ball of radius $r$ in the
round sphere. And
$$Vol(\partial B(r))=\sin^n (r) V =\frac{V}{V_n}
Vol(\partial B_0 (r)) \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2).$$
Formulas (1) and (2) tell us that the geodesic balls around the
vertices in $X$ produce the same isoperimetric function as
the round metric $g_0$. Therefore given any open subset $U \subset
X$ we want to compare the area of $\partial U$ with the area
of the boundary
of the geodesic ball in $S^{n+1}$ with the same normalized volume
as $U$.
\vspace{.3cm}
Given a closed set $W$ let $B(W,r)$ be the set of points at distance
at most $r$ from $W$. Then one considers the {\it Minkowski content}
of $W$,
$$\mu^+ (W) = \liminf_{r\rightarrow 0^+} \frac{Vol (B(W,r) ) -Vol(W)}{r}.$$
\noindent
If $W$ is a smooth submanifold with boundary then
$\mu^+ (W) = Vol (\partial W)$. And this is still true if the
boundary has singularities of codimension $\geq 2$ (and finite
codimension 1 Hausdorff measure).
The Riemannian measure on $(S^n ,g_0 )$, normalized to be a
probability measure is what is called a {\it model measure}:
if $D^t$, $t\in (0,1)$ is the family of geodesic balls
(with volume $Vol(D^t )=t$) centered at some fixed point then
they are
isoperimetric regions which are ordered by volume and such
that for any $t$, $ B(D^t ,r) =D^{t'}$ for some $t'$.
See \cite[Section 3.2]{Ros}. The following result follows
directly from the
Levy-Gromov isoperimetric inequality \cite[Appendix C]{Gromov}
and \cite[Proposition 3.5]{Ros} (see the lemma in
\cite[page 77]{Morgan3} for a more elementary proof and point of view
on \cite[Proposition 3.5]{Ros}).
\begin{Lemma} Let $(M,g)$ be a closed Riemannian manifold
of volume $V$ and Ricci curvature $Ricci(g) \geq (n-1) g$.
For any nonempty closed subset $\Omega \subset M$ and
any $r\geq 0$ if $B_{\Omega}$ is a geodesic ball in
$(S^n , g_0 )$ with volume $Vol(B_{\Omega})=(V_n /V)
Vol(\Omega )$ then $Vol(B(B_{\Omega} ,r)) \leq
(V_n /V) Vol(B(\Omega ,r))$.
\end{Lemma}
\begin{proof} Given any closed Riemannian manifold $(M,g)$,
dividing the
Riemannian measure by the volume one obtains a probability
measure which we will denote $\mu_g$.
As we said before, the round metric on the sphere
gives a model measure $\mu_{g_0}$. On the other hand the Levy-Gromov
isoperimetric inequality \cite{Gromov}
says that $I_{\mu_g} \geq I_{\mu_{g_0}}$.
The definition of $B_{\Omega}$ says that $\mu_g (\Omega )=\mu_{g_0}
(B_{\Omega})$ and what we want to prove is that $\mu_g (B(\Omega ,r))
\geq \mu_{g_0}
(B(B_{\Omega} ,r) )$ .
Therefore the
statement of the lemma is precisely \cite[Proposition 3.5]{Ros}.
\end{proof}
Fix a positive constant $\lambda$. Note that the previous lemma
remains unchanged if we replace $g$ and $g_0$ by $\lambda g$
and $\lambda g_0$: the correspondence $\Omega \rightarrow
B_{\Omega}$ is the same and $\mu_{\lambda g} = \mu_g$.
\begin{Lemma} For any $t_0 \in (0,\pi )$
$B((U^s )_{t_0} ,r) \subset (B(U_{t_0} ,r ))^s $.
\end{Lemma}
\begin{proof} First note that the distance from a point
$(x,t) \in X$ to a vertex depends only on $t$ and not on $x$
(or even on $X$). Therefore if $r$ is greater than the
distance $\delta$ between $t_0$ and $0$ or $\pi$
then both sets in the lemma
will contain a geodesic ball of radius $r-\delta$ around the
corresponding vertex.
Also observe that the distance between points $(x,t_0 )$ and
$(y,t)$ depends only on the distance between $x$ and $y$
(and $t$, $t_0$, and the function in the warped product,
which in this case is $\sin$) but not on $x, y$ or $X$.
In particular for any $t$ so that $|t-t_0 |<r$,
$(B((U^s )_{t_0} ,r) )_t$ is a geodesic ball.
We have to prove that for any $t$
$$(B((U^s )_{t_0} ,r) )_t \subset ((B(U_{t_0} ,r ))^s )_t.$$
\noindent
But since they are both geodesic balls centered at the same point
it is enough to prove that the volume of the subset on the left is
less than or equal to the volume of the subset on the right.
By the definition of symmetrization the normalized volume of
$ ((B(U_{t_0} ,r ))^s )_t$ is equal to the normalized volume of
$(B(U_{t_0} ,r ))_t$. But from the previous comment there exists
$\rho >0$ such that, considered as subsets of $M$,
$$(B(U_{t_0} ,r ))_t = B(U_{t_0} ,\rho )$$
\noindent
and, as subsets of $S^n$,
$$(B((U^s )_{t_0} ,r) )_t =B(U^s_{t_0} ,\rho ).$$
The lemma then follows from Lemma 2.1 (and the comments after it).
\end{proof}
Now for any closed subset $U\subset X$ let $B_U$ be a
geodesic ball in $(S^{n+1} ,g_0 )$ with volume
$Vol(B_U ,g_0 )= (V_n /V)
Vol(U,{\bf g})$. Since geodesic balls in round spheres are isoperimetric
(and $Vol(B_U ,g_0 )=Vol(U^s ,g_0 )$)
it follows that $Vol(\partial B_U )\leq \mu^+ (U^s )$.
\begin{Lemma} Given any closed set $U\subset X$, $\mu^+(U)
\geq (V/V_n ) Vol(\partial B_U )$.
\end{Lemma}
\begin{proof}
Since $(B(U,r) )^s$ is closed
and $B(U^s ,r)$ is the closure of
$\cup_{t\in (0,\pi )} \ B(U_t ^s ,r)$
we have from the previous lemma that
$$B(U^s ,r) \subset (B(U,r ) )^s .$$
Then
$$Vol(\partial B_U )\leq \mu^+ (U^s )
=\liminf \frac{Vol(B(U^s ,r) ) - Vol (U^s )}{r}$$
$$\leq \liminf \frac{Vol((B(U ,r))^s ) - Vol (U^s )}{r}$$
$$=(V_n /V)\liminf \frac{Vol(B(U,r) ) - Vol (U)}{r}
=(V_n /V) \mu^+ (U) $$
\noindent
and the lemma follows.
\end{proof}
Now if we let $B_U^M$ be a geodesic ball around a vertex in $X$
with volume
$$Vol(B_U^M ,{\bf g}) = Vol(U,{\bf g} ) =
\frac{V}{V_n} Vol(B_U, g_0 )$$
\noindent
then it follows from (1) and (2) in the beginning of the proof that
$$Vol(\partial B_U^M ,{\bf g}) = \frac{V}{V_n} Vol(\partial B_U ,g_0 ).$$
\noindent
and so by Lemma 2.3
$$Vol(\partial B_U^M ,{\bf g}) \leq \mu^+ (U)$$
\noindent
and Theorem 1.1 is proved.
{\hfill$\Box$\medskip}
\section{The Yamabe constant of $M\times {\mathbb R}$}
Now assume that $g$ is a metric of positive Ricci curvature,
$Ricci(g) \geq (n-1)g$ on $M$ and consider as before the
spherical cone $(X,{\bf g})$ with ${\bf g} =\sin^2 (t) g + dt^2$.
By a direct
computation the sectional curvature of ${\bf g}$ is given by:
$$K_{{\bf g}} (v_i ,v_j )=\frac{K_g (v_i ,v_j )-\cos^2 (t)}{\sin^2 (t)}$$
$$K_{\bf g} (v_i ,\partial /\partial t)=1,$$
\noindent
for a $g$-orthonormal basis $\{ v_1 ,...,v_n \}$. And the Ricci
curvature is given by:
$$Ricci({\bf g}) (v_i ,\partial /\partial t )=0$$
$$Ricci({\bf g}) (v_i ,v_j )= Ricci(g) (v_i ,v_j ) - (n-1)\cos^2 (t)\delta_i^j
+\sin^2 (t) \delta_i^j$$
$$Ricci({\bf g}) (\partial_t ,\partial_t )=n.$$
Therefore by picking $\{ v_1 ,...,v_n \}$ which diagonalizes $Ricci(g)$ one
easily sees that if $Ricci(g)\geq (n-1)g$ then $Ricci({\bf g})\geq n
{\bf g}$. Moreover, if $g$ is an Einstein metric with Einstein
constant $n-1$, then ${\bf g}$ is Einstein with Einstein constant $n$.
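Explicitly, if $v_i$ is a $g$-unit eigenvector of $Ricci(g)$, then ${\bf g}(v_i ,v_i )=\sin^2 (t)$ and
$$Ricci({\bf g})(v_i ,v_i )\geq (n-1)-(n-1)\cos^2 (t)+\sin^2 (t)=n\sin^2 (t)=n\,{\bf g}(v_i ,v_i ),$$
\noindent
while $Ricci({\bf g})(\partial_t ,\partial_t )=n=n\,{\bf g}(\partial_t ,\partial_t )$ and the mixed terms vanish.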
Let us recall that for non-compact
Riemannian manifolds one defines
the Yamabe constant of a metric as the infimum of the Yamabe
functional of the metric
over smooth compactly supported functions (or functions
in $L_1^2$, of course). So for instance if $g$ is a Riemannian metric
on the closed manifold $M$ then
$$Y(M\times {\mathbb R} ,[g+dt^2 ]) =\inf_{f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}^2 +
{\bf s}_g \ f^2 \ \right)
dvol(g+dt^2)}{
{\| f \|}_{p_{n+1}}^2 } .$$
\vspace{.2cm}
{\it Proof of Theorem 1.2 :}
We have a closed Riemannian manifold $(M^n ,g)$
such that $Ricci(g) \geq (n-1) g$. Let $f_0 (t)= \cosh^{-2} (t)$
and consider the diffeomorphism
$$H: M \times (0, \pi ) \rightarrow M \times {\mathbb R} $$
\noindent
given by $H(x,t)=(x,h_0 (t))$, where $h_0 :(0,\pi ) \rightarrow {\mathbb R} $
is the diffeomorphism defined by $h_0 (t) =\cosh^{-1} ( (\sin
(t))^{-1})$
on $[\pi /2, \pi )$ and $h_0 (t)=-h_0 (\pi - t)$ if
$t\in(0,\pi /2 )$.
By a direct computation $H^* ( f_0 (g+dt^2))=
{\bf g}= \sin^2 (t) g +dt^2$ on $M\times (0,\pi )$.
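Indeed, on $[\pi /2 ,\pi )$ one has $\cosh (h_0 (t))=(\sin (t))^{-1}$ and, differentiating, $h_0 ' (t)=1/\sin (t)$, so that
$$H^* ( f_0 (g+dt^2 )) = \cosh^{-2} (h_0 (t)) \left( g + (h_0 ' (t))^2 dt^2 \right) = \sin^2 (t)\, g + dt^2 ;$$
\noindent
the same computation holds on $(0,\pi /2 )$ by the symmetry of $h_0$.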
Therefore by conformal invariance if we call $g_{f_0} = f_0 (g+dt^2)$
$$Y(M\times {\mathbb R} , [g+dt^2 ] )
=\inf_{ f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}_{g+dt^2}^2 +
{\bf s}_g f^2 \right) \ dvol(g+dt^2)}{
{\| f \|}_{p_{n+1}}^2 } $$
$$=\inf_{f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}^2_{g_{f_0}}
+ {\bf s}_{g_{f_0}} f^2 \ \right) \
dvol(g_{f_0} )}{
{\| f \|}_{p_{n+1}}^2 } $$
$$=\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ {\bf s}_{\bf g}
f^2 \ \right) \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}}^2 } =Y(M\times (0,\pi ),[{\bf g}]).$$
Now, as we showed in the previous section, $Ricci({\bf g})
\geq n$. Therefore ${\bf s}_{\bf g} \geq n(n+1)$. So we get
$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ n(n+1)
f^2 \ \right) \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}}^2 }.$$
To compute the infimum one needs to consider only non-negative
functions.
Now for any non-negative function
$f \in C^{\infty}_0 (M\times (0,\pi ) \ )$ consider its symmetrization
$f_* :X \rightarrow {\mathbb R}_{\geq 0}$ defined by $f_* (S) =\sup f$ and
$f_* (x,t) =s$ if and only if $Vol(B(S,t), {\bf g} )=
Vol(\{ f > s \} ,{\bf g})$ (i.e. $f_*$ is a
non-increasing function of $t$
and $Vol(\{ f_* > s \})=Vol(\{ f > s \}) $ for any $s$).
It is immediate that the $L^q$-norms of $f_*$ and $f$ are the
same for any $q$. Also, by the coarea formula
$$\int
\| \nabla f \|_{\bf g}^2 = \int_0^{\infty}
\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g} d\sigma_t \right) dt$$
$$ \geq \int_0^{\infty} (\mu (f^{-1} (t)))^2
{\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt$$
\noindent
by H\"{o}lder's inequality, where $d\sigma_t$ is the measure induced
by ${\bf g}$ on $\{ f^{-1} (t) \}$. But
$$\int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
=-\frac{d}{dt} (\mu\{ f>t \})$$
$$=-\frac{d}{dt} (\mu\{ f_* >t \}) =
\int_{f_*^{-1}(t)} \| \nabla f_* \|_{\bf g}^{-1} d\sigma_t $$
\noindent
and since $f^{-1} (t) =\partial \{ f>t \}$ by Theorem 1.1
we have $\mu (f^{-1} (t))\geq \mu (f_*^{-1} (t))$. Therefore
$$ \int_0^{\infty} (\mu (f^{-1} (t)))^2
{\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt$$
$$ \geq \int_0^{\infty} (\mu (f_*^{-1} (t)))^2
{\left( \int_{f_*^{-1}(t)} \| \nabla f_* \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt $$
\noindent
(and since $\| \nabla f_* \|_{\bf g}$ is constant along
$f_*^{-1}(t)$ )
$$=\int_0^{\infty}\mu (f_*^{-1} (t)) \| \nabla f_* \|_{\bf g} \ dt$$
$$= \int_0^{\infty}
\left( \int_{f_* ^{-1}(t)} \| \nabla f_* \|_{\bf g} d\sigma_t \right)
dt =\int
\| \nabla f_* \|_{\bf g}^2 .$$
Considering $S^{n+1}$ as the spherical cone over $S^n$ we have
the function $f^0_* : S^{n+1} \rightarrow {\mathbb R}_{\geq 0}$ which
corresponds to $f_*$.
Then for all $s$
$$Vol (\{ f_*^0 >s \} ) =
\left( \frac{V_n}{V} \right) \ Vol( \{ f_* >s \} ),$$
\noindent
and so for any $q$,
$$\int (f^0_*)^q dvol(g_0 ) = \left( \frac{V_n}{V} \right)
\int (f_* )^q dvol({\bf g}).$$
Also for any $s\in (0,\pi )$
$$\mu ( (f_*^0 )^{-1} (s)) = \frac{V_n}{V} \mu (f_*^{-1} (s)),$$
\noindent
and since ${\| \nabla f_*^0 \|}_{g_0} = {\| \nabla f_* \| }_{\bf g}$
we have
$$ \int
\| \nabla f^0_* \|_{g_0}^2 = \frac{V_n}{V} \int
\| \nabla f_* \|_{\bf g}^2 .$$
We obtain
$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ n(n+1)
f^2 \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}} ^2 }$$
$$\geq \inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f_* \|}^2_{\bf g}
+ n(n+1)
f_*^2 \ dvol({\bf g})}{
{\| f_* \|}_{p_{n+1}}^2 }$$
$$={\left( \frac{V}{V_n} \right)}^{1-(2/p_{n+1})}
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f^0_* \|}^2_{g_0}
+ n(n+1)
{f^0_*}^2 dvol({g_0})}{
{\| f^0_* \|}_{p_{n+1}}^2 }$$
$$ \geq
{\left( \frac{V}{V_n} \right)}^{2/(n+1)} \ Y_{n+1}$$
This finishes the proof of Theorem 1.2.
{\hfill$\Box$\medskip}
{\it Proof of Corollary 1.3 :} Note that if
${\bf s}_g$ is constant $Y_{{\mathbb R}} (M \times {\mathbb R} , g +
dt^2)$
only depends on ${\bf s}_g$ and $V=Vol(M,g)$. Actually,
$$Y_{{\mathbb R}} (M\times {\mathbb R} ,g +dt^2 )=
\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2} V
+ {\bf s}_g V f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p} \ V^{2/p}}$$
$$=V^{1-(2/p)}
\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2}
+ {\bf s}_g f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p}}.$$
But as we said
$$\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2}
+ {\bf s}_g f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p}}$$
\noindent
is independent of $(M,g)$ and it is known to be equal to
$Y_{n+1} V_n^{-2/(n+1)}$. Corollary 1.3 then follows
directly from Theorem 1.2.
{\hfill$\Box$\medskip}
\section{Introduction}
Located at about 1$'$ to the NW of the Orion Trapezium, the
BN/KL region, the closest region of massive star formation,
has been the subject of extensive studies.
Recently, Rodr\'\i guez et al. (2005) and G\'omez et al. (2005)
reported large proper motions (equivalent to velocities of the order of
a few tens of km s$^{-1}$) for the radio sources associated with the infrared sources
BN and n, as well as for the radio source I. All three objects
are located at the core of the BN/KL region and appear
to be moving away from a common point where they must all have been
located about 500 years ago.
Although these proper motions are now available, there is no
radial velocity information for these three sources, with the
exception of the near-infrared spectroscopic study of BN
made by Scoville et al. (1983), who report an LSR radial
velocity of +21 km s$^{-1}$ for this source.
In this paper we present 7 mm continuum and H53$\alpha$
radio recombination line observations of the BN/KL region in an
attempt to obtain additional information on the radial velocities of
these sources.
\section{Observations}
The 7 mm observations were made in the B configuration
of the VLA of the NRAO\footnote{The National Radio
Astronomy Observatory is operated by Associated Universities
Inc. under cooperative agreement with the National Science Foundation.},
during 2007 December 14. The central rest frequency observed was
that of the H53$\alpha$ line, 42951.97 MHz,
and we integrated on-source for a total of
approximately 3 hours. We observed in the spectral line
mode, with 15 channels of 1.56 MHz each (10.9 km s$^{-1}$)
and both circular polarizations. The bandpass calibrator was
0319+415. A continuum channel recorded the
central 75\% of the full spectral window. The absolute amplitude
calibrator was 1331+305
(with an adopted flux density of 1.47 Jy)
and the phase calibrator was 0541$-$056 (with a bootstrapped flux density
of 1.78$\pm$0.08 Jy). The phase noise rms was about 30$^\circ$,
indicating good weather conditions. The phase center of these observations was at
$\alpha(2000) = 05^h~35^m~14\rlap.^s13;~\delta(2000) = -05^\circ~22{'}~26\rlap.^{''}6$.
The data were acquired and reduced using the recommended VLA procedures
for high frequency data, including the fast-switching mode with a
cycle of 120 seconds.
Clean maps were
obtained using the task IMAGR of AIPS with the ROBUST parameter set to 0.
\section{Continuum Analysis}
\subsection{Spectral Indices}
In Figure 1 we show the image obtained from the continuum channel.
Three sources, BN, I and n, are evident in the image. No other sources
were detected above a 5-$\sigma$ lower limit of 1.75 mJy in our $1'$
field of view. The positions, flux
densities, and deconvolved angular sizes of these sources are
given in Table 1. The continuum flux density of the sources
has been obtained from the line-free channels.
The line emission will be discussed below.
The flux density obtained at 7 mm by us
for BN is in good agreement with the values previously reported in
the literature:
we obtain a flux density of 28.6$\pm$0.6 mJy, while
values of 31$\pm$5 and 28.0$\pm$0.6 mJy were obtained by Menten \& Reid (1995)
and Chandler \& Wood (1997), respectively.
In the case of source I, the agreement is acceptable,
since we obtain a flux density of 14.5$\pm$0.7 mJy,
while values of
13$\pm$2 and 10.8$\pm$0.6 mJy were reported by Menten \& Reid (1995)
and Chandler \& Wood (1997), respectively.
Careful monitoring would be required
to test if the radio continuum from source I is variable in time.
The spectral indices determined from our 7 mm observations and the
3.6 cm observations of G\'omez et al. (2008) are given in the last column of Table 2.
Our spectral indices for BN and
I are in excellent agreement in this spectral range with the more detailed analysis
presented by Plambeck et al. (1995) and Beuther et al. (2004).
We have detected source n for the first time
at 7 mm and this detection allows the first estimate of the spectral index of this source
over a wide frequency range.
The value of 0.2$\pm$0.1 suggests marginally thick free-free emission, as expected in
an ionized outflow. This supports the interpretation of this source
as an ionized outflow by G\'omez et al. (2008).
The position given by us in Table 1 is consistent with the
extrapolation of the proper motions of this source discussed by G\'omez et al. (2008).
\subsection{Deconvolved Angular Sizes}
The radio source I has parameters
consistent with an optically thick free-free source (spectral
index of $1.5\pm0.1$).
Beuther et al. (2004) suggest that this spectral index is either the result of
optically thick free-free plus dust emission, or $H^-$ free-free emission
that gives rise to a power-law spectrum with an index of $\sim$1.6.
In the case of the radio source associated with the infrared source n
we only have an upper limit to its size at 7 mm. In addition,
G\'omez et al. (2008) report important morphological variations
over time in this source
that suggest that comparisons at different frequencies should be made
only from simultaneous observations.
In the case of BN,
the frequency dependences of flux density and angular size (this last
parameter taken to
be the geometric mean of the major and minor axes reported in Tables 1 and 2) can be accounted for with
a simple model of a sphere of ionized gas in which
the electron density
decreases as a power-law function of radius, $n_e \propto r^{-\alpha}$.
In this case, the flux density of the source is expected to go with
frequency as $S_\nu \propto \nu^{(6.2-4\alpha)/(1-2\alpha)}$ and the angular size is expected to go with
frequency as $\theta_\nu \propto \nu^{2.1/(1-2\alpha)}$ (Reynolds 1986).
The frequency dependences of flux density ($S_\nu \propto \nu^{1.1\pm0.1}$) and angular
size ($\theta_\nu \propto \nu^{-0.36\pm0.12}$) for
BN are consistent with a steeply declining electron density
distribution
with power law index of
$\alpha = 3.0\pm0.3$. The continuum spectrum of BN produced
by Plambeck et al. (1995) indicates that a constant
spectral index extends from 5 to 100 GHz.
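As a quick numerical check (not part of the original analysis), the short Python snippet below evaluates the Reynolds (1986) exponents quoted above; for $\alpha = 3$ it gives a flux-density index of 1.16 and a size index of $-0.42$, consistent within the quoted uncertainties with the values measured for BN.
\begin{verbatim}
# Sketch: Reynolds (1986) power-law-wind exponents for n_e ~ r^(-alpha).
for alpha in (2.0, 3.0):
    flux_index = (6.2 - 4.0*alpha) / (1.0 - 2.0*alpha)   # S_nu ~ nu^flux_index
    size_index = 2.1 / (1.0 - 2.0*alpha)                 # theta_nu ~ nu^size_index
    print(alpha, round(flux_index, 2), round(size_index, 2))
# alpha = 3.0 -> flux_index = 1.16, size_index = -0.42
\end{verbatim}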
\section{Analysis of the H53$\alpha$ Recombination Line Emission}
\subsection{Radial LSR Velocity}
We clearly detected the H53$\alpha$ line emission only from BN.
The spectrum is shown in Figure 2. The parameters of
the Gaussian least squares fit to the profile are given in Table 3.
We note that the radial LSR velocity determined by us, $+20.1\pm2.1$
km s$^{-1}$, agrees well with the value of $+21$ km s$^{-1}$
reported by Scoville et al. (1983) from near-IR spectroscopy.
In a single dish study of the H41$\alpha$ line made with an
angular resolution of 24$''$ toward
Orion IRc2, Jaffe \& Mart\'\i n-Pintado (1999) report emission
with $v_{LSR}$ = -3.6 km s$^{-1}$.
Most likely, this is emission from the ambient H~II region, since
its radial velocity practically coincides with the
value determined for the large H~II region (Orion A) ionized by
the Trapezium stars (e. g. Peimbert et al. 1988).
The single dish observations of the H51$\alpha$ emission
of Hasegawa \& Akabane (1984), made with an angular resolution of 33$''$,
most probably come also from the ambient ionized gas and not
from BN.
\subsection{LTE Interpretation}
If we assume that the line emission is optically thin and in LTE,
the electron temperature, $T_e^*$, is given by
(Mezger \& H\"oglund 1967; Gordon 1969; Quireza et al. 2006):
\begin{equation}\Biggl[{{T_e^*} \over {K}}\Biggr] = \Biggl[7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}
\biggl({{S_C} \over {S_L}} \biggr) \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}
(1 + y^+)^{-1} \Biggr]^{0.87}, \end{equation}
\noindent where $\nu_L$ is the line frequency, $S_C$ is the continuum flux density,
$S_L$ is the peak line flux density, $\Delta v$ is the FWHM line width, and
$y^+$ is the ionized helium to ionized hydrogen abundance ratio.
In the case of BN, we can adopt $y^+ \simeq 0$ given that the
source is not of very high luminosity, and using the values given in Tables 1 and 3,
we obtain $T_e^* \simeq 8,200$ K. This value is similar to that
determined for the nearby Orion A from radio recombination lines (e. g. Lichten, Rodr\'\i guez, \&
Chaisson 1979).
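For reference, the LTE estimate of eqn.~(1) can be evaluated with a few lines of Python; the snippet below is only an illustrative sketch, and the numerical values of $S_L$ and $\Delta v$ used in the example call are placeholders standing in for the actual BN line parameters of Table 3.
\begin{verbatim}
# Sketch of eqn. (1): LTE electron temperature from line/continuum parameters.
def te_lte(nu_ghz, s_cont, s_line, dv_kms, y_plus=0.0):
    # nu_ghz: line frequency [GHz]; s_cont, s_line: flux densities (same units);
    # dv_kms: FWHM line width [km/s]; y_plus: ionized He/H abundance ratio.
    return (7100.0 * nu_ghz**1.1 * (s_cont/s_line) / dv_kms
            / (1.0 + y_plus))**0.87

# Illustrative call (s_line and dv_kms are placeholders, not the Table 3 values):
print(te_lte(nu_ghz=42.95197, s_cont=28.6, s_line=13.0, dv_kms=30.0))
\end{verbatim}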
It is somewhat
surprising that we get a very reasonable estimate for $T_e^*$ when our previous discussion
seemed to imply that BN is partially optically thick at 7 mm.
One possibility is that we have two effects fortuitously canceling each other. For example, the
optical thickness of the source will diminish the
line emission, while maser effects (such as those observed
in MWC 349; Mart\'\i n-Pintado et al. 1989) will amplify the line.
However, in an attempt to understand this result in LTE conditions, we will discuss the expected
LTE radio recombination line emission from
a sphere of ionized gas in which the electron density
decreases as a power-law function of radius, $n_e \propto r^{-\alpha}$.
As noted before, the modeling of the continuum emission from such a source
was presented in detail by Panagia \& Felli (1975) and Reynolds (1986). The radio recombination line emission
for the case $\alpha = 2$ has been discussed by Altenhoff, Strittmatter, \&
Wendker (1981) and Rodr\'\i guez (1982).
Here we generalize the derivation of the recombination line emission
to the case of $\alpha > 1.5$. This lower limit is
adopted to prevent the total emission from the source from diverging.
For a sphere of ionized gas, the free-free continuum emission will be given by
(Panagia \& Felli 1975):
\begin{equation}S_C = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty
\biggl(1 - exp[-\tau_C(\xi)]\biggr)~ \xi~ d\xi, \end{equation}
\noindent where $r_0$ is a reference radius, $d$ is the distance to the source,
$B_\nu$ is Planck's function, $\xi$ is the projected radius in units of $r_0$,
and $\tau_C(\xi)$ is the continuum optical depth along the line of sight with
projected radius $\xi$. On the other hand, the free-free continuum plus
radio recombination line emission will be given by an equation similar to eqn. (2), but with the
continuum opacity substituted by the continuum plus line opacity (Rodr\'\i guez 1982):
\begin{equation}S_{L+C} = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty \biggl(1 - exp[-\tau_{L+C}(\xi)]
\biggr) \xi d\xi, \end{equation}
\noindent where $\tau_{L+C}(\xi)$ is the line plus continuum optical depth along the line of sight with
projected radius $\xi$.
The line-to-continuum ratio will be given by:
\begin{equation}{{S_L} \over {S_C}} = {{S_{L+C} - S_C} \over {S_C}}. \end{equation}
The opacity of these emission processes depends on projected radius as (Panagia \& Felli 1975):
\begin{equation}\tau(\xi) \propto \xi^{-(2 \alpha -1)}. \end{equation}
We now introduce the definite integral (Gradshteyn \& Ryzhik 1994)
\begin{equation}\int_0^\infty [1- exp(-\mu x^{-p})]~x~ dx =
- {{1} \over {p}}~ \mu^{{2} \over{p}}~ \Gamma(-{{2} \over{p}}), \end{equation}
\noindent valid for $\mu > 0$ and $p > 2$, with $\Gamma$ being the Gamma function.
Substituting eqns. (2) and (3) in eqn. (4), and using the integral
defined in eqn. (6), it can be shown that
\begin{equation}{{S_L} \over {S_C}} = \Biggl[{{\kappa_L + \kappa_C}
\over {\kappa_C}} \Biggr]^{1/(\alpha -0.5)} - 1, \end{equation}
\noindent where $\kappa_L$ and $\kappa_C$ are the line and continuum absorption coefficients
at the frequency of observation, respectively.
In this last step we have also
assumed that the opacities of the line and continuum processes are proportional to
the line and continuum absorption coefficients, respectively, that is, that the
physical depths producing the line and continuum emissions are the
same. Under the LTE assumption, we have
that
\begin{equation}{{\kappa_L} \over {\kappa_C}} = 7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}
\biggl({{T_e^*} \over {K}} \biggr)^{-1.1} \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}
(1 + y^+)^{-1}. \end{equation}
For $\nu \leq$ 43 GHz and typical parameters of an H II region, we
can see from eqn. (8) that $\kappa_L<\kappa_C$, and
eqn. (7) can be approximated by:
\begin{equation}{{S_L} \over {S_C}} \simeq {{1} \over
{(\alpha -0.5)}} \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr]. \end{equation}
That is, the expected optically-thin, LTE line-to-continuum ratio:
\begin{equation}{{S_L} \over {S_C}} \simeq \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr], \end{equation}
\noindent becomes attenuated by a factor $1/(\alpha -0.5)$. In the case of $\alpha = 2$,
the factor is 2/3, and we reproduce the result of Altenhoff, Strittmatter, \&
Wendker (1981) and Rodr\'\i guez (1982). In the case of BN, we have that $\alpha \simeq 3$, and
we expect the attenuation factor to be 2/5. If BN can be modeled this way, we would have expected
to derive electron temperatures under the LTE assumption (see eqn. 1) of order
\begin{equation}T_e^*(\alpha = 3) \simeq 2.2~ T_e^*(thin). \end{equation}
However, from the discussion in the first paragraph of this section, we
observationally determine that
\begin{equation}T_e^*(\alpha = 3) \simeq T_e^*(thin). \end{equation}
Summarizing: i) BN seems to have significant optical depth in the continuum at
7 mm, ii) this significant optical depth should attenuate the observed recombination
line emission with respect to the optically-thin case, but iii) the line emission seems
to be as strong as in the optically-thin case.
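A minimal numerical illustration of points (ii) and (iii) (not part of the original analysis) is given below: the attenuation factor $1/(\alpha-0.5)$ of eqn.~(9) and the corresponding bias in the LTE electron temperature implied by eqn.~(1) are evaluated for the values of $\alpha$ discussed in the text.
\begin{verbatim}
# Sketch: line attenuation factor and implied LTE temperature bias.
for alpha in (2.0, 3.0, 5.0):
    attenuation = 1.0/(alpha - 0.5)     # 2/3, 2/5, 2/9
    te_bias = (alpha - 0.5)**0.87       # ~1.4, ~2.2, ~3.7
    print(alpha, round(attenuation, 3), round(te_bias, 2))
\end{verbatim}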
As possible explanations for the ``normal'' (apparently optically-thin and in LTE)
radio recombination line emission
observed from BN we can think of two options.
The first is that, as noted before, there is a non-LTE line-amplifying
mechanism that approximately compensates for the optical depth attenuation.
The second possibility is that the free-free emission from BN at 7 mm is already optically thin.
However, this last possibility seems to be in contradiction with the results
of Plambeck et al. (1995) that suggest a single spectral index
from 5 to 100 GHz. Observations of radio recombination lines around
100 GHz are needed to solve this problem.
A comparison with the H53$\alpha$ emission from the hypercompact H~II
region G28.20-0.04N is also of interest.
The continuum flux densities from this source at
21, 6, 3.6, and 2 cm are 49, 135, 297, and 543 mJy, respectively
(Sewilo et al. 2004). At 7 mm the continuum flux density is 641 mJy
(Sewilo et al. 2008), indicating
that the source has become optically thin at this wavelength.
Using the H53$\alpha$ line parameters given by Sewilo et al. (2008)
we derive an LTE electron temperature of $T_e^* \simeq 7,600$ K,
similar to the value for BN and in this case consistent with
the optically-thin nature of G28.20-0.04N.
The non-detection of H53$\alpha$ emission from radio source I is consistent
with its expected large optical depth. The formulation above implies $\alpha \simeq 5$, and an
attenuation factor of 2/9.
This confirms the notion that BN and radio source I are two sources
intrinsically very different in nature.
This difference is also evident in the brightness temperature of both sources.
At 7 mm, the brightness temperature of a source is
\begin{equation}\Biggl[{{T_B} \over {K}} \Biggr] \simeq 0.96 \Biggl[{{S_\nu} \over {mJy}}
\Biggr] \Biggl[{{\theta_{maj} \times
\theta_{min}} \over {arcsec^2}} \Biggr]^{-2}. \end{equation}
Using the values of Table 1, we get $T_B \simeq$ 7,800 K for BN, confirming
its nature as photoionized gas. However, for the radio source I we get
$T_B \simeq$ 2,600 K. So, even when source I seems to be optically thick, its
brightness temperature is substantially lower than that expected for
a photoionized region. Reid et al. (2007) have discussed as possible
explanations for this low brightness temperature $H^-$ free-free opacity or
a photoionized disk.
Following the discussion of Reid et al. (2007), we consider
it unlikely that dust emission could be a dominant contributor to the 7 mm emission of BN or
Orion I. A dense, warm, dusty disk would be expected to show many molecular lines at
millimeter/submillimeter wavelengths. While Beuther et al. (2006) and Friedel
\& Snyder (2008) find numerous, strong,
molecular lines toward the nearby ``hot core'', they find no strong lines toward the position of
Orion I (with the exception of
the strong SiO masers slightly offset from Orion I) or BN.
Also, the brightness temperatures derived by us at 7 mm (7,800 K for BN and
2,600 K for source I) are
high enough to sublimate dust and suggest that free-free emission from
ionized gas dominates the continuum emission.
Finally, the continuum spectra of BN and of source I measured by Plambeck et al.(1995)
and Beuther et al. (2006), respectively, suggest that the dust
emission becomes dominant only above $\sim$300 GHz.
In the case of source n, no detection was expected given its
weakness even in the continuum.
\subsection{Spatial Distribution of the H53$\alpha$ Line Emission}
The H53$\alpha$ line emission in the individual velocity
channels shows evidence of structure but unfortunately the signal-to-noise
ratio is not large enough to reach reliable conclusions from the
analysis of these individual channels. However, an image
with good signal-to-noise ratio can be obtained averaging over the velocity
range of -21.2 to +66.1 km s$^{-1}$, using the task MOMNT in
AIPS. This line image is compared
in Figure 3 with a continuum image
made from the line-free channels.
The larger apparent size of the continuum image is simply the
result of its much better signal-to-noise ratio.
For the total line emission we obtain an upper limit of
$0\rlap.{''}12$ for its size, which is consistent with the
size of the continuum emission given in Table 1.
We also show images of the blueshifted (-21.2 to +22.5 km s$^{-1}$)
and redshifted (+22.5 to 66.1 km s$^{-1}$) line emission in Figure 3.
The cross in the figure indicates the centroid of the total line
emission. The centroid of the line emission does not appear to
coincide with the centroid of the continuum emission and
we attribute this to opacity effects.
An interesting conclusion comes from comparing the total
line emission, with the blueshifted and redshifted components.
The blueshifted emission seems slightly shifted to the SW, while the
redshifted emission seems slightly shifted to the NE, suggesting a
velocity gradient. This result supports the suggestion of
Jiang et al. (2005) of the presence of an outflow in BN along a
position angle of 36$^\circ$. Given the modest signal-to-noise ratio
of the data, it is difficult to estimate the magnitude
of the velocity shift and we crudely assume it is of order one
channel ($\sim$10 km s$^{-1}$), since most of the line
emission is concentrated in the central two channels
of the spectrum (see Figure 2). The position shift between the blueshifted and
the redshifted emissions is $0\rlap.{''}028 \pm 0\rlap.{''}007$
($12 \pm 3$ AU at the distance of 414 pc given by Menten et al. 2007), significant to the
4-$\sigma$ level. Unfortunately, the data of Jiang et al. (2005) does not
include line observations and there is no kinematic information in their paper to
compare with our results.
The small velocity gradient observed by us in BN is consistent with a
slow bipolar outflow but also with Keplerian rotation around a central mass
of only 0.2 $M_\odot$.
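The $\sim$0.2 $M_\odot$ figure can be recovered with a simple order-of-magnitude estimate (a sketch, not part of the original analysis), assuming that the $\sim$10 km s$^{-1}$ shift corresponds to $\pm$5 km s$^{-1}$ at $\pm$6 AU, i.e., half of the 12 AU blue/red centroid separation.
\begin{verbatim}
# Sketch: Keplerian mass from M = v^2 r / G.
G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units
v = 5.0e3        # m/s, half of the ~10 km/s velocity shift (assumption)
r = 6.0*AU       # m, half of the 12 AU centroid separation (assumption)
print(v**2 * r / G / M_sun)   # ~0.17, i.e. of order 0.2 solar masses
\end{verbatim}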
\section{Conclusions}
We presented observations of the H53$\alpha$ recombination line
and adjacent continuum toward the Orion BN/KL region.
In the continuum we detect the BN object, the radio source
I (GMR I) and the radio counterpart of the infrared source n
(Orion-n) and discuss its parameters.
In the H53$\alpha$ line we only detect the BN object,
the first time that radio recombination lines have been detected from this source.
The LSR radial velocity of BN from the H53$\alpha$ line, $v_{LSR} = 20.1 \pm 2.1$
km s$^{-1}$,
is consistent with that found from previous studies in near-infrared lines,
$v_{LSR} = 21$ km s$^{-1}$.
We discuss the line-to-continuum ratio from BN and present evidence
for a possible velocity gradient across this source.
\acknowledgments
LFR and LAZ acknowledge the support
of CONACyT, M\'exico and DGAPA, UNAM.
{\it Facilities:} \facility{VLA}
\section{Introduction}
The study of magnetic models has
generated considerable progress in the understanding
of magnetic materials
and, lately, it has gone beyond the frontiers of magnetism,
being considered in many areas of knowledge.
Certainly, the Ising model represents one of the most
studied and important models of magnetism and
statistical mechanics~\cite{huang,reichl},
and it has been employed also to typify a wide variety of
physical systems, like lattice gases, binary alloys, and
proteins (with a particular interest in the problem of protein
folding).
Although real magnetic systems should be properly
described by means of Heisenberg spins (i.e.,
three-dimensional variables), many materials are
characterized by anisotropy fields that make these
spins prefer given directions in space, explaining
why simple models, characterized by
binary variables, became so important
for the area of magnetism.
Particularly, models defined in terms of Ising variables
have shown the ability to exhibit a wide variety
of multicritical behaviors
when randomness and/or competing
interactions are introduced, which has attracted the attention of many
researchers~(see, e.g., Refs.~\cite{aharony,mattis,kaufman,nogueira98,nuno08a,nuno08b,salmon1,morais12}).
Certainly, the simplicity of Ising variables,
which are very suitable for both analytical and numerical
studies, has led to proposals of important models outside
the scope of magnetism, particularly in the
area of complex systems.
These models have been successful in describing
a wide variety of relevant
features in such systems, and have raised
interest in many fields, like
financial markets, optimization problems,
biological membranes, and social behavior.
In some cases, more than one set of Ising variables has been used,
especially by considering a coupling between them, as
proposed within the framework of choice
theories~\cite{fernandez}, or in plastic
crystals~\cite{plastic1,brandreview,folmer}.
In the former case, each set of Ising variables represents
a group of identical individuals, all of which can make two
independent binary choices.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=5.5cm]{figure1.eps}
\end{center}
\vspace{-1cm}
\caption{Illustrative pictures of the three phases as the temperature
increases: low-temperature (ordered) solid, intermediate
plastic crystal, and high-temperature (disordered) liquid phase.
In the plastic state the centers of mass of the molecules form a
regular crystalline lattice but the molecules are
disordered with respect to the orientational degrees of freedom.}
\label{fig:fasesdecristais}
\end{figure}
The so-called plastic
crystals~\cite{plastic1,brandreview,folmer,michel85,michel87,%
galam87,galam89,salinas1,salinas2} appear as states
of some compounds considered to be simpler than those of canonical
glasses, but still presenting rather nontrivial
relaxation and equilibrium properties. Such a plastic
phase corresponds to an intermediate stable state, between a
high-temperature (disordered) liquid phase, and a low-temperature
(ordered) solid phase and both transitions,
namely, liquid-plastic and plastic-solid, are first order.
In this intermediate phase, the rotational disorder coexists
with a translationally ordered state, characterized by
the centers of mass of the molecules forming a regular crystalline
lattice with the molecules presenting disorder in their
orientational degrees of freedom, as shown
in Fig.~\ref{fig:fasesdecristais}.
Many materials undergo a liquid-plastic phase transition,
where the lower-temperature phase presents such a
partial orientational order, like the plastic-crystal
of Fig.~\ref{fig:fasesdecristais}.
The property of translational invariance makes plastic crystals
much simpler to study by both analytical and numerical
methods, rendering them very useful for a proper
understanding of the glass transition~\cite{plastic1,brandreview,folmer}.
In some plastic-crystal models one introduces a coupling
between two Ising models, associating these
systems, respectively, with the translational and rotational degrees of
freedom~\cite{galam87,galam89,salinas1,salinas2},
as a proposal to explain satisfactorily the
thermodynamic properties of the plastic phase.
Accordingly, spin variables $\{t_{i}\}$ and $\{r_{i}\}$ are introduced in
such a way to mimic translational
and rotational degrees of freedom of each molecule $i$, respectively.
The following Hamiltonian is
considered~\cite{galam87,galam89,salinas1,salinas2},
\begin{equation}
\label{eq:hamplastcrystals}
{\cal H} = - J_{t}\sum_{\langle ij \rangle}t_{i}t_{j}
- J_{r} \sum_{\langle ij \rangle}r_{i}r_{j}
- \sum_{i} (\alpha t_{i} + h_{i})r_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
where $\sum_{\langle ij \rangle}$ represents a sum over
distinct pairs of nearest-neighbor spins.
In the first summation, the Ising variables $t_{i}=\pm 1$
may characterize two lattices A and B (or occupied and vacant sites).
One notices that the rotational
variables $r_{i}$ could be, in principle, continuous variables;
however, since the minimum of the coupling contribution
$-\alpha t_{i}r_{i}$ in the Hamiltonian is attained
for $t_{i}r_{i} =1$ ($\alpha>0$), or
for $t_{i}r_{i} =-1$ ($\alpha<0$), the simpler choice
of binary variables ($r_{i}=\pm 1$) appears appropriate
from the point of view of energy minimization.
In the present model the variables $t_{i}$ and
$r_{i}$ represent very different characteristics of a
molecule. Particularly, the rotational variables $r_{i}$
are expected to change more freely than the translational ones;
for this reason, one introduces a random field acting only
on the rotational degrees of freedom.
In fact, the whole contribution $\sum_{i} (\alpha t_{i} + h_{i})r_{i}$
is known to play a fundamental role for the plastic phase of
ionic plastic crystals, like the alkali cyanides KCN, NaCN, and RbCN.
In spite of its simplicity, the above Hamiltonian is able to capture
the most relevant features of the plastic-crystal phase, as well
as the associated phase transitions,
namely, liquid-plastic and plastic-solid ones~\cite{michel85,michel87,%
galam87,galam89,salinas1,salinas2,vives}.
A system described by a Hamiltonian slightly different
from the one of~\eq{eq:hamplastcrystals}, in which the
whole contribution
$\sum_{i} (\alpha t_{i} + h_{i})r_{i}$ was replaced by
$\sum_{i} \alpha_{i} t_{i}r_{i}$, i.e., with no random
field acting on variable $r_{i}$ separately, was considered
in Ref.~\cite{salinas2}. In such a work one finds a detailed
analysis of the phase diagrams and order-parameter behavior
of the corresponding model. However, to our knowledge,
previous investigations on the model defined
by~\eq{eq:hamplastcrystals} have not
considered thoroughly the effects of the random
field $h_{i}$, with a particular attention to the phase diagrams
for the case of a randomly distributed bimodal
one, $h_{i}=\pm h_{0}$;
this represents the main motivation
of the present work.
In the next section we define the model, determine its free-energy
density, and describe the
numerical procedure to be used.
In Section III we exhibit typical phase diagrams
and analyze the behavior of the corresponding order parameters,
for both zero and finite temperatures; the ability of the model
to exhibit a rich variety of phase diagrams, characterized
by multicritical behavior, is shown.
Finally, in Section IV we present our main conclusions.
\section{The Model and Free-Energy Density}
Based on the discussion of the previous section, herein
we consider a system composed by two interacting Ising models,
described by the Hamiltonian
\begin{equation}
\label{eq:hamiltonian1}
{\cal H}(\{h_{i}\}) = - J_{\sigma} \sum_{(ij)}\sigma_{i}\sigma_{j}
- J_{\tau} \sum_{(ij)}\tau_{i}\tau_{j} + D\sum_{i=1}^{N}\tau_{i}\sigma_{i}
-\sum_{i=1}^{N}h_{i}\tau_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
where $\sum_{(ij)}$ represents a sum over all distinct pairs of spins
(infinite-range interactions), a limit for which the mean-field approach becomes exact. Moreover,
$\tau_{i}= \pm 1$ and $\sigma_{i}= \pm 1$ ($i=1,2, \cdots , N$) denote
Ising variables,
$D$ stands for a real parameter, whereas both $J_{\sigma}$ and
$J_{\tau}$ are positive coupling constants, which will be
restricted herein to
the symmetric case, $J_{\sigma}=J_{\tau}=J>0$. Although this latter
condition may seem a rather artificial simplification of the
Hamiltonian in~\eq{eq:hamplastcrystals}, the application of a
random field $h_{i}$ acting separately on one set of variables will
produce the expected distinct physical behavior associated with
$\{ \tau_{i} \}$ and $\{ \sigma_{i} \}$. The random fields
$\{ h_{i} \}$ will be considered as following
a symmetric bimodal probability distribution function,
\begin{equation}
\label{eq:hpdf}
P(h_{i}) = \frac{1}{2} \, \delta(h_{i}-h_{0}) +\frac{1}{2} \, \delta(h_{i}+h_{0})~.
\end{equation}
\vskip \baselineskip
\noindent
The infinite-range character of the interactions allows one to write the above
Hamiltonian in the form
\begin{equation}
\label{eq:hamiltonian2}
{\cal H}(\{h_{i}\})= - \frac{J}{2N}{\left (\sum_{i=1}^{N}\sigma_{i} \right )}^{2}
- \frac{J}{2N}{\left (\sum_{i=1}^{N}\tau_{i} \right )}^{2}
+D\sum_{i=1}^{N}\tau_{i}\sigma_{i} -\sum_{i=1}^{N}h_{i}\tau_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
from which one may calculate the partition function associated with
a particular configuration of the fields $\{ h_{i}\}$,
\begin{equation}
Z(\{h_{i}\}) = {\rm Tr} \exp \left[- \beta {\cal H}(\{h_{i}\}) \right]~,
\end{equation}
\vskip \baselineskip
\noindent
where $\beta=1/(kT)$ and
${\rm Tr} \equiv {\rm Tr}_{\{ \tau_{i},\sigma_{i}=\pm 1 \}} $ indicates a sum over
all spin configurations. One can now make use of
the Hubbard-Stratonovich transformation~\cite{dotsenkobook,nishimoribook}
to linearize the quadratic terms,
\begin{equation}
Z(\{h_{i}\}) = \frac{1}{\pi} \int_{-\infty}^{\infty}dx dy \exp(-x^{2}-y^{2})
\prod_{i=1}^{N} {\rm Tr} \exp [ H_{i}(\tau,\sigma)]~,
\end{equation}
\vskip \baselineskip
\noindent
where $H_{i}(\tau,\sigma)$ depends on the random
fields $\{ h_{i}\}$,
as well as on the spin variables, being given by
\begin{equation}
H_{i}(\tau,\sigma) = \sqrt{\frac{2\beta J}{N}} \ x \tau + \sqrt{\frac{2\beta J}{N}} \ y \sigma
- \beta D \tau \sigma + \beta h_{i} \tau~.
\end{equation}
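(In this step, each quadratic term of~\eq{eq:hamiltonian2} was linearized through the Gaussian identity $e^{a^{2}}=\pi^{-1/2}\int_{-\infty}^{\infty}e^{-x^{2}+2ax}\,dx$, applied with $a=\sqrt{\beta J/(2N)}\sum_{i}\tau_{i}$ for the $x$ integration and $a=\sqrt{\beta J/(2N)}\sum_{i}\sigma_{i}$ for the $y$ integration.)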
\vskip \baselineskip
\noindent
Performing the trace over the spins and defining new variables, related to
the respective order parameters,
\begin{equation}
\label{eq:mtausigma}
m_{\tau} = \sqrt{\frac{2kT}{JN}} \ x~; \qquad
m_{\sigma} = \sqrt{\frac{2kT}{JN}} \ y~,
\end{equation}
\vskip \baselineskip
\noindent
one obtains
\begin{equation}
Z(\{h_{i}\})= \frac{\beta J N}{2 \pi} \int_{-\infty}^{\infty} dm_{\tau} dm_{\sigma} \exp[N g_{i} (m_{\tau},m_{\sigma})]~,
\end{equation}
\vskip \baselineskip
\noindent
where
\begin{eqnarray}
g_{i}(m_{\tau},m_{\sigma}) &=& - \frac{1}{2} \beta J m_{\tau}^{2}
- \frac{1}{2} \beta J m_{\sigma}^{2} + \log \left \{
2e^{-\beta D} \cosh[\beta J(m_{\tau}+m_{\sigma}+h_{i}/J)]
\right. \nonumber \\ \nonumber \\
\label{eq:gimtausigma}
&+& \left. 2e^{\beta D} \cosh[\beta J(m_{\tau}-m_{\sigma}+h_{i}/J)] \right \}~.
\end{eqnarray}
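(Explicitly, the trace in~\eq{eq:gimtausigma} follows from summing $\exp[H_{i}(\tau,\sigma)]$ over the four configurations $(\tau,\sigma)=(\pm 1,\pm 1)$: the pair with $\tau\sigma=+1$ produces the first hyperbolic cosine inside the logarithm, and the pair with $\tau\sigma=-1$ produces the second one.)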
\vskip \baselineskip
\noindent
Now, one takes the thermodynamic limit ($N \rightarrow \infty$), and uses the saddle-point
method to obtain
\begin{equation}
Z = \displaystyle \frac{\beta J N}{2 \pi} \int_{-\infty}^{\infty} dm_{\tau} dm_{\sigma}
\exp[-N \beta f(m_{\tau},m_{\sigma})]~,
\end{equation}
\vskip \baselineskip
\noindent
where the free-energy density functional $f(m_{\tau},m_{\sigma})$ results
from a quenched average of
$g_{i}(m_{\tau},m_{\sigma})$ in~\eq{eq:gimtausigma}, over the
bimodal probability distribution of~\eq{eq:hpdf},
\begin{equation}
\label{eq:freeenergy}
f(m_{\tau},m_{\sigma}) = \displaystyle \frac{1}{2} J m_{\tau}^{2}
+ \frac{1}{2} J m_{\sigma}^{2} - \frac{1}{2\beta}\log Q(h_{0})
- \frac{1}{2\beta}\log Q(-h_{0})~,
\end{equation}
\vskip \baselineskip
\noindent
with
\begin{equation}
Q(h_{0}) = 2e^{-\beta D} \cosh[\beta J(m_{\tau}+m_{\sigma} + h_{0}/J)]
+2e^{\beta D} \cosh[\beta J(m_{\tau}-m_{\sigma} + h_{0}/J)]~.
\end{equation}
\vskip \baselineskip
\noindent
The extremization of the free-energy density above with respect to the
parameters $m_{\tau}$ and $m_{\sigma}$ yields the following equations of state,
\begin{eqnarray}
\label{eq:mtau}
m_{\tau} &=& \frac{1}{2} \frac{R_{+}(h_{0})}{Q(h_{0})}
+ \frac{1}{2} \frac{R_{+}(-h_{0})}{Q(-h_{0})}~,
\\ \nonumber \\
\label{eq:msigma}
m_{\sigma} &=& \frac{1}{2} \frac{R_{-}(h_{0})}{Q(h_{0})}
+ \frac{1}{2} \frac{R_{-}(-h_{0})}{Q(-h_{0})}~,
\end{eqnarray}
\vskip \baselineskip
\noindent
where
\begin{equation}
R_{\pm}(h_{0}) = e^{-\beta D} \sinh[\beta J(m_{\tau}+m_{\sigma} + h_{0}/J)]
\pm e^{\beta D} \sinh[\beta J(m_{\tau}-m_{\sigma} +h_{0}/J)]~.
\end{equation}
\vskip \baselineskip
\noindent
In the following section we present numerical results for the
order parameters and phase diagrams of the model, at both
zero and finite temperatures.
All phase diagrams are represented
by rescaling conveniently the energy parameters of the system, namely,
$kT/J$, $h_{0}/J$ and $D/J$.
Therefore, for given values of these dimensionless parameters,
the equations of state [Eqs.(\ref{eq:mtau}) and~(\ref{eq:msigma})]
are solved numerically for $m_{\tau}$ and $m_{\sigma}$.
In order to avoid metastable states, all solutions obtained for
$m_{\tau} \in [-1,1]$ and $m_{\sigma} \in [-1,1]$ are
substituted in~\eq{eq:freeenergy},
to check for the minimization of the free-energy density.
The continuous (second order) critical frontiers are found by the set
of input values for which the order parameters fall continuously down to
zero, whereas the first-order frontiers were found through
Maxwell constructions.
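A minimal numerical sketch of this procedure (not the code used for the figures) is shown below: for given values of $kT/J$, $h_{0}/J$ and $D/J$ it locates, on a grid of the square $[-1,1]^{2}$, the pair $(m_{\tau},m_{\sigma})$ that minimizes the free-energy density of~\eq{eq:freeenergy}, in units $J=1$.
\begin{verbatim}
import numpy as np

def f_density(mt, ms, T, h0, D):
    # Free-energy density f(m_tau, m_sigma) defined above, with J = 1.
    b = 1.0/T
    def Q(h):
        return (2*np.exp(-b*D)*np.cosh(b*(mt + ms + h)) +
                2*np.exp( b*D)*np.cosh(b*(mt - ms + h)))
    return 0.5*mt**2 + 0.5*ms**2 - (np.log(Q(h0)) + np.log(Q(-h0)))/(2*b)

def minimize_f(T, h0, D, n=401):
    m = np.linspace(-1.0, 1.0, n)
    mt, ms = np.meshgrid(m, m, indexing="ij")
    f = f_density(mt, ms, T, h0, D)
    i, j = np.unravel_index(np.argmin(f), f.shape)
    return m[i], m[j], f[i, j]          # m_tau, m_sigma, minimum of f

# Illustrative point (parameters chosen as in Fig. 4, D/J = 0.1):
print(minimize_f(T=0.4, h0=0.59, D=0.1))
\end{verbatim}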
Both ordered ($m_{\tau} \neq 0$ and $m_{\sigma} \neq 0$)
and partially-ordered ($m_{\tau}=0$ and $m_{\sigma} \neq 0$)
phases have appeared in our analysis, and will be labeled
accordingly.
The usual paramagnetic phase ({\bf P}),
given by $m_{\tau}=m_{\sigma}=0$, always occurs for sufficiently
high temperatures.
A wide variety of critical points appeared in our analysis
(herein we follow the classification due to Griffiths~\cite{griffiths}):
(i) a tricritical point signals the encounter of a continuous frontier
with a first-order line with no change of slope;
(ii) an ordered critical point corresponds to an isolated critical
point inside the ordered region, terminating a first-order line that
separates two distinct ordered phases;
(iii) a triple point, where three distinct phases coexist, signaling the
encounter of three first-order critical frontiers.
In the phase diagrams we shall use distinct symbols and
representations for the critical points and frontiers, as described below.
\begin{itemize}
\item Continuous (second order) critical frontier: continuous line;
\item First-order critical frontier: dotted line;
\item Tricritical point: located by a black circle;
\item Ordered critical point: located by a black asterisk;
\item Triple point: located by an empty triangle.
\end{itemize}
\section{Phase Diagrams and Behavior of Order Parameters}
\subsection{Zero-Temperature Analysis}
At $T=0$, one has to analyze the different spin orderings that
minimize the Hamiltonian of~\eq{eq:hamiltonian2}.
Due to the coupling between the two
sets of spins, the minimum-energy configurations will correspond to
$\{\tau_{i}\}$ and $\{\sigma_{i}\}$ antiparallel ($D>0$), or parallel ($D<0$).
Therefore, in the absence of random fields ($h_{0}=0$) one should have
$m_{\tau}=-m_{\sigma}$ ($D>0$), and
$m_{\tau}=m_{\sigma}$ ($D<0$), where $m_{\sigma}=\pm1$.
However, when random fields act on the $\{\tau_{i}\}$ spins, there will
be a competition between these fields and the coupling parameter $D$,
leading to several phases, as represented in Fig.~\ref{fig:groundstate},
in the plane $h_{0}/J$ versus $D/J$. One finds three ordered
phases for sufficiently low values of $h_{0}/J$
and $|D|/J$, in addition to {\bf P} phases for $(|D|/J)>0.5$ and $(h_{0}/J)>1$.
All frontiers shown in Fig.~\ref{fig:groundstate} are first-order critical lines.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=5.5cm]{figure2.eps}
\end{center}
\caption{Phase diagram of the model defined by Hamiltonian
of~\eq{eq:hamiltonian2}, at zero temperature. All critical frontiers
represent first-order phase transitions; the empty triangles denote
triple points.}
\label{fig:groundstate}
\end{figure}
When $(h_{0}/J) \leq 1/2$ one finds ordered phases for all values of $D/J$,
with a vertical straight line at $D=0$ separating the
symmetric state ($D<0$), where $m_{\tau}=m_{\sigma}$, from the
antisymmetric one ($D>0$), characterized by $m_{\tau}=-m_{\sigma}$.
Two critical frontiers (symmetric under a reflection operation)
emerge from the triple point at
$(D/J)=0.0$ and $(h_{0}/J)=0.5$, given, respectively, by
$(h_{0}/J)=0.5 + (D/J)$ for $D>0$, and
$(h_{0}/J)=0.5 - (D/J)$ for $D<0$.
These critical frontiers terminate at $(h_{0}/J)=1.0$ and
separate the low random-field-valued ordered phases from
a partially-ordered
phase, given by $m_{\tau}=0$ and $m_{\sigma}= \pm 1$.
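These frontier locations can be verified by a direct comparison of ground-state energies per site of the natural candidate configurations (a simple check, not detailed in the text; we take $D>0$, the case $D<0$ following by symmetry): the ordered state ($\sigma_{i}=1$, $\tau_{i}=-1$) has $e=-J-D$, the partially-ordered state ($\sigma_{i}=1$, $\tau_{i}={\rm sign}(h_{i})$) has $e=-J/2-h_{0}$, and the {\bf P} state ($\tau_{i}={\rm sign}(h_{i})$, $\sigma_{i}=-\tau_{i}$) has $e=-D-h_{0}$; equating these energies pairwise reproduces the first-order lines $(h_{0}/J)=0.5+(D/J)$, $(D/J)=0.5$, and $(h_{0}/J)=1.0$ of Fig.~\ref{fig:groundstate}.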
As shown in Fig.~\ref{fig:groundstate}, three triple points
appear, each of them signaling the encounter
of three first-order lines, characterized by a coexistence of three phases,
defined by distinct values of the magnetizations $m_{\tau}$ and
$m_{\sigma}$, as described below.
\begin{itemize}
\item $[(D/J)=-0.5$ and $(h_{0}/J)=1.0]$~:
$(m_{\tau},m_{\sigma})=\{ (0,0);(0,\pm 1); (\pm 1, \pm 1) \}$.
\item $[(D/J)=0.5$ and $(h_{0}/J)=1.0]$~:
$(m_{\tau},m_{\sigma})=\{ (0,0);(0,\pm 1); (\pm 1, \mp 1) \}$.
\item $[(D/J)=0.0$ and $(h_{0}/J)=0.5]$~:
$(m_{\tau},m_{\sigma})=\{ (\pm 1,\pm 1);(\pm 1, \mp 1); (0, \pm 1) \}$.
\end{itemize}
Such a rich critical behavior shown for $T=0$ suggests that interesting
phase diagrams should occur when the temperature is taken
into account. From now on, we investigate the model
defined by the Hamiltonian of~\eq{eq:hamiltonian2} for finite temperatures.
\subsection{Finite-Temperature Analysis}
As shown above, the zero-temperature phase diagram presents
a reflection symmetry with respect
to $D=0$ (cf. Fig.~\ref{fig:groundstate}).
The only difference between the two sides of this phase
diagram concerns the magnetization solutions
characterizing the ordered phases for low random-field values,
where one has
$m_{\tau}=-m_{\sigma}$ ($D>0$), or
$m_{\tau}=m_{\sigma}$ ($D<0$).
These results come as a consequence
of the symmetry of the Hamiltonian of~\eq{eq:hamiltonian2},
which remains unchanged under the operations,
$D \rightarrow -D$, $\sigma_{i} \rightarrow -\sigma_{i}$ $(\forall i)$, or
$D \rightarrow -D$, $\tau_{i} \rightarrow -\tau_{i}$,
$h_{i} \rightarrow -h_{i}$ $(\forall i)$.
Hence, the finite-temperature phase diagrams should present similar
symmetries with respect to a change $D \rightarrow -D$. From now on,
for the sake of simplicity, we will restrict ourselves to the
case $(D/J) \geq 0$, for which the zero-temperature and
low-random-field magnetizations present
opposite signs, as shown in Fig.~\ref{fig:groundstate}, i.e.,
$m_{\tau}=-m_{\sigma}$~.
\begin{figure}[htp]
\begin{center}
\vspace{.5cm}
\includegraphics[height=5cm]{figure3a.eps}
\hspace{0.5cm}
\includegraphics[height=5cm]{figure3b.eps}
\end{center}
\vspace{-.5cm}
\caption{Phase diagrams of the model
defined by the Hamiltonian of~\eq{eq:hamiltonian2} in two
particular cases:
(a) The plane of dimensionless variables $kT/J$ versus $D/J$,
in the absence of random fields $(h_{0}=0)$;
(b) The plane of dimensionless variables $kT/J$
versus $h_{0}/J$, for $D=0$.
The full lines represent continuous phase transitions,
whereas the dotted line stands for a
first-order critical frontier.
For sufficiently high temperatures one finds a
paramagnetic phase ({\bf P}), whereas
the magnetizations $m_{\tau}$ and
$m_{\sigma}$ become nonzero
by lowering the temperature.
In case (b), two low-temperature phases appear, namely,
the ordered (lower values of $h_{0}$) and
the partially-ordered (higher values of $h_{0}$).
These two phases are separated by a continuous
critical frontier (higher temperatures), which turns into
a first-order critical line (lower temperatures) at a tricritical
point (black circle). The type of phase
diagram exhibited in (b) will be referred to herein as topology I.}
\label{fig:tdh00}
\end{figure}
In Fig.~\ref{fig:tdh00} we exhibit phase diagrams of the model
in two particular cases, namely, in the absence of fields $(h_{0}=0)$
[Fig.~\ref{fig:tdh00}(a)] and for zero coupling
$(D=0)$ [Fig.~\ref{fig:tdh00}(b)].
These figures provide useful
reference data in the numerical procedure
to be employed for constructing phase diagrams in
more general situations, e.g., in the plane
$kT/J$ versus $h_{0}/J$, for several values of $(D/J)>0$.
In Fig.~\ref{fig:tdh00}(a) we present the phase diagram
of the model in the plane of dimensionless variables $kT/J$ versus $D/J$,
in the absence of random fields ($h_{0}=0$),
where one sees the point $D=0$ that corresponds
to two noninteracting Ising models, leading to the well-known mean-field
critical temperature of the Ising model [$(kT_{c}/J)=1$].
Also in Fig.~\ref{fig:tdh00}(a),
the ordered solution $m_{\tau}=-m_{\sigma}$
minimizes the free energy at low temperatures
for any $D>0$; a second-order frontier separates this ordered phase
from the paramagnetic one that appears for sufficiently high temperatures.
For high values of $D/J$ one sees that this critical frontier approaches
asymptotically $(kT/J) = 2$.
Since the application of a random field results in
a decrease of the critical temperature, when compared with the one
of the case $h_{0}=0$~\cite{aharony,mattis,kaufman},
the result of Fig.~\ref{fig:tdh00}(a) shows that no
ordered phase should occur for $h_{0}>0$ and $(kT/J)>2$.
The phase diagram for $D=0$ is shown in
the plane of dimensionless variables $kT/J$
versus $h_{0}/J$ in Fig.~\ref{fig:tdh00}(b).
The {\bf P} phase occurs for $(kT/J)>1$, whereas
for $(kT/J)<1$ two phases appear, namely,
the ordered one (characterized by
$m_{\sigma} \neq 0$ and $m_{\tau} \neq 0$, with
$|m_{\sigma}| \geq |m_{\tau}|$),
as well as the partially-ordered phase
($m_{\sigma} \neq 0$ and $m_{\tau} = 0$).
Since the two Ising models are uncorrelated for $D=0$
and the random fields act only on the $\{\tau_{i}\}$
variables, one finds that the critical behavior associated
with variables $\{\sigma_{i}\}$ and $\{\tau_{i}\}$ occur
independently:
(i) The variables $\{\sigma_{i}\}$ order at $(kT/J)=1$, for
all values of $h_{0}$;
(ii) The critical frontier shown in
Fig.~\ref{fig:tdh00}(b), separating the two
low-temperature phases, is characteristic of an
Ising ferromagnet in the presence of a bimodal
random field~\cite{aharony}. The black circle
denotes a tricritical point, where the higher-temperature
continuous frontier meets the lower-temperature
first-order critical line. The type of phase
diagram exhibited in Fig.~\ref{fig:tdh00}(b)
will be referred herein as topology I.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7.0cm,clip,angle=-90]{figure4a.eps}
\hspace{0.1cm}
\includegraphics[height=7.0cm,clip,angle=-90]{figure4b.eps} \\
\vspace{0.5cm} \hspace{-0.5cm}
\includegraphics[height=4.5cm,clip]{figure4c.eps}
\hspace{1.0cm}
\includegraphics[height=4.5cm,clip]{figure4d.eps}
\end{center}
\vspace{-.2cm}
\caption{Phase diagram and order parameters in the case
$(D/J)=0.1$.
(a) Phase diagram in the plane of dimensionless variables $kT/J$
versus $h_{0}/J$. At low temperatures, a first-order
critical frontier that terminates in an
ordered critical point (black asterisk) separates
the ordered phase (lower values of $h_{0}/J$) from
the partially-ordered phase (higher values of $h_{0}/J$);
this type of phase
diagram will be referred herein as topology II.
The order parameters $m_{\tau}$ and $m_{\sigma}$
are represented versus the dimensionless temperature
$kT/J$ for typical values of $h_{0}/J$:
(b) As one goes through the ordered
phase (low temperatures) to the {\bf P} phase;
(c) As one goes through the first-order critical
frontier, which separates the two ordered phases,
up to the {\bf P} phase;
(d) As one goes through the partially-ordered phase
(slightly to the right of the first-order critical frontier) up
to the {\bf P} phase. Equivalent solutions exist by
inverting the signs of $m_{\tau}$ and $m_{\sigma}$.}
\label{fig:d01}
\end{figure}
The effects of a small interaction [$(D/J)=0.1$]
between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$ are presented in Fig.~\ref{fig:d01}, where
one sees that the topology I [Fig.~\ref{fig:tdh00}(b)] goes
through substantial changes, as shown
in Fig.~\ref{fig:d01}(a) (to be called herein as topology II).
As expected from the behavior presented
in Fig.~\ref{fig:tdh00}(a), one notices that
the border of the {\bf P} phase (a continuous frontier)
is shifted to higher temperatures.
However, the most significant difference between
topologies I and II consists in
the low-temperature frontier
separating the ordered and partially-ordered phases.
Particularly, the continuous frontier, as well
as the tricritical point shown
in Fig.~\ref{fig:tdh00}(b), give place to an
ordered critical point~\cite{griffiths}, at which
the low-temperature first-order critical
frontier terminates.
Such a topology has been found also in some
random magnetic systems, like the Ising and Blume-Capel
models, subject to random fields and/or
dilution~\cite{kaufman,salmon1,salmon2,benyoussef,
carneiro,kaufmankanner}.
In the present model, we verified that topology II holds
for any $0<(D/J)<1/2$, with
the first-order frontier starting at zero temperature and
$(h_{0}/J)=(D/J)+1/2$, which in Fig.~\ref{fig:d01}(a)
corresponds to $(h_{0}/J)=0.6$. Such a first-order line
essentially affects the parameter $m_{\tau}$, as will be
discussed next.
In Figs.~\ref{fig:d01}(b)--(d) the order parameters
$m_{\tau}$ and $m_{\sigma}$ are exhibited versus
$kT/J$ for conveniently chosen values of $h_{0}/J$,
corresponding to distinct physical situations of the
phase diagram for $(D/J)=0.1$.
A curious behavior is
presented by the magnetization
$m_{\tau}$ as $h_{0}/J$ is varied, more
particularly around the first-order critical line.
For $(h_{0}/J)=0.59$ [Fig.~\ref{fig:d01}(c)],
one starts at low temperatures
essentially to the left of the critical frontier and by increasing
$kT/J$ one crosses this critical frontier at $(kT/J)=0.499$,
very close to the ordered critical point.
At this crossing point,
$|m_{\tau}|$ presents an abrupt decrease, i.e.,
a discontinuity, corresponding
to a change to the partially-ordered phase; on
the other hand, the magnetization $m_{\sigma}$
remains unaffected when going through this critical frontier.
For higher temperatures,
$|m_{\tau}|$ becomes very small, but remains finite,
vanishing only at the {\bf P} boundary; in fact,
the whole region around the ordered critical point
is characterized by a small but finite value of $|m_{\tau}|$.
Another unusual effect is presented in
Fig.~\ref{fig:d01}(d), for which $(h_{0}/J)=0.65$, i.e.,
slightly to the right of the first-order critical frontier:
the order parameter $m_{\tau}$ is zero
at low temperatures, but becomes nonzero as the temperature
increases and one approaches the ordered critical
point. This rather curious phenomenon is directly related to
the correlation between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$: since for $(kT/J) \approx 0.5$ the magnetization
$m_{\sigma}$ is still very close to its maximum value,
a small value for $|m_{\tau}|$ is induced, so that both
order parameters go to zero together only at the {\bf P} frontier.
Behind the results presented in Figs.~\ref{fig:d01}(a)--(d)
one finds a very interesting feature, namely, the
possibility of going continuously from the ordered phase to the
partially-ordered phase by circumventing the ordered critical point.
This is analogous to what happens in many substances, e.g., water,
where one goes continuously (with no latent heat)
from the liquid to the gas
phase by circumventing a critical end point~\cite{huang,reichl}.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=6.5cm,angle=-90]{figure5a.eps}
\hspace{0.2cm}
\includegraphics[height=6.5cm,angle=-90]{figure5b.eps}
\end{center}
\vspace{-.5cm}
\caption{The first-order critical line in Fig.~\ref{fig:d01}(a),
corresponding to $(D/J)=0.1$, is amplified, and
the dimensionless free-energy density $f/J$ of~\eq{eq:freeenergy}
(shown in the insets) is analyzed
at two distinct points along this frontier:
(a) A low-temperature point located at $[(h_{0}/J)=0.599,(kT/J)=0.010]$, showing the
coexistence of the ordered ($|m_{\tau}|=1$) and partially-ordered ($m_{\tau}=0$)
solutions;
(b) A higher-temperature point located at $[(h_{0}/J)=0.594,(kT/J)=0.387]$,
showing the coexistence of solutions with $|m_{\tau}|>0$, namely,
$|m_{\tau}|=0.868$ and $|m_{\tau}|=0.1$.
In both cases (a) and (b) the free energy presents four minima,
associated with distinct pairs of solutions
$(m_{\tau},m_{\sigma})$: the full lines show the two minima
with positive $m_{\sigma}$, whereas the dashed lines correspond
to the two minima with negative $m_{\sigma}$.}
\label{fig:freeenergyd01}
\end{figure}
In Fig.~\ref{fig:freeenergyd01} the free-energy density of~\eq{eq:freeenergy}
is analyzed at two different points along the first-order critical frontier of
Fig.~\ref{fig:d01}(a), namely, a low-temperature
one [Fig.~\ref{fig:freeenergyd01}(a)], and a point at a higher
temperature [Fig.~\ref{fig:freeenergyd01}(b)].
In both cases the free energy presents four minima
associated with distinct pairs of solutions
$(m_{\tau},m_{\sigma})$. The point at $(kT/J)=0.010$ presents
$(m_{\tau},m_{\sigma})=\{(-1,1);(0,1); (0,-1);(1,-1)\}$, whereas the
point at $(kT/J)=0.387$ presents
$(m_{\tau},m_{\sigma})=\{(-0.868, 0.991); (-0.100,0.991); (0.100, -0.991);
(0.868, -0.991)\}$.
The lower-temperature point represents a coexistence of the two phases
shown in the case $D=0$ [cf. Fig.~\ref{fig:tdh00}(b)], namely, the
ordered ($|m_{\tau}|=1$) and partially-ordered ($m_{\tau}=0$) phases.
However, the higher-temperature point typifies the phenomenon
discussed in Fig.~\ref{fig:d01}, where distinct solutions with
$|m_{\tau}|>0$ coexist, leading to a jump in this
order parameter as one crosses the critical frontier,
as illustrated in Fig.~\ref{fig:d01}(c) for the point
$[(h_{0}/J)=0.59,(kT/J)=0.499]$. Although the
magnetization $m_{\tau}$ presents a very
curious behavior in topology II [cf., e.g.,
Figs.~\ref{fig:d01}(b)--(d)],
$m_{\sigma}$ remains essentially
unchanged by the presence of the first-order
critical frontier of
Fig.~\ref{fig:d01}(a), as shown also in
Fig.~\ref{fig:freeenergyd01}.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,angle=-90]{figure6a.eps}
\hspace{0.2cm}
\includegraphics[height=7cm,angle=-90]{figure6b.eps}
\end{center}
\vspace{-.5cm}
\caption{Phase diagrams in the plane of dimensionless variables $kT/J$
versus $h_{0}/J$ for two different values of $D/J$:
(a) $(D/J)=0.5$, to be referred to as topology III;
(b) $(D/J)=0.7$, to be referred to as topology IV.}
\label{fig:phasediagd0507}
\end{figure}
In Fig.~\ref{fig:phasediagd0507} we present two other possible phase
diagrams, namely, the cases $(D/J)=0.5$ [Fig.~\ref{fig:phasediagd0507}(a),
called herein topology III] and
$(D/J)=0.7$ [Fig.~\ref{fig:phasediagd0507}(b), called herein topology IV].
Whereas topology III represents
a special situation that applies only for $(D/J)=0.5$, exhibiting the
richest critical behavior of the present model, topology IV holds
for any $(D/J)>0.5$.
In Fig.~\ref{fig:phasediagd0507}(a) one observes the appearance of
several multicritical points, denoted by the black circle (tricritical
point), black asterisk (ordered critical point), and
empty triangles (triple points):
(i) The tricritical point, which signals the
encounter of the higher-temperature continuous phase transition
with the lower-temperature first-order one and is
found in the $D=0$ phase diagram [cf. Fig.~\ref{fig:tdh00}(b)],
curiously disappears for $0<(D/J)<0.5$,
and emerges again at $(D/J)=0.5$;
(ii) The ordered critical point exists for any $0 < (D/J) \leq 0.5$
[as shown in Fig.~\ref{fig:d01}(a)];
(iii) Two triple points, one at a finite temperature and the
other at zero temperature. It should be mentioned
that such a zero-temperature triple point corresponds
precisely to the one of Fig.~\ref{fig:groundstate}, at
$(D/J)=0.5$ and $(h_{0}/J)=1.0$.
The value $(D/J)=0.5$ is very special and will be considered as
a threshold for both multicritical behavior and correlations
between the two systems. We have observed that for
$(D/J) \gtrsim 0.5$, the critical points shown in
Fig.~\ref{fig:phasediagd0507}(a) disappear, except for the
tricritical point that survives for
$(D/J)>0.5$ [as shown in Fig.~\ref{fig:phasediagd0507}(b)].
Changes similar to those occurring
herein between topologies II and III, as well as
topologies III and IV,
were found also in some
magnetic systems, like the Ising and Blume-Capel
models, subject to random fields and/or
dilution~\cite{kaufman,salmon1,salmon2,benyoussef,
carneiro,kaufmankanner}.
Particularly, the splitting of the
low-temperature first-order critical frontier into
two higher-temperature first-order lines that terminate
in the ordered and tricritical points,
respectively [as exhibited in Fig.~\ref{fig:phasediagd0507}(a)],
is consistent with results found in
the Blume-Capel model under
a bimodal random magnetic field, by
varying the intensity of the crystal
field~\cite{kaufmankanner}.
Another important feature of topology III concerns the
lack of any type of
magnetic order at finite temperatures for $(h_{0}/J)>1.1$,
in contrast to the phase diagrams for
$0 \leq (D/J) < 0.5$, for which $m_{\sigma} \neq 0$
for all $h_{0}/J$
[see, e.g., Figs.~\ref{fig:tdh00}(b) and~\ref{fig:d01}(a)].
This effect shows that $(D/J)=0.5$ represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$, so that for $(D/J) \geq 0.5$ the
correlations among these variables become significant.
As a consequence of these correlations, the absence
of magnetic
order in the $\tau$-system ($m_{\tau} =0$)
drives the magnetization of the
$\sigma$-system to zero as well, for $(h_{0}/J)>1.1$.
It is important to notice that the $T=0$ phase diagram
of Fig.~\ref{fig:groundstate}
presents a first-order critical line for $(D/J)=0.5$ and
$(h_{0}/J)>1.0$, at which
$m_{\tau} =0$, whereas in the $\sigma$-system both
$m_{\sigma}=0$ and $|m_{\sigma}|=1$ minimize the Hamiltonian.
By analyzing numerically the free-energy density
of~\eq{eq:freeenergy} at low temperatures and $(h_{0}/J)>1.0$,
we have verified that any infinitesimal value of
$kT/J$ destroys such a coexistence of solutions, leading to
a minimum free energy at
$m_{\tau}=m_{\sigma}=0 \ (\forall \, T>0)$. Consequently,
one finds that the low-temperature region in the interval
$1.0 \leq (h_{0}/J) \leq 1.1$ becomes part of the {\bf P} phase.
Hence, the phase diagram in
Fig.~\ref{fig:phasediagd0507}(a) presents
a reentrance phenomenon for
$1.0 \leq (h_{0}/J) \leq 1.1$. In this region, by lowering
the temperature gradually, one goes from a {\bf P} phase
to the ordered phase
($m_{\tau} \neq 0$ ; $m_{\sigma} \neq 0$), and then back
to the {\bf P} phase. This effect appears frequently
in both theoretical and experimental investigations of
disordered magnets~\cite{dotsenkobook,nishimoribook}.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,clip,angle=-90]{figure7a.eps}
\hspace{0.5cm} \vspace{0.7cm}
\includegraphics[height=7cm,clip,angle=-90]{figure7b.eps} \\
\vspace{0cm} \hspace{-0.8cm}
\includegraphics[height=4.5cm,clip]{figure7c.eps}
\hspace{1.2cm}
\includegraphics[height=4.5cm,clip]{figure7d.eps}
\end{center}
\vspace{0.2cm}
\caption{(a) The region of multicritical points of the phase diagram for
$(D/J)=0.5$ [Fig.~\ref{fig:phasediagd0507}(a)] is amplified and three
thermodynamic paths are chosen for analyzing the magnetizations
$m_{\tau}$ and $m_{\sigma}$.
(b) Order parameters along thermodynamic path (1):
$(h_{0}/J)=0.97$ and increasing temperatures.
(c) Order parameters along thermodynamic path (2):
$(h_{0}/J)=1.05$ and increasing temperatures.
(d) Order parameters along thermodynamic path (3):
$(kT/J)=0.42$ and varying the field
strength in the interval $0.9 \leq (h_{0}/J) \leq 1.15$.
Equivalent solutions exist by inverting the signs of
$m_{\tau}$ and $m_{\sigma}$.}
\label{fig:magpaths123}
\end{figure}
In Fig.~\ref{fig:magpaths123} we analyze the behavior of the
magnetizations $m_{\tau}$ and $m_{\sigma}$ for topology III,
in the region of multicritical
points of the phase diagram for
$(D/J)=0.5$, along three typical thermodynamic paths, as
shown in Fig.~\ref{fig:magpaths123}(a).
In Fig.~\ref{fig:magpaths123}(b) we exhibit the behavior of
$m_{\tau}$ and $m_{\sigma}$ along path (1), where one
sees that both parameters go through a jump by
crossing the first-order critical line [$(kT/J)=0.445$],
expressing a coexistence of different types
of solutions for $m_{\tau}$ and $m_{\sigma}$ at this
point. One notices a larger jump in $m_{\tau}$, so that
to the right of the ordered critical point
one finds a behavior similar to the one verified in topology II,
where $|m_{\tau}|$ becomes very small, whereas
$m_{\sigma}$ still presents significant values.
Then, by further increasing the temperature, these parameters
tend smoothly to zero at the continuous critical frontier
separating the ordered and {\bf P} phases.
In Fig.~\ref{fig:magpaths123}(c) we show the magnetizations
$m_{\tau}$ and $m_{\sigma}$ along path (2),
within the region of the phase diagram
where the reentrance phenomenon occurs; along this path,
one increases the temperature, going from the {\bf P} phase
to the ordered phase and then to the {\bf P} phase again.
Both parameters are zero for low enough temperatures,
jumping to nonzero values at $(kT/J)=0.396$, as one
crosses the first-order critical line. After such jumps,
by increasing the temperature, these parameters
tend smoothly to zero at the border of the
{\bf P} phase. The behavior shown in
Fig.~\ref{fig:magpaths123}(c) confirms the reentrance
effect discussed previously.
Finally, in Fig.~\ref{fig:magpaths123}(d) we exhibit
the order parameters along thermodynamic path (3),
for which the temperature is fixed at $(kT/J)=0.42$, with
the field varying in the range
$0.9 \leq (h_{0}/J) \leq 1.15$. One sees that both
magnetizations $m_{\tau}$ and $m_{\sigma}$ display
jumps as one crosses each of the two first-order lines,
evidencing a
coexistence of different ordered states at the lower-temperature
jump, as well as a coexistence of the ordered and {\bf P} states
at the higher-temperature jump.
The behavior presented by the order parameters in
Figs.~\ref{fig:magpaths123}(b)--(d) shows clearly
the fact that $(D/J)=0.5$ represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$. In all these cases, one sees that jumps
in the magnetization $m_{\sigma}$ are correlated with
corresponding jumps in $m_{\tau}$.
These results should be contrasted with those for the
cases $(D/J)<0.5$, as illustrated
in Fig.~\ref{fig:d01}(c), where a discontinuity
in $m_{\tau}$ does not affect the smooth behavior presented
by $m_{\sigma}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,angle=-90]{figure8a.eps}
\hspace{0.2cm}
\includegraphics[height=7cm,angle=-90]{figure8b.eps}
\end{center}
\vspace{-.5cm}
\caption{The order parameters $m_{\tau}$ and $m_{\sigma}$
are represented versus the dimensionless temperature
$kT/J$ for $(D/J)=8.0$ and two typical values of $h_{0}/J$:
(a) Slightly to the left of the tricritical point;
(b) Slightly to the right of the tricritical point.
The associated phase diagram corresponds
to topology IV [cf. Fig.~\ref{fig:phasediagd0507}(b)].
Equivalent solutions exist by inverting the signs of
$m_{\tau}$ and $m_{\sigma}$.}
\label{fig:magd80}
\end{figure}
The phase diagram shown in Fig.~\ref{fig:phasediagd0507}(b),
which corresponds to topology IV, is valid for
any $(D/J)>0.5$. Particularly,
the critical point where the low-temperature
first-order critical
frontier touches the zero-temperature axis
is kept at $(h_{0}/J)=1$, for all $(D/J)>0.5$,
in agreement with Fig.~\ref{fig:groundstate}.
We have verified
only quantitative changes in such a phase diagram
by increasing
the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$. Essentially, the whole continuous critical
frontier moves towards
higher temperatures, leading to an increase in the values
of the critical temperature
for $(h_{0}/J)=0$, as well as in the temperature
associated with the tricritical point, whereas the abscissa
of this point remains typically unchanged. Moreover,
in what concerns the order parameters,
the difference between $|m_{\tau}|$
and $m_{\sigma}$ decreases, in such a way
that for $(D/J) \rightarrow \infty$, one obtains
$m_{\tau}=-m_{\sigma}$.
This latter effect is illustrated in
Fig.~\ref{fig:magd80}, where we represent the
order parameters $m_{\tau}$ and $m_{\sigma}$
versus temperature, for a sufficiently large value of
$D/J$, namely, $(D/J)=8.0$, for
two typical choices of $h_{0}/J$ close
to the tricritical point.
In Fig.~\ref{fig:magd80}(a) $m_{\tau}$ and $m_{\sigma}$
are analyzed slightly to the left of the tricritical
point, exhibiting the usual continuous behavior,
whereas in Fig.~\ref{fig:magd80}(b) they
are considered slightly to the right of the tricritical
point, presenting jumps as one crosses the
first-order critical frontier. However,
the most important conclusion from Fig.~\ref{fig:magd80}
concerns the fact that in both cases one has essentially
$m_{\tau}=-m_{\sigma}$, showing that the random
field applied solely to the $\tau$-system influences the
$\sigma$-system in a similar way, due to the
high value of $D/J$ considered.
We have verified that for $(D/J)=8.0$
the two systems become so strongly
correlated that
$m_{\tau}=-m_{\sigma}$ holds throughout
the whole phase diagram,
within our numerical accuracy.
\section{Conclusions}
We have analyzed the effects of a coupling $D$
between two Ising models, defined in terms of variables
$\{\tau_{i}\}$ and $\{\sigma_{i}\}$.
The model was considered in the limit of infinite-range
interactions, where all spins in each system
interact by means of an exchange coupling $J>0$, typical
of ferromagnetic interactions.
Motivated by a qualitative description of
systems like plastic crystals,
the variables $\{\tau_{i}\}$ and $\{\sigma_{i}\}$ would
represent rotational and translational degrees
of freedom, respectively. Since the rotational
degrees of freedom are expected to change more
freely than the translational ones,
a random field acting only on the variables
$\{\tau_{i}\}$ was considered.
For this purpose, a bimodal random field,
$h_{i} = \pm h_{0}$, with equal probabilities,
was defined on the $\tau$-system.
The model was investigated through its free energy
and its two order parameters, namely,
$m_{\tau}$ and $m_{\sigma}$.
We have shown that such a system presents a very rich
critical behavior, depending on the particular choices
of $D/J$ and $h_{0}/J$.
Particularly, at zero temperature, the phase diagram in the plane
$h_{0}/J$ versus $D/J$ exhibits ordered, partially-ordered,
and disordered phases. This phase diagram is symmetric
around $(D/J)=0$, so that for sufficiently low values of
$h_{0}/J$ one finds ordered phases characterized by
$m_{\sigma}=m_{\tau}=\pm 1$ ($D<0$) and
$m_{\sigma}=-m_{\tau}=\pm 1$ ($D>0$).
We have verified that $|D/J|=1/2$
plays an important role in the present model, such
that at zero temperature one has the disordered
phase ($m_{\sigma}=m_{\tau}=0$)
for $|D/J|>1/2$ and $(h_{0}/J)>1$.
Moreover, the partially-ordered phase,
where $m_{\sigma}=\pm 1$ and $m_{\tau}=0$,
occurs for $(h_{0}/J)>1/2+|D/J|$ and $|D/J|<1/2$.
In this phase diagram all phase transitions are
of the first-order type, and three triple points were found.
In the case of plastic crystals,
the sequence of transitions from the disordered
to the partially-ordered, and then to the
ordered phases, would correspond to the
sequence of transitions from the liquid to
the plastic crystal, and then to ordered crystal
phases.
Due to the symmetry around $D=0$, the
finite-temperature phase diagrams were considered
only for $D>0$, for which the ordered
phase was identified by $m_{\sigma}>0$ and
$m_{\tau}<0$, whereas the partially-ordered phase
by $m_{\sigma}>0$ and
$m_{\tau}=0$ (equivalent solutions also exist by
inverting the signs of these order parameters).
Several phase diagrams in the
plane $kT/J$ versus $h_{0}/J$ were studied,
by varying gradually $D/J$. We have found
four qualitatively different types of phase diagrams,
denominated as topologies I [$(D/J)=0$], II [$0<(D/J)<1/2$],
III [$(D/J)=1/2$], and IV [$(D/J)>1/2$]. Such a
classification reflects the fact that $(D/J)=1/2$
represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$, so that for $(D/J) \geq 1/2$ the
correlations among these variables become significant,
as verified through the behavior of the order parameters
$m_{\tau}$ and $m_{\sigma}$.
From all these cases, only topology IV
typifies a well-known phase diagram,
characterized by a tricritical point, where the
higher-temperature continuous frontier meets
the lower-temperature first-order critical line.
This phase diagram is qualitatively similar to
the one found for the
Ising ferromagnet in the presence of a bimodal
random field~\cite{aharony}, and it does not
present the partially-ordered phase, which is the
physically relevant one in the present context.
For $(D/J) \geq 1/2$, even though the random field
is applied only in the $\tau$-system, the correlations
lead the $\sigma$-system to follow a qualitatively
similar behavior.
The phase diagrams referred to as topologies I and II
exhibit all three phases. In the latter case we have found
a first-order critical line terminating at an ordered
critical point, leading to the potential physical realization
of going continuously from the ordered phase to the
partially-ordered phase by circumventing this
critical point.
In these two topologies, the sequence of transitions
from the disordered
to the partially-ordered, and then to the
ordered phase, represents the physical
situation that occurs in plastic crystals.
For conveniently chosen thermodynamic paths,
i.e., varying temperature and random field appropriately,
one may go from the liquid phase
($m_{\sigma}=m_{\tau}=0$), to a plastic-crystal phase
($m_{\sigma} \neq 0$; $m_{\tau}=0$), where the rotational degrees
of freedom are found in a disordered state, and then,
to an ordered crystal phase
($m_{\sigma} \neq 0$; $m_{\tau} \neq 0$).
From the point of view of multicritical behavior,
topology III [$(D/J)=1/2$] corresponds to
the richest type of phase diagram, being
characterized by several critical lines and
multicritical points; one finds its most
complex criticality around $(h_{0}/J)=1$, signaling
a great competition among the different types of orderings.
Although the partially-ordered phase
does not appear in this particular case, one has also
the possibility of circumventing the ordered critical point,
such as to reach a region of the phase diagram
along which $|m_{\tau}|$ becomes very small,
resembling a partially-ordered phase.
Since the infinite-range interactions among
variables of each Ising system correspond to a limit
where the mean-field approach becomes exact, an immediate
question concerns whether some of the results obtained above
represent an artifact of this limit.
Certainly, such a relevant point is directly related to the
existence of some of these features in the associated
short-range three-dimensional magnetic models. For example, the
tricritical point found in topologies III and IV is essentially
the same that appears within the mean-field approach of the
Ising model in the presence of a bimodal random field.
This latter model has been extensively investigated on a cubic
lattice through different numerical approaches, where the
existence of this tricritical point is still very controversial.
On the other hand, a first-order critical frontier terminating
at an ordered critical point, and the fact that one can
go from one phase to another by
circumventing this point, represent a typical
physical situation that occurs in real
substances. The potential for exhibiting such a
relevant feature represents an important advantage
of the present model.
Finally, we emphasize that the rich critical behavior presented
in the phase diagrams corresponding to topologies II and III
suggests the range $0<(D/J) \leq 1/2$ as appropriate
for describing plastic crystals.
The potential of exhibiting successive transitions from the
ordered to the partially-ordered and then to the
disordered phase should be useful for a better
understanding of these systems.
Furthermore, the characteristic
of going continuously from the ordered phase
to the partially-ordered phase by circumventing an ordered
critical point represents a typical physical situation that
occurs in many substances,
and opens the possibility for
the present model to describe a wider range of materials.
\vskip 2\baselineskip
{\large\bf Acknowledgments}
\vskip \baselineskip
\noindent
The partial financial supports from CNPq,
FAPEAM-Projeto-Universal-Amazonas,
and FAPERJ (Brazilian agencies) are acknowledged.
\vskip 2\baselineskip
|
\section{Introduction}
\label{intro}
It is now a well-established fact that according to our present theory of gravity, 85\%~of the matter content of our universe is missing. Observational evidence for this discrepancy ranges from macroscopic to microscopic scales, e.g. gravitational lensing in galaxy clusters, galactic rotation curves and fluctuations measured in the Cosmic Microwave Background. This has resulted in the hypothesised existence of a new type of matter called Dark Matter. Particle physics provides a well-motivated explanation for this hypothesis: The existence of (until now unobserved) massive weakly interacting particles (WIMPs). A favorite amongst the several WIMP candidates is the neutralino, the lightest particle predicted by Supersymmetry, itself a well-motivated extension of the Standard Model.
If Supersymmetry is indeed realised in Nature, Supersymmetric particles would have been copiously produced at the start of our Universe in the Big Bang. Initially these particles would have been in thermal equilibrium. After the temperature of the Universe dropped below the neutralino mass, the neutralino number density would have decreased exponentially. Eventually the expansion rate of the Universe would have overcome the neutralino annihilation rate, resulting in a relic neutralino density in our Universe today, analogous to the relic photons of the cosmic microwave background.
These relic neutralinos could then accumulate in massive celestial bodies in our Universe like our Sun through weak interactions with normal matter and gravity. Over time the neutralino density in the core of the object would increase considerably, thereby increasing the local neutralino annihilation probability. In the annihilation process new particles would be created, amongst them neutrinos. This neutrino flux could be detectable as a localised emission with earth-based neutrino telescopes like ANTARES.
This paper gives a brief overview of the prospects for the detection of neutrinos originating from neutralino annihilation in the Sun with the ANTARES neutrino telescope.
\begin{figure}[b]
\center{
\includegraphics[width=0.45\textwidth,angle=0]{NEA_60kHz0XOFF_off.png}
\caption{The ANTARES Neutrino Effective Area vs. $E_\nu$.}
\label{fig:1}
}
\end{figure}
\begin{figure*}[t]
\center{
\includegraphics[width=0.8\textwidth,angle=0]{psflux.png}
\caption{Predicted $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun in mSUGRA parameter space.}
\label{fig:2}
}
\end{figure*}
\section{The ANTARES neutrino telescope}
\label{sec:1}
The ANTARES undersea neutrino telescope consists of a 3D~grid of 900~photomultiplier tubes arranged in 12~strings, at a depth of 2475~m in the Mediterranean Sea. Three quarters of the telescope have been deployed and half of the detector is already fully operational, making ANTARES the largest neutrino telescope in the northern hemisphere. The angular resolution of the telescope is of the order of one degree at low energy, relevant to dark matter searches, and reaches 0.3 degree at high energies ($>$~10~TeV).
The sensitivity of a neutrino detector is conventionally expressed as the Neutrino Effective Area, $A_{\rm eff}^{\nu}$. The $A_{\rm eff}^{\nu}$ is a function of neutrino energy $E_\nu$ and direction $\Omega$, and is defined as
\begin{equation}
A_{\rm eff}^{\nu}(E_\nu,\Omega) \;=\; V_{\rm eff}(E_\nu,\Omega)\;\sigma(E_\nu)\;\rho N_A\;P_E(E_\nu,\Omega)
\label{eq:1}
\end{equation}
\noindent where $\sigma(E_\nu)$ is the neutrino interaction cross section, $\rho\,N_A$ is the nucleon density in/near the detector,\linebreak $P_E(E_\nu,\Omega)$ is the neutrino transmission probability\linebreak through the Earth and $V_{\rm eff}(E_\nu,\Omega)$ represents the effective detector volume. This last quantity depends not only on the detector geometry and instrumentation, but also on the efficiency of the trigger and reconstruction algorithms that are used.
The ANTARES $A_{\rm eff}^{\nu}$ for upgoing $\nu_\mu$ and $\bar{\nu}_\mu$'s, integrated over all directions as a function of the neutrino energy is shown in Fig.~\ref{fig:1}. The curves represent the $A_{\rm eff}^{\nu}$ after triggering only (``{\em Trigger level}'', in blue) and after reconstruction and selection (``{\em Detection level}'', in red). The increase of the $A_{\rm eff}^{\nu}$ with neutrino energy is mainly due to the fact that $\sigma(E_\nu)$ as well as the muon range are both proportional to the neutrino energy.
The detection rate $R(t)$ for a certain neutrino flux $\Phi(E_\nu,\Omega,t)$ is then defined as
\begin{equation}
R(t) \;=\; \iint\,A_{\rm eff}^{\nu}(E_\nu,\Omega)\;\frac{d\Phi(E_\nu,\Omega,t)}{dE_\nu\,d\Omega}\;dE_\nu\,d\Omega
\label{eq:2}
\end{equation}
\section{Neutralino annihilation in the Sun}
\label{sec:2}
We calculated the $\nu_\mu+\bar{\nu}_\mu$ flux resulting from neutralino annihilation in the centre of the Sun using the DarkSUSY simulation package \cite{DarkSUSY}. Furthermore, the effects of neutrino oscillations in matter and vacuum as well as absorption were taken into account. The top quark mass was set to 172.5~GeV and the NFW-model for the Dark Matter halo with a local halo density \mbox{$\rho_0 = 0.3$~GeV/cm$^3$} was used. Instead of the general Supersymmetric framework, we used the more constrained approach of minimal Supergravity (mSUGRA). In mSUGRA, models are characterized by four parameters and a sign: A common gaugino mass $m_{1/2}$, scalar mass $m_0$ and tri-linear scalar coupling $A_0$ at the GUT scale ($10^{16}$ GeV), the ratio of vacuum expectation values of the two Higgs fields $\tan(\beta)$ and the sign of the Higgsino mixing parameter $\mu$. We considered only $\mu=+1$ models within the following parameter ranges: \mbox{$0<m_0<8000$~GeV,} \mbox{$0<m_{1/2}<2000$~GeV,}\linebreak \mbox{$-3m_0<A_0<3m_0$} and \mbox{$0<\tan(\beta)<60$.} The SUSY parameters were subsequently calculated using the\linebreak ISASUGRA package \cite{Isasugra}.
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{neutrino_flux_relic_density.png}
\caption{Predicted $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun vs. $m_\chi$.}
\label{fig:3}
\end{figure}
\begin{figure*}[t]
\center{
\includegraphics[width=0.8\textwidth,angle=0]{psexcl.png}
\caption{mSUGRA models 90\% CL excludable by ANTARES in mSUGRA parameter space.}
\label{fig:4}
}
\end{figure*}
\pagebreak Only a small subset of all mSUGRA models possesses a relic neutralino density $\Omega_\chi h^2$ that is compatible with the Cold Dark Matter energy density $\Omega_{\rm CDM} h^2$ as measured by WMAP \cite{WMAP}. To investigate specifically those mSUGRA models, we sampled the mSUGRA parameter space using a random walk method based on the Metropolis algorithm where $\Omega_\chi h^2$ acted as a guidance parameter \cite{MarkovChain}.
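Schematically, such a guided random walk can be implemented as follows; the function relic\_density is a toy stand-in for the DarkSUSY/ISASUGRA computation, and the Gaussian guidance likelihood is one plausible choice of guidance function, not necessarily the one used in \cite{MarkovChain}.
\begin{verbatim}
import numpy as np

OMEGA_CDM, SIGMA = 0.1126, 0.02   # WMAP central value; SIGMA is an ad-hoc width

def relic_density(m0, m12, a0, tanb):
    # Toy stand-in for the DarkSUSY/ISASUGRA computation of Omega_chi h^2
    # (smooth, positive, with no physical meaning).
    return 0.05 + 0.3 * abs(np.sin(m0 / 3000.) * np.cos(m12 / 800.)) + 1e-3 * tanb

def in_range(m0, m12, a0, tanb):
    # mSUGRA parameter ranges quoted in the text (mu > 0 assumed throughout)
    return (0 < m0 < 8000 and 0 < m12 < 2000
            and -3 * m0 < a0 < 3 * m0 and 0 < tanb < 60)

def guidance(omega):
    # Gaussian guidance likelihood centred on the WMAP relic density
    return np.exp(-0.5 * ((omega - OMEGA_CDM) / SIGMA) ** 2)

def metropolis_scan(n_steps, start=(2000., 400., 0., 30.),
                    step=(200., 50., 200., 2.), seed=0):
    rng = np.random.default_rng(seed)
    chain, current = [], np.array(start)
    l_cur = guidance(relic_density(*current))
    for _ in range(n_steps):
        proposal = current + rng.normal(scale=step)
        if in_range(*proposal):
            l_prop = guidance(relic_density(*proposal))
            if rng.uniform() < min(1.0, l_prop / l_cur):
                current, l_cur = proposal, l_prop
        chain.append(current.copy())
    return np.array(chain)

chain = metropolis_scan(5000)     # visited (m0, m12, A0, tan(beta)) points
\end{verbatim}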
The resulting $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun per~km$^{2}$ per year, integrated above \mbox{$E_\nu=10$~GeV}, can be seen in the \mbox{$m_0$-$m_{1/2}$~plane} for different ranges of $\tan(\beta)$ in Fig.~\ref{fig:2}. The white regions correspond to mSUGRA models without radiative electroweak symmetry breaking, models with $\Omega_\chi h^2>1$, models that are already experimentally excluded, or models where the neutralino is not the lightest superpartner. Models in the so-called ``Focus Point'' region\,\footnote{The region of mSUGRA parameter space around $(m_0,m_{1/2}) = (2000,400)$.} produce the highest neutrino flux: In this region the neutralino has a relatively large Higgsino component \cite{Nerzi}. This enhances the neutralino capture rate through $Z$-boson exchange as well as the neutralino annihilation through the\linebreak \mbox{$\chi\chi\rightarrow WW/ZZ$} channel, resulting in a large flux of relatively high-energy neutrinos.
The $\nu_\mu+\bar{\nu}_\mu$ flux can also be plotted against the neutralino mass $m_\chi$, as is shown in Fig.~\ref{fig:3}. In this plot, the mSUGRA models are subdivided into three categories according to how well their $\Omega_\chi h^2$ agrees with $\Omega_{\rm CDM} h^2$ as measured by WMAP\,\footnote{WMAP: $\Omega_{\rm CDM} h^2 = 0.1126_{-0.013}^{+0.008}$}: \mbox{$\Omega_\chi h^2-\Omega_{\rm CDM}h^2<2\sigma$} (black), \mbox{$0< \Omega_\chi h^2 < \Omega_{\rm CDM} h^2$} (blue) and \mbox{$\Omega_{\rm CDM} h^2 < \Omega_\chi h^2 < 1$} (magenta).
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{detection_rate_relic_density.png}
\caption{ANTARES detection rate per 3~years vs. $m_\chi$.}
\label{fig:5}
\end{figure}
\section{ANTARES prospects to detect neutralino annihilation in the Sun}
\label{sec:3}
The ANTARES detection rate (See Eq.~\ref{eq:2}) for the detection of neutralino annihilation in the Sun was calculated as follows: For each mSUGRA model considered in Sect.~\ref{sec:2}, the differential $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun was convoluted with the Sun's zenith angle distribution as well as the ANTARES $A_{\rm eff}^{\nu}$ (see Eq.~\ref{eq:1} and Fig.~\ref{fig:1}). The resulting detection rate in ANTARES per 3~years is shown as a function of the neutralino mass in Fig.~\ref{fig:5}. The color coding in the plot corresponds to the one used in Fig.~\ref{fig:3}.
The ANTARES exclusion limit for the detection of neutralino annihilation in the Sun was calculated as follows: As can be seen from Fig.~\ref{fig:5}, the expected detection rates for all mSUGRA models considered in Sect.~\ref{sec:2} are small. Therefore the Feldman-Cousins approach \cite{FeldmanCousins} was used to calculate 90\%~CL exclusion limits. The two sources of background were taken into account as follows: Since we know the Sun's position in the sky, the atmospheric neutrino background (Volkova parametrisation) was considered only in a 3~degree radius search cone around the Sun's position. After applying the event selection criteria used to determine the $A_{\rm eff}^{\nu}$ in Fig.~\ref{fig:1}, the misreconstructed atmospheric muon background was considered to be 10\% of the atmospheric neutrino background. mSUGRA models that are excludable at 90\%~CL by ANTARES in 3~years are shown in blue in Fig.~\ref{fig:6}, those that are non-excludable are shown in red. Bright colors indicate models which have \mbox{$\Omega_\chi h^2-\Omega_{\rm CDM}h^2<2\sigma$}. The fraction of ANTARES excludable models in mSUGRA parameter space is shown in Fig.~\ref{fig:4}.
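For reference, a minimal sketch of the Feldman-Cousins construction for a Poisson counting experiment with a known expected background is given below; the background expectation and observed count in the example are placeholders, not the actual numbers entering Fig.~\ref{fig:6}.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def fc_acceptance(mu, b, cl=0.90, n_max=200):
    # Feldman-Cousins acceptance interval in n for signal mu on background b
    n = np.arange(n_max)
    p = poisson.pmf(n, mu + b)
    mu_best = np.maximum(n - b, 0.0)            # physically allowed best fit
    rank = p / poisson.pmf(n, mu_best + b)      # likelihood-ratio ordering
    order = np.argsort(rank)[::-1]
    kept, cov = [], 0.0
    for i in order:
        kept.append(n[i]); cov += p[i]
        if cov >= cl:
            break
    return min(kept), max(kept)

def fc_upper_limit(n_obs, b, cl=0.90, mu_max=30.0, n_grid=601):
    # largest mu whose acceptance interval still contains the observation
    upper = 0.0
    for mu in np.linspace(0.0, mu_max, n_grid):
        lo, hi = fc_acceptance(mu, b, cl)
        if lo <= n_obs <= hi:
            upper = mu
    return upper

# Placeholder numbers: 3 expected background events, 2 observed.
print(f"90% CL upper limit on the signal: {fc_upper_limit(2, 3.0):.2f} events")
\end{verbatim}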
\begin{figure}[t]
\includegraphics[width=0.45\textwidth,angle=0]{detection_rate_exclusion.png}
\caption{mSUGRA models 90\% CL excludable by ANTARES per 3~years vs. $m_\chi$.}
\label{fig:6}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{crossection_exclusion_direct_limits.png}
\caption{Spin-independent $\chi p$~cross section vs. $m_\chi$.}
\label{fig:7}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.44\textwidth,angle=0]{NEA_triggercomparison_off.png}
\caption{The ANTARES Neutrino Effective Area at the trigger level vs. $E_\nu$.}
\label{fig:8}
\end{figure}
\section{Comparison to direct detection}
To compare with direct detection experiments, the spin-independent $\chi p$~cross section versus the neutralino mass for all mSUGRA models considered in Sect.~\ref{sec:2} is shown in Fig.~\ref{fig:7}. The color coding in the plot corresponds to the one used in Fig.~\ref{fig:6}. The limits in this plot were taken from the Dark Matter Limit Plot Generator \cite{DirectDetection}. The spin-independent cross section is driven by CP-even Higgs boson exchange \cite{Nerzi}. Therefore, mSUGRA models in which the neutralino is of the mixed gaugino-Higgsino type will produce the largest cross sections. This implies a correlation between the models that are excludable by direct detection experiments and models excludable by ANTARES, as can be seen from Fig.~\ref{fig:7}.
\section{Conclusion \& Outlook}
Nearly half of the ANTARES detector has been operational since January this year. The detector is foreseen to be completed in early 2008. The data show that the detector is working within the design specifications.
As can be seen from Fig.~\ref{fig:4}, mSUGRA models that are excludable by ANTARES at 90\%~CL are found in the Focus Point region. The same models should also be excludable by future direct detection experiments, as is shown in Fig.~\ref{fig:7}.
To improve the ANTARES sensitivity, a directional trigger algorithm has recently been implemented in the data acquisition system. In this algorithm, the known position of the potential neutrino source is used to lower the trigger condition. This increases the trigger efficiency, resulting in a larger $A_{\rm eff}^{\nu}$. In Fig.~\ref{fig:8}, the $A_{\rm eff}^{\nu}$ at the trigger level for the standard- and the directional trigger algorithm are shown in black (``{\em trigger3D}'') and red (``{\em triggerMX}'') respectively.
|
\section{Introduction}\label{S1}
Multiple access interference (MAI) is the root of the user
limitation in CDMA systems \cite{R1,R3}. The parallel least mean
square-partial parallel interference cancelation (PLMS-PPIC) method
is a multiuser detector for code division multiple access (CDMA)
receivers which reduces the effect of MAI in bit detection. In this
method and similar to its former versions like LMS-PPIC \cite{R5}
(see also \cite{RR5}), a weighted value of the MAI of other users is
subtracted before making the decision for a specific user in
different stages \cite{cohpaper}. In both of these methods, the
normalized least mean square (NLMS) algorithm is engaged
\cite{Haykin96}. The $m^{\rm th}$ element of the weight vector in
each stage is the true transmitted binary value of the $m^{\rm th}$
user divided by its hard estimate value from the previous stage. The
magnitudes of all weight elements in all stages are equal to unity.
Unlike the LMS-PPIC, the PLMS-PPIC method tries to keep this
property in each iteration by using a set of NLMS algorithms with
different step-sizes instead of one NLMS algorithm used in LMS-PPIC.
In each iteration, the parameter estimate of the NLMS algorithm is
chosen whose element magnitudes of cancelation weight estimate have
the best match with unity. In PLMS-PPIC implementation it is assumed
that the receiver knows the phases of all user channels. However in
practice, these phases are not known and should be estimated. In
this paper we improve the PLMS-PPIC procedure \cite{cohpaper} in
such a way that, when only partial information about the
channel phases is available, this modified version simultaneously estimates the
phases and the cancelation weights. The partial information is the
quarter of $(0,2\pi)$ in which each channel phase lies.
The rest of the paper is organized as follows: In section \ref{S4}
the modified version of PLMS-PPIC with capability of channel phase
estimation is introduced. In section \ref{S5} some simulation
examples illustrate the results of the proposed method. Finally the
paper is concluded in section \ref{S6}.
\section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\label{S4}
We assume $M$ users synchronously send their symbols
$\alpha_1,\alpha_2,\cdots,\alpha_M$ via a base-band CDMA
transmission system where $\alpha_m\in\{-1,1\}$. The $m^{th}$ user
has its own code $p_m(.)$ of length $N$, where $p_m(n)\in \{-1,1\}$,
for all $n$. It means that for each symbol $N$ bits are transmitted
by each user and the processing gain is equal to $N$. At the
receiver we assume that a perfect power control scheme is applied.
Without loss of generality, we also assume that the power gains of
all channels are equal to unity and users' channels do not change
during each symbol transmission (it can change from one symbol
transmission to the next one) and the channel phase $\phi_m$ of
$m^{th}$ user is unknown for all $m=1,2,\cdots,M$ (see
\cite{cohpaper} for coherent transmission). According to the above
assumptions the received signal is
\begin{equation}
\label{e1} r(n)=\sum\limits_{m=1}^{M}\alpha_m
e^{j\phi_m}p_m(n)+v(n),~~~~n=1,2,\cdots,N,
\end{equation}
where $v(n)$ is the additive white Gaussian noise with zero mean and
variance $\sigma^2$. Multistage parallel interference cancelation
method uses $\alpha^{s-1}_1,\alpha^{s-1}_2,\cdots,\alpha^{s-1}_M$,
the bit estimates outputs of the previous stage, $s-1$, to estimate
the related MAI of each user. It then subtracts it from the received
signal $r(n)$ and makes a new decision on each user variable
individually to make a new variable set
$\alpha^{s}_1,\alpha^{s}_2,\cdots,\alpha^{s}_M$ for the current
stage $s$. Usually the variable set of the first stage (stage $0$)
is the output of a conventional detector. The output of the last
stage is considered as the final estimate of transmitted bits. In
the following we explain the structure of a modified version of the
PLMS-PPIC method \cite{cohpaper} with the simultaneous capability of
estimating the cancelation weights and the channel phases.
Assume $\alpha_m^{(s-1)}\in\{-1,1\}$ is a given estimate of
$\alpha_m$ from stage $s-1$. Define
\begin{equation}
\label{e6} w^s_{m}=\frac{\alpha_m}{\alpha_m^{(s-1)}}e^{j\phi_m}.
\end{equation}
From (\ref{e1}) and (\ref{e6}) we have
\begin{equation}
\label{e7} r(n)=\sum\limits_{m=1}^{M}w^s_m\alpha^{(s-1)}_m
p_m(n)+v(n).
\end{equation}
Define
\begin{subequations}
\begin{eqnarray}
\label{e8} W^s&=&[w^s_{1},w^s_{2},\cdots,w^s_{M}]^T,\\
\label{e9}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!X^{s}(n)\!\!\!&=&\!\!\![\alpha^{(s-1)}_1p_1(n),\alpha^{(s-1)}_2p_2(n),\cdots,\alpha^{(s-1)}_Mp_M(n)]^T.
\end{eqnarray}
\end{subequations}
where $T$ stands for transposition. From equations (\ref{e7}),
(\ref{e8}) and (\ref{e9}), we have
\begin{equation}
\label{e10} r(n)=W^{s^T}X^{s}(n)+v(n).
\end{equation}
Given the observations $\{r(n),X^{s}(n)\}^{N}_{n=1}$, in modified
PLMS-PPIC, like the PLMS-PPIC \cite{cohpaper}, a set of NLMS
adaptive algorithms is used to compute
\begin{equation}
\label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T,
\end{equation}
which is an estimate of $W^s$ after iteration $N$. To do so, from
(\ref{e6}), we have
\begin{equation}
\label{e13} |w^s_{m}|=1 ~~~m=1,2,\cdots,M,
\end{equation}
which is equivalent to
\begin{equation}
\label{e14} \sum\limits_{m=1}^{M}||w^s_{m}|-1|=0.
\end{equation}
We divide $\Psi=\left(0,1-\sqrt{\frac{M-1}{M}}\right]$, a sharp
range for $\mu$ (the step-size of the NLMS algorithm) given in
\cite{sg2005}, into $L$ subintervals and consider $L$ individual
step-sizes $\Theta=\{\mu_1,\mu_2,\cdots,\mu_L\}$, where
$\mu_1=\frac{1-\sqrt{\frac{M-1}{M}}}{L}, \mu_2=2\mu_1,\cdots$, and
$\mu_L=L\mu_1$. In each stage, $L$ individual NLMS algorithms are
executed ($\mu_l$ is the step-size of the $l^{th}$ algorithm). In
stage $s$ and at iteration $n$, if
$W^{s}_k(n)=[w^s_{1,k},\cdots,w^s_{M,k}]^T$, the parameter estimate
of the $k^{\rm th}$ algorithm, minimizes our criterion, then it is
considered as the parameter estimate at time iteration $n$. In other
words if the next equation holds
\begin{equation}
\label{e17} W^s_k(n)=\arg\min\limits_{W^s_l(n)\in I_{W^s}
}\left\{\sum\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\right\},
\end{equation}
where $W^{s}_l(n)=W^{s}(n-1)+\mu_l \frac{X^s(n)}{\|X^s(n)\|^2}e(n),
~~~ l=1,2,\cdots,k,\cdots,L-1,L$ and
$I_{W^s}=\{W^s_1(n),\cdots,W^s_L(n)\}$, then we have
$W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their
weight estimate by $W^{s}_k(n)$. At time instant $n=N$, this
procedure gives $W^s(N)$, the final estimate of $W^s$, as the true
parameter of stage $s$.
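The stage-$s$ recursion described above can be sketched as follows; the data generation (codes, phases, noise level and previous-stage bit estimates) is purely illustrative, whereas the parallel update and the selection rule follow (\ref{e17}), with the standard NLMS a-priori error $e(n)=r(n)-W^{s}(n-1)^{T}X^{s}(n)$ assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 15, 64, 12                    # users, code length, parallel branches
mu = (1 - np.sqrt((M - 1) / M)) * np.arange(1, L + 1) / L   # mu_l = l * mu_1

# --- synthetic stage input (illustrative only) -------------------------------
p = rng.choice([-1, 1], size=(M, N))    # spreading codes p_m(n)
alpha = rng.choice([-1, 1], size=M)     # true bits alpha_m
phi = rng.uniform(0, 2 * np.pi, size=M) # unknown channel phases
alpha_prev = alpha * rng.choice([1, -1], p=[0.9, 0.1], size=M)  # stage s-1 bits
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
r = (alpha * np.exp(1j * phi)) @ p + noise        # received signal r(n)

# --- one stage of the modified PLMS-PPIC weight estimation -------------------
W = np.zeros(M, dtype=complex)          # common estimate W^s(n-1)
for n in range(N):
    x = alpha_prev * p[:, n]            # X^s(n), entries are +-1
    e = r[n] - W @ x                    # a-priori error
    candidates = W + np.outer(mu, x) * e / (x @ x)     # W^s_l(n), l = 1..L
    cost = np.abs(np.abs(candidates) - 1).sum(axis=1)  # sum_m ||w_m| - 1|
    W = candidates[np.argmin(cost)]     # keep the branch closest to unit modulus
print(np.round(np.abs(W), 3))           # magnitudes should approach 1
\end{verbatim}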
Now consider $R=(0,2\pi)$ and divide it into four equal parts
$R_1=(0,\frac{\pi}{2})$, $R_2=(\frac{\pi}{2},\pi)$,
$R_3=(\pi,\frac{3\pi}{2})$ and $R_4=(\frac{3\pi}{2},2\pi)$. The
partial information of channel phases (given by the receiver) is in
a way that it shows each $\phi_m$ ($m=1,2,\cdots,M$) belongs to
which one of the four quarters $R_i,~i=1,2,3,4$. Assume
$W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T$ is the weight
estimate of the modified algorithm PLMS-PPIC at time instant $N$ of
the stage $s$. From equation (\ref{e6}) we have
\begin{equation}
\label{tt3}
\phi_m=\angle({\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m}).
\end{equation}
We estimate $\phi_m$ by $\hat{\phi}^s_m$, where
\begin{equation}
\label{ee3}
\hat{\phi}^s_m=\angle{(\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m(N))}.
\end{equation}
Because $\frac{\alpha^{(s-1)}_m}{\alpha_m}=1$ or $-1$, we have
\begin{eqnarray}
\hat{\phi}^s_m=\left\{\begin{array}{ll} \angle{w^s_m(N)} &
\mbox{if}~
\frac{\alpha^{(s-1)}_m}{\alpha_m}=1\\
\pm\pi+\angle{w^s_m(N)} & \mbox{if}~
\frac{\alpha^{(s-1)}_m}{\alpha_m}=-1\end{array}\right.
\end{eqnarray}
Hence $\hat{\phi}^s_m\in P^s=\{\angle{w^s_m(N)},
\angle{w^s_m(N)+\pi, \angle{w^s_m(N)}-\pi}\}$. If $w^s_m(N)$
sufficiently converges to its true value $w^s_m$, the same region
for $\hat{\phi}^s_m$ and $\phi_m$ is expected. In this case only one
of the three members of $P^s$ has the same region as $\phi_m$. For
example if $\phi_m \in (0,\frac{\pi}{2})$, then $\hat{\phi}^s_m \in
(0,\frac{\pi}{2})$ and therefore only $\angle{w^s_m(N)}$ or
$\angle{w^s_m(N)}+\pi$ or $\angle{w^s_m(N)}-\pi$ belongs to
$(0,\frac{\pi}{2})$. If, for example, $\angle{w^s_m(N)}+\pi$ is such
a member between all three members of $P^s$, it is the best
candidate for phase estimation. In other words,
\[\phi_m\approx\hat{\phi}^s_m=\angle{w^s_m(N)}+\pi.\]
We admit that when there is a member of $P^s$ in the quarter of
$\phi_m$, then $w^s_m(N)$ has converged. What would happen when none of
the members of $P^s$ has the same quarter as $\phi_m$? This
situation will happen when the absolute difference between $\angle
w^s_m(N)$ and $\phi_m$ is greater than $\pi$. It means that
$w^s_m(N)$ has not converged yet. In this case, where we cannot
count on $w^s_m(N)$, the expected value is the optimum choice for
the channel phase estimation, e.g. if $\phi_m \in (0,\frac{\pi}{2})$
then $\frac{\pi}{4}$ is the estimation of the channel phase
$\phi_m$, or if $\phi_m \in (\frac{\pi}{2},\pi)$ then
$\frac{3\pi}{4}$ is the estimation of the channel phase $\phi_m$.
The results of the above discussion are summarized in the next
equation
\begin{eqnarray}
\nonumber \hat{\phi}^s_m = \left\{\begin{array}{llll} \angle
{w^s_m(N)} & \mbox{if}~
\angle{w^s_m(N)}, \phi_m\in R_i,~~i=1,2,3,4\\
\angle{w^s_m(N)}+\pi & \mbox{if}~ \angle{w^s_m(N)}+\pi, \phi_m\in
R_i,~~i=1,2,3,4\\
\angle{w^s_m(N)}-\pi & \mbox{if}~ \angle{w^s_m(N)}-\pi, \phi_m\in
R_i,~~i=1,2,3,4\\
\frac{(i-1)\pi+i\pi}{4} & \mbox{if}~ \phi_m\in
R_i,~~\angle{w^s_m(N)},\angle
{w^s_m(N)}\pm\pi\notin R_i,~~i=1,2,3,4.\\
\end{array}\right.
\end{eqnarray}
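In code, this case analysis reduces to keeping whichever candidate angle falls in the known quarter and falling back to the quarter midpoint otherwise (a sketch; wrapping all angles to $(0,2\pi)$ is an implementation choice):
\begin{verbatim}
import numpy as np

def estimate_phase(w_mN, quarter_index):
    # Resolve the channel-phase estimate from the weight w^s_m(N), given only
    # the quarter R_i (i = 1..4) of (0, 2*pi) that contains phi_m.
    lo, hi = (quarter_index - 1) * np.pi / 2, quarter_index * np.pi / 2
    base = np.angle(w_mN) % (2 * np.pi)          # wrap angle(w) to (0, 2*pi)
    # angle(w) + pi and angle(w) - pi coincide modulo 2*pi, so two candidates
    # are enough.
    for cand in (base, (base + np.pi) % (2 * np.pi)):
        if lo < cand < hi:
            return cand                          # candidate in the right quarter
    return (lo + hi) / 2                         # quarter midpoint (2i-1)*pi/4

# Example: true phase 3*pi/8 lies in R_1, weight converged near exp(j*(phi-pi)).
print(estimate_phase(0.9 * np.exp(1j * (3 * np.pi / 8 - np.pi)), quarter_index=1))
\end{verbatim}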
Having an estimation of the channel phases, the rest of the proposed
method is given by estimating $\alpha^{s}_m$ as follows:
\begin{equation}
\label{tt4}
\alpha^{s}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N}
q^s_m(n)e^{-j\hat{\phi}^s_m}p_m(n)\right\}\right\},
\end{equation}
where
\begin{equation} \label{tt5}
q^{s}_{m}(n)=r(n)-\sum\limits_{m^{'}=1,m^{'}\ne
m}^{M}w^{s}_{m^{'}}(N)\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n).
\end{equation}
The inputs of the first stage $\{\alpha^{0}_m\}_{m=1}^M$ (needed for
computing $X^1(n)$) are given by
\begin{equation}
\label{qte5}
\alpha^{0}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N}
r(n)e^{-j\hat{\phi}^0_m}p_m(n)\right\}\right\}.
\end{equation}
Assuming $\phi_m\in R_i$, then
\begin{equation}
\label{qqpp} \hat{\phi}^0_m =\frac{(i-1)\pi+i\pi}{4}.
\end{equation}
Table \ref{tab4} shows the structure of the modified PLMS-PPIC
method. It should be noted that
\begin{itemize}
\item Equation (\ref{qte5}) shows the conventional bit detection
method when the receiver only knows the quarter of each channel phase in
$(0,2\pi)$. \item With $L=1$ (i.e. only one NLMS algorithm), the
modified PLMS-PPIC can be thought of as a modified version of the
LMS-PPIC method.
\end{itemize}
In the following section some examples are given to illustrate the
effectiveness of the proposed method.
\section{Simulations}\label{S5}
In this section we consider some simulation examples.
Examples \ref{ex2}-\ref{ex4} compare the conventional, the modified
LMS-PPIC and the modified PLMS-PPIC methods in three cases: balanced
channels, unbalanced channels and time varying channels. In all
examples, the receivers have only the quarter of each channel phase.
Example \ref{ex2} is given to compare the modified LMS-PPIC and the
PLMS-PPIC in the case of balanced channels.
\begin{example}{\it Balanced channels}:
\label{ex2}
\begin{table}
\caption{Channel phase estimate of the first user (example
\ref{ex2})} \label{tabex5} \centerline{{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\
&&&&\\
\cline{2-5} & \multirow{2}{*}{64}& s = 2 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\
\cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\
\cline{2-5} & \multirow{2}{*}{256}& s = 2 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\
\cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\
\cline{2-5} \hline
\end{tabular} }}
\end{table}
Consider the system model (\ref{e7}) in which $M$ users
synchronously send their bits to the receiver through their
channels. It is assumed that each user's information consists of
codes of length $N$. It is also assumed that the signal to noise
ratio (SNR) is 0dB. In this example no power unbalance or
channel loss is assumed.
modified LMS-PPIC method is $\mu=0.1(1-\sqrt{\frac{M-1}{M}})$ and
the set of step-sizes of the parallel NLMS algorithms in modified
PLMS-PPIC method are
$\Theta=\{0.01,0.05,0.1,0.2,\cdots,1\}(1-\sqrt{\frac{M-1}{M}})$,
i.e. $\mu_1=0.01(1-\sqrt{\frac{M-1}{M}}),\cdots,
\mu_4=0.2(1-\sqrt{\frac{M-1}{M}}),\cdots,
\mu_{12}=(1-\sqrt{\frac{M-1}{M}})$. Figure~\ref{Figexp1NonCoh}
illustrates the bit error rate (BER) for the case of two stages and
for $N=64$ and $N=256$. Simulations also show that there is no
remarkable difference between results in two stage and three stage
scenarios. Table~\ref{tabex5} compares the average channel phase
estimate of the first user in each stage and over $10$ runs of
modified LMS-PPIC and PLMS-PPIC, when the number of users is
$M=15$.
\end{example}
Although LMS-PPIC and PLMS-PPIC, as well as their modified versions,
are structured based on the assumption of no near-far problem
(examples \ref{ex3} and \ref{ex4}), these methods and especially the
second one have remarkable performance in the cases of unbalanced
and/or time varying channels.
\begin{example}{\it Unbalanced channels}:
\label{ex3}
\begin{table}
\caption{Channel phase estimate of the first user (example
\ref{ex3})} \label{tabex6} \centerline{{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\
&&&&\\
\cline{2-5} & \multirow{2}{*}{64}& s=2 & $\hat{\phi}^s_m=\frac{2.45\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.36\pi}{8}$ \\
\cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.71\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.80\pi}{8}$ \\
\cline{2-5} & \multirow{2}{*}{256}& s=2 & $\hat{\phi}^s_m=\frac{3.09\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.86\pi}{8}$ \\
\cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.93\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.01\pi}{8}$ \\
\cline{2-5} \hline
\end{tabular} }}
\end{table}
Consider example \ref{ex2} with power unbalance and/or channel loss
in the transmission system, i.e. the true model at stage $s$ is
\begin{equation}
\label{ve7} r(n)=\sum\limits_{m=1}^{M}\beta_m
w^s_m\alpha^{(s-1)}_m p_m(n)+v(n),
\end{equation}
where $0<\beta_m\leq 1$ for all $1\leq m \leq M$. Both the LMS-PPIC
and the PLMS-PPIC methods assume the model (\ref{e7}), and their
estimations are based on observations $\{r(n),X^s(n)\}$, instead of
$\{r(n),\mathbf{G}X^s(n)\}$, where the channel gain matrix is
$\mathbf{G}=\mbox{diag}(\beta_1,\beta_2,\cdots,\beta_M)$. In this
case we repeat example \ref{ex2}, randomly drawing each diagonal element of
$\mathbf{G}$ from $[0,0.3]$. Figure~\ref{Figexp2NonCoh} illustrates the BER
versus the number of users. Table~\ref{tabex6} compares the channel
phase estimate of the first user in each stage and over $10$ runs of
modified LMS-PPIC and modified PLMS-PPIC for $M=15$.
\end{example}
\begin{example}
\label{ex4} {\it Time varying channels}: Consider example \ref{ex2}
with time varying Rayleigh fading channels. In this case we assume
a maximum Doppler shift of $40$~Hz, a three-tap
frequency-selective channel with delay vector of $\{2\times
10^{-6},2.5\times 10^{-6},3\times 10^{-6}\}$sec and gain vector of
$\{-5,-3,-10\}$dB. Figure~\ref{Figexp3NonCoh} shows the average BER
over all users versus $M$ and using two stages.
\end{example}
\section{Conclusion}\label{S6}
In this paper, parallel interference cancelation using an adaptive
multistage structure and employing a set of NLMS algorithms with
different step-sizes is proposed for the case in which just the quarter of the
channel phase of each user is known. The original
algorithm was proposed for coherent transmission with full information on channel
phases in \cite{cohpaper}; this paper is a modification of the
previously proposed algorithm. Simulation results show that the new
method has a remarkable performance for different scenarios
including Rayleigh fading channels even if the channel is
unbalanced.
|
\section{Introduction}\label{sec:intro}
\IEEEPARstart{H}{uman} action recognition is a fast developing research area due to its wide applications
in intelligent surveillance, human-computer interaction, robotics, and so on.
In recent years, human activity analysis based on human skeletal data has attracted a lot of attention,
and various methods for feature extraction and classifier learning have been developed for skeleton-based action recognition \cite{zhu2016handcrafted,presti20163d,han2016review}.
A hidden Markov model (HMM) is utilized by Xia {\emph{et~al.}}~ \cite{HOJ3D} to model the temporal dynamics over a histogram-based representation of joint positions for action recognition.
The static postures and dynamics of the motion patterns are represented via eigenjoints by Yang and Tian \cite{eigenjointsJournal}.
A Naive-Bayes-Nearest-Neighbor classifier learning approach is also used by \cite{eigenjointsJournal}.
Vemulapalli {\emph{et~al.}}~ \cite{vemulapalli2014liegroup} represent the skeleton configurations and action patterns as points and curves in a Lie group,
and then a SVM classifier is adopted to classify the actions.
Evangelidis {\emph{et~al.}}~ \cite{skeletalQuads} propose to learn a GMM over the Fisher kernel representation of the skeletal quads feature.
An angular body configuration representation over the tree-structured set of joints is proposed in \cite{hog2-ohnbar}.
A skeleton-based dictionary learning method using geometry constraint and group sparsity is also introduced in \cite{Luo_2013_ICCV}.
Recently, recurrent neural networks (RNNs) which can handle the sequential data with variable lengths \cite{graves2013speechICASSP,sutskever2014sequence},
have shown their strength in language modeling \cite{mikolov2011extensions,sundermeyer2012lstm,mesnil2013investigation},
image captioning \cite{vinyals2015show,xu2015show},
video analysis \cite{srivastava2015unsupervised,Singh_2016_CVPR,Jain_2016_CVPR,Alahi_2016_CVPR,Deng_2016_CVPR,Ibrahim_2016_CVPR,Ma_2016_CVPR,Ni_2016_CVPR,li2016online},
and RGB-based activity recognition \cite{yue2015beyond,donahue2015long,li2016action,wu2015ACMMM}.
Applications of these networks have also shown promising achievements in skeleton-based action recognition \cite{du2015hierarchical,veeriah2015differential,nturgbd}.
In the current skeleton-based action recognition literature, RNNs are mainly used to model the long-term context information across the temporal dimension by representing motion-based dynamics.
However, there are often strong dependency relations among the skeletal joints in the spatial domain as well,
and the spatial dependency structure is usually discriminative for action classification.
To model the dynamics and dependency relations in both temporal and spatial domains,
we propose a spatio-temporal long short-term memory (ST-LSTM) network in this paper.
In our ST-LSTM network,
each joint can receive context information from its own representation at previous frames and also from the neighboring joints at the same frame, in order to represent its incoming spatio-temporal context.
Feeding a simple chain of joints to a sequence learner limits the performance of the network,
as the human skeletal joints are not semantically arranged as a chain.
Instead, the adjacency configuration of the joints in the skeletal data can be better represented by a tree structure.
Consequently, we propose a traversal procedure by following the tree structure of the skeleton
to exploit the kinematic relationship among the body joints for better modeling spatial dependencies.
Since the 3D positions of skeletal joints provided by depth sensors are not always very accurate,
we further introduce a new gating framework, the so-called ``trust gate'',
for our ST-LSTM network to analyze the reliability of the input data at each spatio-temporal step.
The proposed trust gate gives better insight to the ST-LSTM network about
when and how to update, forget, or remember the internal memory content as the representation of the long-term context information.
In addition, we introduce a feature fusion method within the ST-LSTM unit to better exploit the multi-modal features extracted for each joint.
We summarize the main contributions of this paper as follows.
(1) A novel spatio-temporal LSTM (ST-LSTM) network for skeleton-based action recognition is designed.
(2) A tree traversal technique is proposed to feed the structured human skeletal data into a sequential LSTM network.
(3) The functionality of the ST-LSTM framework is further extended by adding the proposed ``trust gate''.
(4) A multi-modal feature fusion strategy within the ST-LSTM unit is introduced.
(5) The proposed method achieves state-of-the-art performance on seven benchmark datasets.
The remainder of this paper is organized as follows.
In section \ref{sec:relatedwork}, we introduce the related works on skeleton-based action recognition, which used recurrent neural networks to model the temporal dynamics.
In section \ref{sec:approach}, we introduce our end-to-end trainable spatio-temporal recurrent neural network for action recognition.
The experiments are presented in section \ref{sec:exp}.
Finally, the paper is concluded in section \ref{sec:conclusion}.
\section{Related Work}
\label{sec:relatedwork}
Skeleton-based action recognition has been explored in different aspects during recent years \cite{7284883,actionletPAMI,MMMP_PAMI,MMTW,Vemulapalli_2016_CVPR,rahmani2014real,shahroudy2014multi,rahmani2015learning,lillo2014discriminative,
jhuang2013towards,
chen_2016_icassp,liu2016IVC,cai2016TMM,al2016PRL,Tao_2015_ICCV_Workshops
}.
In this section, we limit our review to more recent approaches which use RNNs or LSTMs for human activity analysis.
Du {\emph{et~al.}}~ \cite{du2015hierarchical} proposed a Hierarchical RNN network by utilizing multiple bidirectional RNNs in a novel hierarchical fashion.
The human skeletal structure was divided into five major joint groups.
Then each group was fed into the corresponding bidirectional RNN.
The outputs of the RNNs were concatenated to represent the upper body and lower body,
then each was further fed into another set of RNNs.
By concatenating the outputs of two RNNs, the global body representation was obtained, which was fed to the next RNN layer.
Finally, a softmax classifier was used in \cite{du2015hierarchical} to perform action classification.
Veeriah {\emph{et~al.}}~ \cite{veeriah2015differential} proposed to add a new gating mechanism for LSTM to model the derivatives of the memory states and explore the salient action patterns.
In this method, all of the input features were concatenated at each frame and were fed to the differential LSTM at each step.
Zhu {\emph{et~al.}}~ \cite{zhu2016co} introduced a regularization term to the objective function of the LSTM
network to push the entire framework towards learning co-occurrence relations among the joints for action recognition.
An internal dropout \cite{dropout} technique within the LSTM unit was also introduced in \cite{zhu2016co}.
Shahroudy {\emph{et~al.}}~ \cite{nturgbd} proposed to split the LSTM's memory cell to sub-cells to push the network towards learning the context representations for each body part separately.
The output of the network was learned by concatenating the multiple memory sub-cells.
Harvey and Pal \cite{harvey2015semi} adopted an encoder-decoder recurrent network to reconstruct the skeleton sequence and perform action classification at the same time.
Their model showed promising results on motion capture sequences.
Mahasseni and Todorovic \cite{mahasseni2016regularizing} proposed to use LSTM to encode a skeleton sequence as a feature vector.
At each step, the input of the LSTM consists of the concatenation of the skeletal joints' 3D locations in a frame.
They further constructed a feature manifold by using a set of encoded feature vectors.
Finally, the manifold was used to assist and regularize the supervised learning of another LSTM for RGB video based action recognition.
Different from the aforementioned works,
our proposed method does not simply concatenate the joint-based input features to build the body-level feature representation.
Instead, the dependencies between the skeletal joints are explicitly modeled by applying recurrent analysis over temporal and spatial dimensions concurrently.
Furthermore, a novel trust gate is introduced to make our ST-LSTM network more reliable against the noisy input data.
This paper is an extension of our preliminary conference version \cite{liu2016spatio}.
In \cite{liu2016spatio}, we validated the effectiveness of our model on four benchmark datasets.
In this paper, we extensively evaluate our model on seven challenging datasets.
Moreover, we propose an effective feature fusion strategy inside the ST-LSTM unit.
In order to improve the learning ability of our ST-LSTM network, a last-to-first link scheme is also introduced.
In addition, we provide more empirical analysis of the proposed framework.
\section{Spatio-Temporal Recurrent Networks}
\label{sec:approach}
In a generic skeleton-based action recognition problem, the input observations are limited to the 3D locations of the major body joints at each frame.
Recurrent neural networks have been successfully applied to
this problem recently \cite{du2015hierarchical,zhu2016co,nturgbd}.
LSTM networks \cite{lstm} are among the most successful extensions of recurrent neural networks.
A gating mechanism controlling the contents of an internal memory cell is adopted by the LSTM model
to learn a better and more complex representation of long-term dependencies in the input sequential data.
Consequently, LSTM networks are very suitable for feature learning over time series data (such as human skeletal sequences over time).
We will briefly review the original LSTM model in this section,
and then introduce our ST-LSTM network and the tree-structure based traversal approach.
We will also introduce a new gating mechanism for ST-LSTM to handle the noisy measurements in the input data for better action recognition.
Finally, an internal feature fusion strategy for ST-LSTM will be proposed.
\subsection{Temporal Modeling with LSTM}
\label{sec:approach:lstm}
In the standard LSTM model, each recurrent unit contains an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, and an internal memory cell state $c_t$, together with a hidden state $h_t$.
The input gate $i_{t}$ controls the contributions of the newly arrived input data at time step $t$ for updating the memory cell,
while the forget gate $f_{t}$ determines how much the contents of the previous state $(c_{t-1})$ contribute to deriving the current state $(c_{t})$.
The output gate $o_{t}$ learns how the output of the LSTM unit at current time step should be derived from the current state of the internal memory cell.
These gates and states can be obtained as follows:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{t} \\
f_{t} \\
o_{t} \\
u_{t} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M
\left(
\begin{array}{ccc}
x_{t} \\
h_{t-1} \\
\end{array}
\right)
\right)\\
c_{t} &=& i_{t} \odot u_{t} + f_{t} \odot c_{t-1}
\label{eq:ct}\\
h_{t} &=& o_{t} \odot \tanh( c_{t})
\label{eq:ht}
\end{eqnarray}
where $x_t$ is the input at time step $t$, $u_t$ is the modulated input, $\odot$ denotes the element-wise product,
and $M: \mathbb{R}^{D+d} \to \mathbb{R}^{4d}$ is an affine transformation.
$d$ is the size of the internal memory cell, and $D$ is the dimension of $x_t$.
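For illustration, the following is a minimal NumPy sketch of one LSTM step following the equations above; the function and variable names are illustrative (not part of our implementation), and the bias of the affine map is kept as a separate vector $b$.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, M, b):
    """One LSTM step; M has shape (4d, D+d) and b has shape (4d,)."""
    d = h_prev.shape[0]
    z = M @ np.concatenate([x_t, h_prev]) + b
    i_t = sigmoid(z[0*d:1*d])        # input gate
    f_t = sigmoid(z[1*d:2*d])        # forget gate
    o_t = sigmoid(z[2*d:3*d])        # output gate
    u_t = np.tanh(z[3*d:4*d])        # modulated input
    c_t = i_t * u_t + f_t * c_prev   # cell update
    h_t = o_t * np.tanh(c_t)         # hidden state
    return h_t, c_t
\end{verbatim}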
\subsection{Spatio-Temporal LSTM}
\label{sec:approach:stlstm}
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.338]{STLSTM.pdf}}
\end{minipage}
\caption{
Illustration of the spatio-temporal LSTM network.
In the temporal dimension, the corresponding body joints are fed over the frames.
In the spatial dimension, the skeletal joints in each frame are fed as a sequence.
Each unit receives the hidden representation of the previous joint in the current frame and of the same joint from the previous frame.}
\label{fig:STLSTM}
\end{figure}
RNNs have already shown their strengths in modeling the complex dynamics of human activities as time series data,
and achieved promising performance in skeleton-based human action recognition \cite{du2015hierarchical,zhu2016co,veeriah2015differential,nturgbd}.
In the existing literature, RNNs are mainly utilized in the temporal domain to discover the discriminative dynamics and motion patterns for action recognition.
However, there is also discriminative spatial information encoded in the joints' locations and posture configurations at each video frame,
and the sequential nature of the body joints makes it possible to apply RNN-based modeling in the spatial domain as well.
Different from the existing methods which concatenate the joints' information as the entire body's representation,
we extend the recurrent analysis to the spatial domain by discovering the spatial dependency patterns among different body joints.
We propose a spatio-temporal LSTM (ST-LSTM) network to simultaneously model the temporal dependencies among different frames and also the spatial dependencies of different joints at the same frame.
Each ST-LSTM unit, which corresponds to one of the body joints,
receives the hidden representation of its own joint from the previous time step
and also the hidden representation of its previous joint at the current frame.
A schema of this model is illustrated in \figurename{ \ref{fig:STLSTM}}.
In this section, we assume the joints are arranged in a simple chain sequence, and the order is depicted in \figurename{ \ref{fig:tree16joints}(a)}.
In section \ref{sec:approach:skeltree}, we will introduce a more advanced traversal scheme to take advantage of the adjacency structure among the skeletal joints.
We use $j$ and $t$ to respectively denote the indices of joints and frames,
where $j \in \{1,...,J\}$ and $t \in \{1,...,T\}$.
Each ST-LSTM unit is fed with the input ($x_{j, t}$, the information of the corresponding joint at current time step),
the hidden representation of the previous joint at current time step $(h_{j-1,t})$,
and the hidden representation of the same joint at the previous time step $(h_{j,t-1})$.
As depicted in \figurename{ \ref{fig:STLSTMFig}},
each unit also has two forget gates, $f_{j, t}^{T}$ and $f_{j, t}^{S}$, to handle the two sources of context information in temporal and spatial dimensions, respectively.
The transition equations of ST-LSTM are formulated as follows:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{j, t} \\
f_{j, t}^{S} \\
f_{j, t}^{T} \\
o_{j, t} \\
u_{j, t} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M
\left(
\begin{array}{ccc}
x_{j, t} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
c_{j, t} &=& i_{j, t} \odot u_{j, t} + f_{j, t}^{S} \odot c_{j-1, t} + f_{j, t}^{T} \odot c_{j, t-1}
\\
h_{j, t} &=& o_{j, t} \odot \tanh( c_{j, t})
\end{eqnarray}
\begin{figure}
\centerline{\includegraphics[scale=0.479]{STLSTMFig.pdf}}
\caption{Illustration of the proposed ST-LSTM with one unit.}
\label{fig:STLSTMFig}
\end{figure}
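Analogously, a minimal NumPy sketch of one ST-LSTM step at position $(j,t)$ is given below; the names and the explicit bias vector are illustrative assumptions rather than part of our implementation.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_lstm_step(x, h_left, c_left, h_prev, c_prev, M, b):
    """One ST-LSTM step at (j, t).
    h_left, c_left: states of joint j-1 at frame t (spatial context).
    h_prev, c_prev: states of joint j at frame t-1 (temporal context).
    M has shape (5d, D+2d) and b has shape (5d,)."""
    d = h_prev.shape[0]
    z = M @ np.concatenate([x, h_left, h_prev]) + b
    i   = sigmoid(z[0*d:1*d])      # input gate
    f_S = sigmoid(z[1*d:2*d])      # spatial forget gate
    f_T = sigmoid(z[2*d:3*d])      # temporal forget gate
    o   = sigmoid(z[3*d:4*d])      # output gate
    u   = np.tanh(z[4*d:5*d])      # modulated input
    c = i * u + f_S * c_left + f_T * c_prev
    h = o * np.tanh(c)
    return h, c
\end{verbatim}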
\subsection{Tree-Structure Based Traversal}
\label{sec:approach:skeltree}
\begin{figure}
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{Skeleton16Joints.pdf}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{0.63\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{Tree16Joints.pdf}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.99\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{BiTree16Joints.pdf}}
\centerline{(c)}
\end{minipage}
\caption{(a) The skeleton of the human body. In the simple joint chain model, the joint visiting order is 1-2-3-...-16.
(b) The skeleton is transformed to a tree structure.
(c) The tree traversal scheme. The tree structure can be unfolded to a chain with the traversal scheme, and the joint visiting order is 1-2-3-2-4-5-6-5-4-2-7-8-9-8-7-2-1-10-11-12-13-12-11-10-14-15-16-15-14-10-1.}
\label{fig:tree16joints}
\end{figure}
Arranging the skeletal joints in a simple chain order ignores the kinematic interdependencies among the body joints.
Moreover, it introduces semantically false connections between joints that are not strongly related.
The body joints are popularly represented as a tree-based pictorial structure \cite{zou2009automatic,yang2011articulated} in human parsing,
as shown in \figurename{ \ref{fig:tree16joints}(b)}.
It is beneficial to utilize the known interdependency relations between various sets of body joints as an adjacency tree structure inside our ST-LSTM network as well.
For instance, the hidden representation of the neck joint (joint 2 in \figurename{ \ref{fig:tree16joints}(a)})
is often more informative for the right hand joints (7, 8, and 9) compared to the joint 6, which lies before them in the numerically ordered chain-like model.
Although using a tree structure for the skeletal data is more reasonable here, tree structures cannot be directly fed into the proposed ST-LSTM network in its current form.
In order to mitigate the aforementioned limitation, a bidirectional tree traversal scheme is proposed.
In this scheme, the joints are visited in a sequence, while the adjacency information in the skeletal tree structure will be maintained.
At the first spatial step, the root node (central spine joint in \figurename{ \ref{fig:tree16joints}(c)}) is fed to our network.
Then the network follows the depth-first traversal order in the spatial (skeleton tree) domain.
Upon reaching a leaf node, the traversal backtracks in the tree.
Finally, the traversal goes back to the root node.
In our traversal scheme, each connection in the tree is traversed twice,
which guarantees the transmission of the context data in both top-down and bottom-up directions within the adjacency tree structure.
In other words, each node (joint) can obtain the context information from both its ancestors and descendants in the hierarchy defined by the tree structure.
Compared to the simple joint chain order described in section \ref{sec:approach:stlstm},
this tree traversal strategy, which takes advantage of the joints' adjacency structure, can discover stronger long-term spatial dependency patterns in the skeleton sequence.
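The traversal order can be generated recursively from the skeletal tree; the following minimal sketch (using a hypothetical toy tree rather than the full 16-joint skeleton) shows how revisiting the parent after each subtree yields an order of the kind depicted in \figurename{ \ref{fig:tree16joints}(c)}.
\begin{verbatim}
def tree_traversal_order(children, root):
    """Depth-first traversal that revisits each node when backtracking,
    so every edge of the tree is met exactly twice."""
    order = [root]
    for child in children.get(root, []):
        order += tree_traversal_order(children, child)
        order.append(root)          # backtrack through the parent
    return order

# Hypothetical toy tree: 1 -> {2, 4}, 2 -> {3}
children = {1: [2, 4], 2: [3]}
print(tree_traversal_order(children, 1))   # [1, 2, 3, 2, 1, 4, 1]
\end{verbatim}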
Our framework's representation capacity can be further improved by stacking multiple layers of the tree-structured ST-LSTMs and making the network deeper, as shown in \figurename{ \ref{fig:stackedTreeSTLSTM}}.
It is worth noting that at each step of our ST-LSTM framework,
the input is limited to the information of a single joint at a time step,
and its dimension is much smaller compared to the concatenated input features used by other existing methods.
Therefore, our network has far fewer learnable parameters.
This can be regarded as a weight sharing regularization for our learning model,
which leads to better generalization in the scenarios with relatively small sets of training samples.
This is an important advantage for skeleton-based action recognition, since the numbers of training samples in most existing datasets are limited.
\begin{figure}
\begin{minipage}[b]{0.99\linewidth}
\centering
\centerline{\includegraphics[scale=.38]{StackedTreeSTLSTM.pdf}}
\end{minipage}
\caption{
Illustration of the deep tree-structured ST-LSTM network.
For clarity, some arrows are omitted in this figure.
The hidden representation of the first ST-LSTM layer is fed to the second ST-LSTM layer as its input.
The second ST-LSTM layer's hidden representation is fed to the softmax layer for classification.
}
\label{fig:stackedTreeSTLSTM}
\end{figure}
\subsection{Spatio-Temporal LSTM with Trust Gates}
\label{sec:approach:trustgate}
In our proposed tree-structured ST-LSTM network, the inputs are the positions of body joints provided by depth sensors (such as Kinect),
which are not always accurate because of noisy measurements and occlusion.
The unreliable inputs can degrade the performance of the network.
To circumvent this difficulty, we propose to add a novel gate to our ST-LSTM network that analyzes the reliability of the input measurements at each spatio-temporal step, based on estimations of the input derived from the available context information.
Our gating scheme is inspired by the works in natural language processing \cite{sutskever2014sequence},
which use the LSTM representation of previous words at each step to predict the next coming word.
As the words in a sentence are often highly interdependent, this idea works well.
Similarly, in a skeletal sequence, the neighboring body joints often move together,
and this articulated motion follows common yet complex patterns,
thus the input data $x_{j,t}$ is expected to be predictable by using the contextual information ($h_{j-1,t}$ and $h_{j,t-1}$) at each spatio-temporal step.
Inspired by this predictability concept, we add a new mechanism to our ST-LSTM that produces a prediction of the input at each step and compares it with the actual input.
The amount of estimation error is then used to learn a new ``trust gate''.
The activation of this new gate can assist the ST-LSTM network in making better decisions about when and how to remember or forget the contents of the memory cell.
For instance, if the trust gate learns that the current joint has wrong measurements,
then this gate can block the input gate and prevent the memory cell from being altered by the current unreliable input data.
Concretely, we introduce a function to produce a prediction of the input at step $(j,t)$ based on the available context information as:
\begin{equation}
p_{j, t} = \tanh
\left(
M_{p}
\left(
\begin{array}{ccc}
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\label{eq:p_j_t}
\end{equation}
where $M_p$ is an affine transformation mapping the data from $\mathbb{R}^{2d}$ to $\mathbb{R}^d$, thus the dimension of $p_{j,t}$ is $d$.
Note that the context information at each step does not only contain the representation of the previous temporal step,
but also the hidden state of the previous spatial step.
This indicates that the long-term context information of both the same joint at previous frames and of the other visited joints at the current frame is seamlessly incorporated.
Thus this function is expected to be capable of generating reasonable predictions.
In our proposed network, the activation of the trust gate is a vector in $\mathbb{R}^d$ (similar to the activations of the input gate and forget gate).
The trust gate $\tau_{j, t}$ is calculated as follows:
\begin{eqnarray}
x'_{j, t} &=& \tanh
\left(
M_{x}
\left(
x_{j, t}
\right)
\right)
\label{eq:x_prime_j_t}
\\
\tau_{j, t} &=& G (p_{j, t} - x'_{j, t})
\label{eq:tau}
\end{eqnarray}
where $M_x: \mathbb{R}^{D} \to \mathbb{R}^{d}$ is an affine transformation.
The activation function $G(\cdot)$ is an element-wise operation calculated as $G(z) = \exp(-\lambda z^{2})$,
where $\lambda > 0$ is a parameter controlling the bandwidth of the Gaussian function.
$G(z)$ produces a small response if $z$ has a large absolute value and a large response when $z$ is close to zero.
Adding the proposed trust gate, the cell state of ST-LSTM will be updated as:
\begin{eqnarray}
c_{j, t} &=& \tau_{j, t} \odot i_{j, t} \odot u_{j, t}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}) \odot f_{j, t}^{S} \odot c_{j-1, t}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}) \odot f_{j, t}^{T} \odot c_{j, t-1}
\end{eqnarray}
This equation can be explained as follows:
(1) if the input $x_{j,t}$ is not trusted (due to noise or occlusion),
then the network relies more on its history information and tries to block the new input at this step;
(2) conversely, if the input is reliable, then the memory cell is updated with the new input data.
The proposed ST-LSTM unit equipped with trust gate is illustrated in \figurename{ \ref{fig:TrustGateSTLSTMFig}}.
The concept of the proposed trust gate technique is theoretically generic and can be used in other domains to handle noisy input information for recurrent network models.
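As a minimal sketch of the trust gate and the resulting cell update (with illustrative names, biases omitted, and the gate activations assumed to be computed as in the plain ST-LSTM step):
\begin{verbatim}
import numpy as np

def trust_gate_cell_update(x, h_left, h_prev, c_left, c_prev,
                           i, f_S, f_T, u, M_p, M_x, lam=0.5):
    """Cell update of ST-LSTM with a trust gate.
    i, f_S, f_T, u: gate activations from the plain ST-LSTM step.
    M_p: (d, 2d) and M_x: (d, D) affine maps (biases omitted)."""
    p = np.tanh(M_p @ np.concatenate([h_left, h_prev]))  # predicted input
    x_tilde = np.tanh(M_x @ x)                            # transformed input
    tau = np.exp(-lam * (p - x_tilde) ** 2)               # G(z) = exp(-lambda z^2)
    c = (tau * i * u
         + (1.0 - tau) * f_S * c_left
         + (1.0 - tau) * f_T * c_prev)
    return c, tau
\end{verbatim}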
\begin{figure}
\centerline{\includegraphics[scale=0.479]{TrustGateSTLSTMFig_X.pdf}}
\caption{Illustration of the proposed ST-LSTM with trust gate.}
\label{fig:TrustGateSTLSTMFig}
\end{figure}
\subsection{Feature Fusion within ST-LSTM Unit}
\label{sec:approach:innerfusion}
\begin{figure}
\centerline{\includegraphics[scale=0.469]{FusionSTLSTMFig.pdf}}
\caption{Illustration of the proposed structure for feature fusion inside the ST-LSTM unit.}
\label{fig:FusionSTLSTMFig}
\end{figure}
As mentioned above, at each spatio-temporal step, the positional information of the corresponding joint at the current frame is fed to our ST-LSTM network.
Here we refer to the joint position-based feature as the geometric feature.
Besides the joint positions (3D coordinates),
we can also extract visual texture and motion features ({\emph{e.g.}}~ HOG, HOF \cite{dalal2006human,wang2011action}, or ConvNet-based features \cite{simonyan2014very,cheron2015p})
from the RGB frames around each body joint as complementary information.
This is intuitively effective for better human action representation, especially in the human-object interaction scenarios.
A naive way of combining the geometric and visual features for each joint is to concatenate them at the feature level
and feed the result to the corresponding ST-LSTM unit as the network's input.
However, the dimension of the geometric feature is very low intrinsically,
while the visual features are often in relatively higher dimensions.
Due to this inconsistency, simple concatenation of these two types of features in the input stage of the network causes degradation in the final performance of the entire model.
The work in \cite{nturgbd} feeds different body parts into the Part-aware LSTM \cite{nturgbd} separately,
and then assembles them inside the LSTM unit.
Inspired by this work, we propose to fuse the two types of features inside the ST-LSTM unit,
rather than simply concatenating them at the input level.
We use $x_{j,t}^{\mathcal{F}}$ (${\mathcal{F}} \in \{1,2\}$) to denote the geometric feature and visual feature for a joint at the $t$-th time step.
As illustrated in \figurename{ \ref{fig:FusionSTLSTMFig}}, at step $(j,t)$, the two features $(x_{j,t}^{1}$ and $x_{j,t}^{2})$ are fed to the ST-LSTM unit separately as the new input structure.
Inside the recurrent unit, we deploy two sets of gates, input gates $(i_{j,t}^{\mathcal{F}})$, forget gates with respect to time $(f_{j,t}^{T, \mathcal{F}})$ and space $(f_{j,t}^{S, \mathcal{F}})$, and also trust gates $(\tau_{j, t}^{\mathcal{F}})$, to deal with the two heterogeneous sets of modality features.
We put the two cell representations $(c_{j,t}^{\mathcal{F}})$ together to build up the multimodal context information of the two sets of modality features.
Finally, the output of each ST-LSTM unit is calculated based on the multimodal context representations,
and controlled by the output gate $(o_{j,t})$ which is shared for the two sets of features.
For the features of each modality, it is efficient and intuitive to model their context information independently.
However, we argue that the representation ability of each modality-based set of features can be strengthened by borrowing information from the other set.
Thus, the proposed structure does not completely separate the modeling of multimodal features.
Let us take the geometric feature as an example.
Its input gate, forget gates, and trust gate are all calculated from the new input $(x_{j,t}^{1})$ and hidden representations $(h_{j,t-1}$ and $h_{j-1,t})$,
where each hidden representation jointly encodes the context information of both feature modalities from the previous steps.
Assisted by visual features' context information,
the input gate, forget gates, and also trust gate for geometric feature can effectively learn how to update its current cell state $(c_{j,t}^{1})$.
Specifically, for the new geometric feature input $(x_{j,t}^{1})$,
we expect the network to produce a better prediction when it is not only based on the context of the geometric features, but also assisted by the context of visual features.
Therefore, the trust gate $(\tau_{j, t}^{1})$ will have stronger ability to assess the reliability of the new input data $(x_{j,t}^{1})$.
The proposed ST-LSTM with integrated multimodal feature fusion is formulated as:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{j, t}^\mathcal{F} \\
f_{j, t}^{S,\mathcal{F}} \\
f_{j, t}^{T,\mathcal{F}} \\
u_{j, t}^\mathcal{F} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M^\mathcal{F}
\left(
\begin{array}{ccc}
x_{j, t}^\mathcal{F} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
p_{j, t}^\mathcal{F} &=& \tanh
\left(
M_{p}^\mathcal{F}
\left(
\begin{array}{ccc}
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
{x'}_{j, t}^\mathcal{F} &=& \tanh
\left(
M_{x}^\mathcal{F}
\left(
\begin{array}{ccc}
x_{j, t}^\mathcal{F}\\
\end{array}
\right)
\right)
\\
\tau_{j, t}^{\mathcal{F}} &=& G ({x'}_{j, t}^{\mathcal{F}} - p_{j, t}^{\mathcal{F}})
\\
c_{j, t}^{\mathcal{F}} &=& \tau_{j, t}^{\mathcal{F}} \odot i_{j, t}^{\mathcal{F}} \odot u_{j, t}^{\mathcal{F}}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}^{\mathcal{F}}) \odot f_{j, t}^{S,\mathcal{F}} \odot c_{j-1, t}^{\mathcal{F}}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}^{\mathcal{F}}) \odot f_{j, t}^{T,\mathcal{F}} \odot c_{j, t-1}^{\mathcal{F}}
\\
o_{j, t} &=& \sigma
\left(
M_{o}
\left(
\begin{array}{ccc}
x_{j, t}^{1} \\
x_{j, t}^{2} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
h_{j, t} &=& o_{j, t} \odot \tanh
\left(
\begin{array}{ccc}
c_{j, t}^{1} \\
c_{j, t}^{2} \\
\end{array}
\right)
\end{eqnarray}
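The following sketch illustrates one possible reading of these equations, in which each modality keeps its own $d$-dimensional cell and the unit's hidden state is their fused $2d$-dimensional representation; all parameter names are illustrative and biases are omitted.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fused_st_lstm_step(x1, x2, h_left, h_prev,
                       c1_left, c1_prev, c2_left, c2_prev, P):
    """ST-LSTM step with in-unit fusion of two modalities.
    P is a dict of illustrative parameters: per-modality maps M1, M2
    (gates), Mp1, Mp2, Mx1, Mx2 (trust gates), the shared output map
    Mo, and the bandwidth lam."""
    h_ctx = np.concatenate([h_left, h_prev])
    d = c1_prev.shape[0]
    cells = []
    mods = [(x1, P["M1"], P["Mp1"], P["Mx1"], c1_left, c1_prev),
            (x2, P["M2"], P["Mp2"], P["Mx2"], c2_left, c2_prev)]
    for x, M, Mp, Mx, c_left, c_prev in mods:
        z = M @ np.concatenate([x, h_ctx])
        i, f_S, f_T = sigmoid(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:3*d])
        u = np.tanh(z[3*d:4*d])
        tau = np.exp(-P["lam"] * (np.tanh(Mp @ h_ctx) - np.tanh(Mx @ x)) ** 2)
        cells.append(tau * i * u
                     + (1 - tau) * f_S * c_left
                     + (1 - tau) * f_T * c_prev)
    o = sigmoid(P["Mo"] @ np.concatenate([x1, x2, h_ctx]))  # shared output gate
    h = o * np.tanh(np.concatenate(cells))
    return h, cells[0], cells[1]
\end{verbatim}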
\subsection{Learning the Classifier}
\label{sec:approach:learning}
As the labels are given at the video level, we use them as the training outputs of our network at each spatio-temporal step.
A softmax layer is used by the network to predict the action class $\hat{y}$ among the given class set $Y$.
The prediction of the whole video can be obtained by averaging the prediction scores of all steps.
The objective function of our ST-LSTM network is as follows:
\begin{equation}
\mathcal{L} = \sum_{j=1}^J \sum_{t=1}^T l(\hat{y}_{j,t}, y)
\end{equation}
where $l(\hat{y}_{j,t}, y)$ is the negative log-likelihood loss \cite{graves2012supervised}
that measures the difference between the prediction result $\hat{y}_{j,t}$ at step $(j,t)$ and the true label $y$.
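As a simple sketch (assuming the per-step softmax scores have already been computed and stacked into an array), the objective and the video-level prediction can be obtained as follows; the function name is illustrative.
\begin{verbatim}
import numpy as np

def video_loss_and_prediction(scores, y):
    """scores: (J, T, C) per-step class probabilities; y: true class index.
    Returns the negative log-likelihood summed over all spatio-temporal
    steps and the video-level prediction from the averaged scores."""
    nll = -np.log(scores[:, :, y] + 1e-12).sum()
    prediction = int(scores.mean(axis=(0, 1)).argmax())
    return nll, prediction
\end{verbatim}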
The back-propagation through time (BPTT) algorithm \cite{graves2012supervised} is often effective for minimizing the objective function for the RNN/LSTM models.
As our ST-LSTM model involves both spatial and temporal steps, we adopt a modified version of BPTT for training.
The back-propagation runs over spatial and temporal steps simultaneously, starting at the last joint of the last frame.
To clarify the error accumulation in this procedure, we use $e_{j,t}^T$ and $e_{j,t}^S$ to denote the error back-propagated from step $(j,t+1)$ to $(j,t)$ and the error back-propagated from step $(j+1,t)$ to $(j,t)$, respectively.
Then the errors accumulated at step $(j,t)$ can be calculated as $e_{j,t}^T+e_{j,t}^S$.
Consequently, before back-propagating the error at each step, we must ensure that the errors of both its subsequent spatial step and its subsequent temporal step have already been computed.
The left-most units in our ST-LSTM network do not have preceding spatial units, as shown in \figurename{ \ref{fig:STLSTM}}.
To update the cell states of these units in the feed-forward stage,
a popular strategy is to input zero values into these nodes to substitute the hidden representations from the preceding nodes.
In our implementation, we link the last unit of the previous time step to the first unit of the current time step.
We call this new connection the last-to-first link.
In the tree traversal, the first and last nodes refer to the same joint (the root node of the tree);
however, the hidden representation of the last node contains holistic information about the human skeleton in the corresponding frame.
Linking the last node to the starting node at the next time step provides the starting node with the whole body structure configuration,
rather than initializing it with less effective zero values.
Thus, the network has better ability to learn the action patterns in the skeleton sequence.
\section{Experiments}
\label{sec:exp}
The proposed method is evaluated and empirically analyzed on seven benchmark datasets for which the coordinates of skeletal joints are provided.
These datasets are NTU RGB+D, UT-Kinect, SBU Interaction, SYSU-3D, ChaLearn Gesture, MSR Action3D, and Berkeley MHAD.
We conduct extensive experiments with different models to verify the effectiveness of individual technical contributions proposed, as follows:
(1) ``ST-LSTM (Joint Chain)''.
In this model, the joints are visited in a simple chain order, as shown in \figurename{ \ref{fig:tree16joints}(a)};
(2) ``ST-LSTM (Tree)''.
In this model, the tree traversal scheme illustrated in \figurename{ \ref{fig:tree16joints}(c)} is used to take advantage of the tree-based spatial structure of skeletal joints;
(3) ``ST-LSTM (Tree) + Trust Gate''.
This model uses the trust gate to handle the noisy input.
The input to every unit of our network at each spatio-temporal step is the location of the corresponding skeletal joint (i.e., the geometric feature) at the current time step.
We also use two of the datasets (NTU RGB+D dataset and UT-Kinect dataset) as examples
to evaluate the performance of our fusion model within the ST-LSTM unit by fusing the geometric and visual features.
These two datasets include human-object interactions (such as making a phone call and picking up something)
and the visual information around the major joints can be complementary to the geometric features for action recognition.
\subsection{Evaluation Datasets}
\label{sec:exp:datasets}
{\bf NTU RGB+D dataset} \cite{nturgbd} was captured with Kinect (v2).
It is currently the largest publicly available dataset for depth-based action recognition, which contains more than 56,000 video sequences and 4 million video frames.
The samples in this dataset were collected from 80 distinct viewpoints.
A total of 60 action classes (including daily actions, medical conditions, and pair actions) were performed by 40 different persons aged between 10 and 35.
This dataset is very challenging due to the large intra-class and viewpoint variations.
With a large number of samples, this dataset is highly suitable for deep learning based activity analysis.
The parameters learned on this dataset can also be used to initialize the models for smaller datasets to improve and speed up the training process of the network.
The 3D coordinates of 25 body joints are provided in this dataset.
{\bf UT-Kinect dataset} \cite{HOJ3D} was captured with a stationary Kinect sensor.
It contains 10 action classes.
Each action was performed twice by every subject.
The 3D locations of 20 skeletal joints are provided.
The significant intra-class and viewpoint variations make this dataset very challenging.
{\bf SBU Interaction dataset} \cite{yun2012two} was collected with Kinect.
It contains 8 classes of two-person interactions, and includes 282 skeleton sequences with 6822 frames.
Each body skeleton consists of 15 joints.
The major challenges of this dataset are:
(1) in most interactions, one subject is acting, while the other subject is reacting; and
(2) the 3D measurement accuracies of the joint coordinates are low in many sequences.
{\bf SYSU-3D dataset} \cite{jianfang_CVPR15} contains 480 sequences and was collected with Kinect.
In this dataset, 12 different activities were performed by 40 persons.
The 3D coordinates of 20 joints are provided in this dataset.
The SYSU-3D dataset is a very challenging benchmark because:
(1) the motion patterns are highly similar among different activities, and
(2) there are various viewpoints in this dataset.
{\bf ChaLearn Gesture dataset} \cite{escalera2013multi} consists of 23 hours of videos captured with Kinect.
A total of 20 Italian gestures were performed by 27 different subjects.
This dataset contains 955 long-duration videos and has predefined splits of samples as training, validation and testing sets.
Each skeleton in this dataset has 20 joints.
{\bf MSR Action3D dataset} \cite{li2010action} is widely used for depth-based action recognition.
It contains a total of 10 subjects and 20 actions.
Each action was performed by the same subject two or three times.
Each frame in this dataset contains 20 skeletal joints.
{\bf Berkeley MHAD dataset} \cite{ofli2013berkeley} was collected by using a motion capture network of sensors.
It contains 659 sequences and about 82 minutes of recording time.
Eleven action classes were performed by five female and seven male subjects.
The 3D coordinates of 35 skeletal joints are provided in each frame.
\subsection{Implementation Details}
\label{sec:exp:impdetails}
In our experiments, each video sequence is divided into $T$ sub-sequences of the same length, and one frame is randomly selected from each sub-sequence.
This sampling strategy has the following advantages:
(1) Randomly selecting a frame from each sub-sequence adds variation to the input data and improves the generalization strength of our trained network.
(2) Assuming each sub-sequence contains $n$ frames, we have $n$ choices when sampling a frame from it.
Accordingly, for the whole video, we can obtain a total of $n^T$ sampling combinations.
This indicates that the training data can be greatly augmented.
We use different frame sampling combinations for each video over different training epochs.
This strategy is useful for handling the over-fitting issues,
as most datasets have limited numbers of training samples.
We observe that this strategy achieves better performance than uniformly sampling frames.
We cross-validated the performance based on the leave-one-subject-out protocol on the large scale NTU RGB+D dataset, and found $T=20$ as the optimum value.
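A minimal sketch of this sampling strategy is given below (it assumes the video has at least $T$ frames; the function name is illustrative).
\begin{verbatim}
import numpy as np

def sample_frames(num_frames, T=20, rng=None):
    """Pick one random frame index from each of T equal-length
    sub-sequences; different epochs draw different combinations."""
    rng = rng or np.random.default_rng()
    bounds = np.linspace(0, num_frames, T + 1, dtype=int)
    return [int(rng.integers(lo, hi))
            for lo, hi in zip(bounds[:-1], bounds[1:])]

# e.g. for a hypothetical 100-frame video:
print(sample_frames(100))   # 20 indices, one per sub-sequence
\end{verbatim}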
We use Torch7 \cite{collobert2011torch7} as the deep learning platform to perform our experiments.
We train the network with stochastic gradient descent,
and set the learning rate, momentum, and decay rate to $2\times10^{-3}$, $0.9$, and $0.95$, respectively.
We set the unit size $d$ to 128, and the parameter $\lambda$ used in $G(\cdot)$ to $0.5$.
Two ST-LSTM layers are used in our stacked network.
Although there are variations in terms of joint number, sequence length, and data acquisition equipment for different datasets,
we adopt the same parameter settings mentioned above for all datasets.
Our method achieves promising results on all the benchmark datasets without further tuning of these parameters, which shows the robustness of our method.
An NVIDIA TitanX GPU is used to perform our experiments.
We evaluate the computational efficiency of our method on the NTU RGB+D dataset and set the batch size to $100$.
On average, within one second, $210$, $100$, and $70$ videos can be processed
by using ``ST-LSTM (Joint Chain)'', ``ST-LSTM (Tree)'', and ``ST-LSTM (Tree) + Trust Gate'', respectively.
\subsection{Experiments on the NTU RGB+D Dataset}
\label{sec:exp:resNTU}
The NTU RGB+D dataset has two standard evaluation protocols \cite{nturgbd}.
The first protocol is the cross-subject (X-Subject) evaluation protocol,
in which half of the subjects are used for training and the remaining subjects are kept for testing.
The second is the cross-view (X-View) evaluation protocol,
in which $2/3$ of the viewpoints are used for training,
and $1/3$ unseen viewpoints are left out for testing.
We evaluate the performance of our method on both of these protocols.
The results are shown in \tablename{ \ref{table:resultNTU}}.
\begin{table}[!htp]
\caption{Experimental results on the NTU RGB+D Dataset}
\label{table:resultNTU}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Method & Feature & X-Subject & X-View \\
\hline
Lie Group \cite{vemulapalli2014liegroup} & Geometric & 50.1\% & 52.8\% \\
Cippitelli {\emph{et~al.}}~ \cite{cippitelli2016evaluation} & Geometric & 48.9\% & 57.7\% \\
Dynamic Skeletons \cite{jianfang_CVPR15} & Geometric & 60.2\% & 65.2\% \\
FTP \cite{rahmani20163d} & Geometric & 61.1\% & 72.6\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 59.1\% & 64.0\% \\
Deep RNN \cite{nturgbd} & Geometric & 56.3\% & 64.1\% \\
Part-aware LSTM \cite{nturgbd} & Geometric & 62.9\% & 70.3\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 61.7\% & 75.5\% \\
ST-LSTM (Tree) & Geometric & 65.2\% & 76.1\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{69.2\%} & \textbf{77.7\%} \\
\hline
\end{tabular}
\end{table}
In \tablename{ \ref{table:resultNTU}},
the Deep RNN model concatenates the joint features at each frame and feeds them to the network to model the temporal kinetics, while ignoring the spatial dynamics.
As can be seen, both ``ST-LSTM (Joint Chain)'' and ``ST-LSTM (Tree)'' models outperform this method by a notable margin.
It can also be observed that our approach utilizing the trust gate brings significant performance improvement,
because the data provided by Kinect is often noisy and multiple joints are frequently occluded in this dataset.
Note that our proposed models (such as ``ST-LSTM (Tree) + Trust Gate'') reported in this table only use skeletal data as input.
We compare the class specific recognition accuracies of ``ST-LSTM (Tree)'' and ``ST-LSTM (Tree) + Trust Gate'', as shown in \figurename{ \ref{fig:ClassAccuracy_NTU}}.
We observe that ``ST-LSTM (Tree) + Trust Gate'' significantly outperforms ``ST-LSTM (Tree)'' for most of the action classes,
which demonstrates our proposed trust gate can effectively improve the human action recognition accuracy by learning the degrees of reliability over the input data at each time step.
\begin{figure*}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=0.38]{ClassAccuracy_NTU.pdf}}
\end{minipage}
\caption{Recognition accuracy per class on the NTU RGB+D dataset}
\label{fig:ClassAccuracy_NTU}
\end{figure*}
As shown in \figurename{ \ref{fig:NTUNoisySamples}},
a notable portion of videos in the NTU RGB+D dataset were collected in side views.
Due to the design of Kinect's body tracking mechanism,
the skeletal data is less accurate in side views than in the front view.
To further investigate the effectiveness of the proposed trust gate,
we analyze the performance of the network by feeding only the side-view samples.
The accuracy of ``ST-LSTM (Tree)'' is 76.5\%,
while ``ST-LSTM (Tree) + Trust Gate'' yields 81.6\%.
This shows how the trust gate can effectively deal with the noise in the input data.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=0.199]{NoisySamples.jpg}}
\end{minipage}
\caption{Examples of the noisy skeletons from the NTU RGB+D dataset.}
\label{fig:NTUNoisySamples}
\end{figure}
To verify the performance boost by stacking layers,
we limit the depth of the network by using only one ST-LSTM layer,
and the accuracies drop to 65.5\% and 77.0\% based on the cross-subject and cross-view protocol, respectively.
This indicates our two-layer stacked network has better representation power than the single-layer network.
To evaluate the performance of our feature fusion scheme,
we extract visual features from several regions based on the joint positions and use them in addition to the geometric features (3D coordinates of the joints).
We extract HOG and HOF \cite{dalal2006human,wang2011action} features from a $80\times80$ RGB patch centered at each joint location.
For each joint, this produces a 300D visual descriptor,
and we apply PCA to reduce the dimension to 20.
The results are shown in \tablename{ \ref{table:resultNTUFusion}}.
We observe that our method using the visual features together with the joint positions improves the performance.
Besides, we compare our newly proposed feature fusion strategy within the ST-LSTM unit with two other feature fusion methods:
(1) early fusion which simply concatenates two types of features as the input of the ST-LSTM unit;
(2) late fusion which uses two ST-LSTMs to deal with two types of features respectively,
then concatenates the outputs of the two ST-LSTMs at each step,
and feeds the concatenated result to a softmax classifier.
We observe that our proposed feature fusion strategy is superior to other baselines.
\begin{table}[h]
\caption{Evaluation of different feature fusion strategies on the NTU RGB+D dataset.
``Geometric + Visual (1)'' indicates the early fusion scheme.
``Geometric + Visual (2)'' indicates the late fusion scheme.
``Geometric $\bigoplus$ Visual'' means our newly proposed feature fusion scheme within the ST-LSTM unit.}
\label{table:resultNTUFusion}
\centering
\begin{tabular}{|l|c|c|}
\hline
Feature Fusion Method & X-Subject & X-View
\\
\hline
Geometric Only & 69.2\% & 77.7\% \\
Geometric + Visual (1) & 70.8\% & 78.6\% \\
Geometric + Visual (2) & 71.0\% & 78.7\% \\
Geometric $\bigoplus$ Visual &73.2\% & 80.6\% \\
\hline
\end{tabular}
\\
\end{table}
We also evaluate the sensitivity of the proposed network with respect to the variation of neuron unit size and $\lambda$ values.
The results are shown in \figurename{ \ref{fig:NTUResultLambda}}.
When trust gate is added,
our network obtains better performance for all the $\lambda$ values compared to the network without the trust gate.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.57]{NTUResultLambda1.pdf}}
\centerline{\includegraphics[scale=.57]{NTUResultLambda2.pdf}}
\end{minipage}
\caption{(a) Performance comparison of our approach using different values of neuron size ($d$) on the NTU RGB+D dataset (X-subject).
(b) Performance comparison of our method using different $\lambda$ values on the NTU RGB+D dataset (X-subject).
The blue line represents our results when different $\lambda$ values are used for trust gate,
while the red dashed line indicates the performance of our method when trust gate is not added.}
\label{fig:NTUResultLambda}
\end{figure}
Finally, we investigate the recognition performance with early stopping conditions
by feeding the first $p$ portion of the testing video to the trained network based on the cross-subject protocol ($p \in \{0.1, 0.2, ..., 1.0\}$).
The results are shown in \figurename{ \ref{fig:NTUResultEarlyStop}}.
We can observe that the results are improved when a larger portion of the video is fed to our network.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.57]{NTUResultEarlyStop.pdf}}
\end{minipage}
\caption{Experimental results of our method by early stopping the network evolution at different time steps.}
\label{fig:NTUResultEarlyStop}
\end{figure}
\subsection{Experiments on the UT-Kinect Dataset}
\label{sec:exp:resUTKinect}
There are two evaluation protocols for the UT-Kinect dataset in the literature.
The first is the leave-one-out-cross-validation (LOOCV) protocol \cite{HOJ3D}.
The second protocol is suggested by \cite{zhu2013fusing}, in which half of the subjects are used for training, and the remaining ones are used for testing.
We evaluate our approach using both protocols on this dataset.
Using the LOOCV protocol,
our method achieves better performance than other skeleton-based methods,
as shown in \tablename{ \ref{table:resultUTKinectprotocol1}}.
Using the second protocol (see \tablename{ \ref{table:resultUTKinectprotocol2}}),
our method achieves a competitive result (95.0\%) compared to the Elastic functional coding method \cite{anirudh2015elastic} (94.9\%),
which is an extension of the Lie Group model \cite{vemulapalli2014liegroup}.
\begin{table}[!htp]
\caption{Experimental results on the UT-Kinect dataset (LOOCV protocol \cite{HOJ3D})}
\label{table:resultUTKinectprotocol1}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Grassmann Manifold \cite{slama2015accurate} & Geometric & 88.5\% \\
Jetley {\emph{et~al.}}~ \cite{jetley20143d} & Geometric& 90.0\% \\
Histogram of 3D Joints \cite{HOJ3D} & Geometric & 90.9\% \\
Space Time Pose \cite{devanne2013space} & Geometric & 91.5\% \\
Riemannian Manifold \cite{devanne20153d} & Geometric & 91.5\% \\
SCs (Informative Joints) \cite{jiang2015informative} & Geometric & 91.9\% \\
Chrungoo {\emph{et~al.}}~ \cite{chrungoo2014activity} & Geometric & 92.0\% \\
Key-Pose-Motifs Mining\cite{Wang_2016_CVPR_Mining} & Geometric & 93.5\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 91.0\% \\
ST-LSTM (Tree) & Geometric & 92.4\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{97.0\%} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!htp]
\caption{Results on the UT-Kinect dataset (half-vs-half protocol \cite{zhu2013fusing})}
\label{table:resultUTKinectprotocol2}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Skeleton Joint Features \cite{zhu2013fusing} & Geometric & 87.9\% \\
Chrungoo {\emph{et~al.}}~ \cite{chrungoo2014activity} & Geometric & 89.5\% \\
Lie Group \cite{vemulapalli2014liegroup} (reported by \cite{anirudh2015elastic}) & Geometric & 93.6\% \\
Elastic functional coding \cite{anirudh2015elastic} & Geometric & 94.9\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{95.0\%} \\
\hline
\end{tabular}
\end{table}
Some actions in the UT-Kinect dataset involve human-object interactions, so appearance-based features representing the visual information of the objects can be complementary to the geometric features.
We therefore evaluate our proposed feature fusion approach within the ST-LSTM unit on this dataset.
The results are shown in \tablename{ \ref{table:resultUTFusion}}.
Using geometric features only, the accuracy is 97\%.
By simply concatenating the geometric and visual features, the accuracy improves slightly.
However, the accuracy of our approach can reach 98\% when the proposed feature fusion method is adopted.
\begin{table}[h]
\caption{Evaluation of our approach for feature fusion on the UT-Kinect dataset (LOOCV protocol \cite{HOJ3D}).
``Geometric + Visual'' indicates we simply concatenate the two types of features as the input.
``Geometric $\bigoplus$ Visual'' means we use the newly proposed feature fusion scheme within the ST-LSTM unit.}
\label{table:resultUTFusion}
\centering
\begin{tabular}{|l|c|c|}
\hline
Feature Fusion Method & Acc. \\
\hline
Geometric Only & 97.0\% \\
Geometric + Visual & 97.5\% \\
Geometric $\bigoplus$ Visual &98.0\% \\
\hline
\end{tabular}
\\
\end{table}
\subsection{Experiments on the SBU Interaction Dataset}
\label{sec:exp:resSBU}
We follow the standard evaluation protocol in \cite{yun2012two} and perform 5-fold cross validation on the SBU Interaction dataset.
As two human skeletons are provided in each frame of this dataset,
our traversal scheme visits the joints of both skeletons over the spatial steps.
We report the results in terms of average classification accuracy in \tablename{ \ref{table:resultSBU}}.
The methods in \cite{zhu2016co} and \cite{du2015hierarchical} are both LSTM-based approaches, which are more relevant to our method.
\begin{table}[h]
\caption{Experimental results on the SBU Interaction dataset}
\label{table:resultSBU}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Yun {\emph{et~al.}}~ \cite{yun2012two} & Geometric & 80.3\% \\
Ji {\emph{et~al.}}~ \cite{ji2014interactive} & Geometric & 86.9\% \\
CHARM \cite{li2015category} & Geometric & 83.9\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 80.4\% \\
Co-occurrence LSTM \cite{zhu2016co} & Geometric & 90.4\% \\
Deep LSTM \cite{zhu2016co} & Geometric & 86.0\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 84.7\% \\
ST-LSTM (Tree) & Geometric & 88.6\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{93.3\%} \\
\hline
\end{tabular}
\end{table}
The results show that the proposed ``ST-LSTM (Tree) + Trust Gate'' model outperforms all other skeleton-based methods.
``ST-LSTM (Tree)'' achieves higher accuracy than ``ST-LSTM (Joint Chain)'',
as the latter adds some false links between less related joints.
Both Co-occurrence LSTM \cite{zhu2016co} and Hierarchical RNN \cite{du2015hierarchical} adopt the Savitzky-Golay filter \cite{savitzky1964smoothing} in the temporal domain
to smooth the skeletal joint positions and reduce the influence of noise in the data collected by Kinect.
The proposed ``ST-LSTM (Tree)'' model without the trust gate mechanism outperforms Hierarchical RNN,
and achieves comparable result (88.6\%) to Co-occurrence LSTM.
When the trust gate is used, the accuracy of our method jumps to 93.3\%.
\subsection{Experiments on the SYSU-3D Dataset}
\label{sec:exp:resSYSU}
We follow the standard evaluation protocol in \cite{jianfang_CVPR15} on the SYSU-3D dataset.
The samples from 20 subjects are used to train the model parameters,
and the samples of the remaining 20 subjects are used for testing.
We perform 30-fold cross validation and report the mean accuracy in \tablename{~\ref{table:resultSYSU}}.
\begin{table}[h]
\caption{Experimental results on the SYSU-3D dataset}
\label{table:resultSYSU}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
LAFF (SKL) \cite{hu2016ECCV} & Geometric & 54.2\% \\
Dynamic Skeletons \cite{jianfang_CVPR15} & Geometric & 75.5\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 72.1\% \\
ST-LSTM (Tree) & Geometric & 73.4\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{76.5\%} \\
\hline
\end{tabular}
\end{table}
The results in \tablename{~\ref{table:resultSYSU}} show that our proposed ``ST-LSTM (Tree) + Trust Gate'' method outperforms all the baseline methods on this dataset.
We can also find that the tree traversal strategy can help to improve the classification accuracy of our model.
As the skeletal joints provided by Kinect are noisy in this dataset,
the trust gate, which aims at handling noisy data, brings a significant performance improvement (about 3\%).
There are large viewpoint variations in this dataset.
To make our model robust against viewpoint variations,
we adopt a skeleton normalization procedure similar to that suggested by \cite{nturgbd} on this dataset.
In this preprocessing step, we perform a rotation transformation on each skeleton,
such that all the normalized skeletons face the same direction.
Specifically, after rotation, the 3D vector from ``right shoulder'' to ``left shoulder'' will be parallel to the X axis,
and the vector from ``hip center'' to ``spine'' will be aligned to the Y axis
(please see \cite{nturgbd} for more details about the normalization procedure).
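A sketch of this rotation is given below; the joint indices are dataset-specific and are therefore passed in explicitly, and the particular construction of the rotation matrix from the two reference vectors is an illustrative choice rather than the exact procedure of \cite{nturgbd}.
\begin{verbatim}
import numpy as np

def normalize_skeleton(joints, r_shoulder, l_shoulder, hip_center, spine):
    """Rotate a (J, 3) skeleton so that the right-to-left shoulder vector
    is parallel to the X axis and the hip-center-to-spine vector lies (as
    closely as possible) along the Y axis."""
    x_axis = joints[l_shoulder] - joints[r_shoulder]
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = joints[spine] - joints[hip_center]
    y_axis = y_axis - np.dot(y_axis, x_axis) * x_axis  # orthogonalize to X
    y_axis = y_axis / np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)                  # right-handed basis
    R = np.stack([x_axis, y_axis, z_axis])             # rows: new axes
    return joints @ R.T                                # rotated skeleton
\end{verbatim}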
We evaluate our ``ST-LSTM (Tree) + Trust Gate'' method by respectively using the original skeletons without rotation and the transformed skeletons,
and report the results in \tablename{~\ref{table:resultSYSURotation}}.
The results show that it is beneficial to use the transformed skeletons as the input for action recognition.
\begin{table}[h]
\caption{Evaluation for skeleton rotation on the SYSU-3D dataset}
\label{table:resultSYSURotation}
\centering
\begin{tabular}{|l|c|}
\hline
Method & Acc. \\
\hline
With Skeleton Rotation & 76.5\% \\
Without Skeleton Rotation & 73.0\% \\
\hline
\end{tabular}
\\
\end{table}
\subsection{Experiments on the ChaLearn Gesture Dataset}
\label{sec:exp:resChaLearn}
We follow the evaluation protocol adopted in \cite{wang2015hierarchical,fernando2015modeling}
and report the F1-score measures on the validation set of the ChaLearn Gesture dataset.
\begin{table}[h]
\caption{Experimental results on the ChaLearn Gesture dataset}
\label{table:resultChaLearn}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & F1-Score \\
\hline
Portfolios \cite{yao2014gesture} & Geometric & 56.0\% \\
Wu {\emph{et~al.}}~ \cite{wu2013fusing} & Geometric & 59.6\% \\
Pfister {\emph{et~al.}}~ \cite{pfister2014domain} & Geometric & 61.7\% \\
HiVideoDarwin \cite{wang2015hierarchical} & Geometric & 74.6\% \\
VideoDarwin \cite{fernando2015modeling} & Geometric & 75.2\% \\
Deep LSTM \cite{nturgbd} & Geometric & 87.1\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 89.1\% \\
ST-LSTM (Tree) & Geometric & 89.9\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{92.0\%} \\
\hline
\end{tabular}
\end{table}
As shown in \tablename{~\ref{table:resultChaLearn}},
our method surpasses the state-of-the-art methods \cite{yao2014gesture,wu2013fusing,pfister2014domain,wang2015hierarchical,fernando2015modeling,nturgbd},
which demonstrates the effectiveness of our method for the skeleton-based action recognition problem.
Compared to other methods, our method focuses on modeling both temporal and spatial dependency patterns in skeleton sequences.
Moreover, the proposed trust gate is incorporated into our method to handle the noisy skeleton data captured by Kinect,
which can further improve the results.
\subsection{Experiments on the MSR Action3D Dataset}
\label{sec:exp:resMSR3D}
We follow the experimental protocol in \cite{du2015hierarchical} on the MSR Action3D dataset,
and show the results in \tablename{~\ref{table:resultMSR3D}}.
On the MSR Action3D dataset, our proposed method, ``ST-LSTM (Tree) + Trust Gate'', achieves 94.8\% of classification accuracy,
which is superior to the Hierarchical RNN model \cite{du2015hierarchical} and other baseline methods.
\begin{table}[h]
\caption{Experimental results on the MSR Action3D dataset}
\label{table:resultMSR3D}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Histogram of 3D Joints \cite{HOJ3D} & Geometric & 79.0\% \\
Joint Angles Similarities \cite{hog2-ohnbar} & Geometric & 83.5\% \\
SCs (Informative Joints) \cite{jiang2015informative} & Geometric & 88.3\% \\
Oriented Displacements \cite{gowayyed2013histogram} & Geometric & 91.3\% \\
Lie Group \cite{vemulapalli2014liegroup} & Geometric & 92.5\% \\
Space Time Pose \cite{devanne2013space} & Geometric & 92.8\% \\
Lillo {\emph{et~al.}}~ \cite{lillo2016hierarchical} & Geometric & 93.0\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 94.5\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{94.8\%} \\
\hline
\end{tabular}
\end{table}
\subsection{Experiments on the Berkeley MHAD Dataset}
\label{sec:exp:resMHAD}
\begin{table}[h]
\caption{Experimental results on the Berkeley MHAD dataset}
\label{table:resultMHAD}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Ofli {\emph{et~al.}}~ \cite{Ofli2014jvci} & Geometric & 95.4\% \\
Vantigodi {\emph{et~al.}}~ \cite{vantigodi2013real} & Geometric & 96.1\% \\
Vantigodi {\emph{et~al.}}~ \cite{vantigodi2014action} & Geometric & 97.6\% \\
Kapsouras {\emph{et~al.}}~ \cite{kapsouras2014action} & Geometric & 98.2\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 100\% \\
Co-occurrence LSTM \cite{zhu2016co} & Geometric & 100\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{100\%} \\
\hline
\end{tabular}
\end{table}
We adopt the experimental protocol in \cite{du2015hierarchical} on the Berkeley MHAD dataset.
384 video sequences corresponding to the first seven persons are used for training,
and the 275 sequences of the remaining five persons are held out for testing.
The experimental results in \tablename{ \ref{table:resultMHAD}} show that our method achieves very high accuracy (100\%) on this dataset.
Unlike \cite{du2015hierarchical} and \cite{zhu2016co}, our method does not use any preliminary manual smoothing procedures.
\subsection{Visualization of Trust Gates}
\label{sec:visualization}
In this section, to better investigate the effectiveness of the proposed trust gate scheme, we study the behavior of the proposed framework against the presence of noise in skeletal data from the MSR Action3D dataset.
We manually rectify some noisy joints of the samples by referring to the corresponding depth images.
We then compare the activations of trust gates on the noisy and rectified inputs.
As illustrated in \figurename{ \ref{fig:TrustGateEffect}(a)},
the magnitude of the trust gate's output (the $l_2$ norm of the trust gate's activations) is smaller when a noisy joint is fed, compared to the corresponding rectified joint.
This demonstrates how the network controls the impact of noisy input on its stored representation of the observed data.
In our next experiment, we manually add noise to one joint for all testing samples on the Berkeley MHAD dataset, in order to further analyze the behavior of our proposed trust gate.
Note that the Berkeley MHAD dataset was collected with a motion capture system, thus
the skeletal joint coordinates in this dataset are much more accurate than those captured with Kinect sensors.
We add noise to the right foot joint by moving the joint away from its original location.
The direction of the translation vector is randomly chosen and its norm is a random value around $30$\,cm, which is significant noise relative to the scale of the human body.
We measure the difference in the magnitudes of trust gates' activations between the noisy data and the original ones.
For all testing samples, we carry out the same operations and then calculate the average difference.
The results in \figurename{ \ref{fig:TrustGateEffect}(b)} show that the magnitude of trust gate is reduced when the noisy data is fed to the network.
This shows that our network tries to block the flow of noisy input and stop it from affecting the memory.
We also observe that the overall accuracy of our network does not drop after adding the above-mentioned noise to the input data.
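A sketch of this perturbation procedure follows; the Gaussian distribution used to draw the norm around $30$\,cm is an illustrative assumption, and the function name is hypothetical.
\begin{verbatim}
import numpy as np

def perturb_joint(joints, joint_idx, norm_mean=30.0, rng=None):
    """Move one joint away from its true location along a random
    direction, with a norm drawn around norm_mean (here 30 cm,
    assuming joints is a (J, 3) array in centimetres)."""
    rng = rng or np.random.default_rng()
    direction = rng.normal(size=3)
    direction = direction / np.linalg.norm(direction)
    norm = rng.normal(loc=norm_mean, scale=0.1 * norm_mean)
    noisy = joints.copy()
    noisy[joint_idx] = noisy[joint_idx] + norm * direction
    return noisy
\end{verbatim}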
\begin{figure}[htb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[scale=.53]{VisualizationTrustGate1.pdf}}
\end{minipage}
\begin{minipage}[b]{0.52\linewidth}
\centering
\centerline{\includegraphics[scale=.53]{VisualizationTrustGate2.pdf}}
\end{minipage}
\caption{Visualization of the trust gate's behavior when inputting noisy data.
(a) $j_{3'}$ is a noisy joint position, and $j_3$ is the corresponding rectified joint location.
In the histogram, the blue bar indicates the magnitude of trust gate when inputting the noisy joint $j_{3'}$.
The red bar indicates the magnitude of the corresponding trust gate when $j_{3'}$ is rectified to $j_3$.
(b) Visualization of the difference between the trust gate calculated when the noise is imposed at the step $(j_N, t_N)$ and that calculated when inputting the original data.}
\label{fig:TrustGateEffect}
\end{figure}
\begin{table*}[htb]
\caption{Performance comparison of different spatial sequence models}
\label{table:resultDoubleChain}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\
\hline
ST-LSTM (Joint Chain) & 61.7\% & 75.5\% & 91.0\% & 84.7\% & 89.1\% \\
ST-LSTM (Double Joint Chain) & 63.5\% & 75.6\% & 91.5\% & 85.9\% & 89.2\% \\
ST-LSTM (Tree) & 65.2\% & 76.1\% & 92.4\% & 88.6\% & 89.9\% \\
\hline
\end{tabular}
\\
\end{table*}
\begin{table*}[tb]
\caption{Performance comparison of Temporal Average, LSTM, and our proposed ST-LSTM}
\label{table:resultLSTMTG}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture\\
\hline
Temporal Average & 47.6\% & 52.6\% & 81.9\% & 71.5\% & 77.9\% \\
\hline
LSTM & 62.0\% & 70.7\% & 90.5\% & 86.0\% & 87.1\% \\
LSTM + Trust Gate & 62.9\% & 71.7\% & 92.0\% & 86.6\% & 87.6\% \\
\hline
ST-LSTM & 65.2\% & 76.1\% & 92.4\% & 88.6\% & 89.9\% \\
ST-LSTM + Trust Gate & 69.2\% & 77.7\% & 97.0\% & 93.3\% & 92.0\% \\
\hline
\end{tabular}
\\
\end{table*}
\begin{table*}[tb]
\caption{Evaluation of the last-to-first link in our proposed network}
\label{table:resultLTFLink}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\
\hline
Without last-to-first link & 68.5\% & 76.9\% & 96.5\% & 92.1\% & 90.9 \% \\
With last-to-first link & 69.2\% & 77.7\% & 97.0\% & 93.3\% & 92.0 \% \\
\hline
\end{tabular}
\\
\end{table*}
\subsection{Evaluation of Different Spatial Joint Sequence Models}
\label{sec:discussion1}
The previous experiments showed how ``ST-LSTM (Tree)'' outperforms ``ST-LSTM (Joint Chain)'', because ``ST-LSTM (Tree)'' models the kinematic dependency structures of human skeletal sequences.
In this section, we further analyze the effectiveness of our ``ST-LSTM (Tree)'' model and compare it with a ``ST-LSTM (Double Joint Chain)'' model.
The ``ST-LSTM (Joint Chain)'' has fewer steps in the spatial dimension than the ``ST-LSTM (Tree)''.
One question that may arise here is whether the advantage of the ``ST-LSTM (Tree)'' model is only due to the longer, redundant sequence of joints fed to the network, rather than the proposed semantic relations between the joints.
To answer this question, we evaluate the effect of using a double chain scheme to increase the spatial steps of the ``ST-LSTM (Joint Chain)'' model.
Specifically, we use the joint visiting order 1-2-3-...-16-1-2-3-...-16,
and we call this model ``ST-LSTM (Double Joint Chain)''.
The results in \tablename{~\ref{table:resultDoubleChain}} show that the performance of ``ST-LSTM (Double Joint Chain)'' is better than ``ST-LSTM (Joint Chain)'',
yet inferior to ``ST-LSTM (Tree)''.
This experiment indicates that introducing additional passes in the spatial dimension is beneficial to the ST-LSTM.
A possible explanation is that the units visited in the second pass can access a global-level context representation produced in the first pass,
and can therefore generate better representations of the action patterns by using this context information.
However, the performance of ``ST-LSTM (Double Joint Chain)'' is still weaker than ``ST-LSTM (Tree)'',
though the numbers of their spatial steps are almost equal.
The proposed tree traversal scheme is superior because it connects the most semantically related joints
and avoids false connections between the less-related joints (unlike the other two compared models).
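To make the compared spatial orderings concrete, the following minimal Python sketch constructs the three joint visiting sequences; the 16-joint indexing and the toy skeleton tree are illustrative assumptions rather than the exact skeleton configuration used in our experiments.
\begin{verbatim}
# Minimal sketch of the three spatial joint orderings compared above.
# The 16-joint indexing and the toy tree are illustrative assumptions.

def joint_chain(num_joints=16):
    """Single chain: visit joints 1, 2, ..., 16 once."""
    return list(range(1, num_joints + 1))

def double_joint_chain(num_joints=16):
    """Double chain: 1-2-...-16-1-2-...-16 (two spatial passes)."""
    return joint_chain(num_joints) * 2

def tree_traversal(tree, root=1):
    """Depth-first traversal that revisits a parent when returning from
    a child, so only adjacent (semantically related) joints ever appear
    consecutively in the sequence."""
    order = [root]
    for child in tree.get(root, []):
        order += tree_traversal(tree, child)
        order.append(root)  # return to the parent joint
    return order

if __name__ == "__main__":
    toy_tree = {1: [2, 3, 4, 5, 6]}    # torso connected to head/limbs
    print(len(double_joint_chain()))   # 32 spatial steps
    print(tree_traversal(toy_tree))    # [1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1]
\end{verbatim}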
\subsection{Evaluation of Temporal Average, LSTM and ST-LSTM}
\label{sec:discussion2}
To further investigate the effect of simultaneous modeling of dependencies in spatial and temporal domains,
in this experiment, we replace our ST-LSTM with the original LSTM which only models the temporal dynamics among the frames without explicitly considering spatial dependencies.
We report the results of this experiment in \tablename{~\ref{table:resultLSTMTG}}.
As can be seen, our ``ST-LSTM + Trust Gate'' significantly outperforms ``LSTM + Trust Gate''.
This demonstrates that the proposed modeling of the dependencies in both temporal and spatial dimensions provides much richer representations than the original LSTM.
The second observation from this experiment is that adding our trust gate to the original LSTM
also improves its performance,
but the performance gain is smaller than that obtained on ST-LSTM.
A possible explanation is that we have both spatial and temporal context information at each step of ST-LSTM to generate a good prediction of the input at the current step (see Eq. (\ref{eq:p_j_t})),
thus our trust gate can achieve a good estimation of the reliability of the input at each step by using the prediction (see Eq. (\ref{eq:tau})).
However, in the original LSTM, the available context at each step is from the previous temporal step,
i.e., the prediction can only be based on the context in the temporal dimension,
thus the effectiveness of the trust gate is limited when it is added to the original LSTM.
This further demonstrates the effectiveness of our ST-LSTM framework for spatio-temporal modeling of the skeleton sequences.
In addition, we investigate the effectiveness of the LSTM structure for handling the sequential data.
We evaluate a baseline method (called ``Temporal Average'') by averaging the features from all frames instead of using LSTM.
Specifically, the geometric features are averaged over all the frames of the input sequence (i.e., the temporal ordering information in the sequence is ignored),
and then the resultant averaged feature is fed to a two-layer network, followed by a softmax classifier.
The performance of this scheme is much weaker than our proposed ST-LSTM with trust gate,
and also weaker than the original LSTM, as shown in \tablename{~\ref{table:resultLSTMTG}}.
The results demonstrate the representation strengths of the LSTM networks for modeling the dependencies and dynamics in sequential data, when compared to traditional temporal aggregation methods of input sequences.
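As a rough illustration of the ``Temporal Average'' baseline, the sketch below (a simplified assumption, using NumPy and arbitrary layer sizes) averages the per-frame geometric features over time and classifies the result with a two-layer network followed by a softmax.
\begin{verbatim}
import numpy as np

def temporal_average_baseline(features, w1, b1, w2, b2):
    """Average per-frame features over all frames (discarding temporal
    order), then apply a two-layer network followed by a softmax."""
    avg = features.mean(axis=0)              # temporal average
    hidden = np.maximum(0.0, avg @ w1 + b1)  # hidden layer (ReLU assumed)
    logits = hidden @ w2 + b2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()               # class probabilities

# Toy example: 50 frames, 48-dim features, 10 action classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 48))
w1, b1 = rng.normal(size=(48, 64)), np.zeros(64)
w2, b2 = rng.normal(size=(64, 10)), np.zeros(10)
print(temporal_average_baseline(feats, w1, b1, w2, b2).shape)  # (10,)
\end{verbatim}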
\subsection{Evaluation of the Last-to-first Link Scheme}
\label{sec:discussion3}
In this section, we evaluate the effectiveness of the last-to-first link in our model (see section \ref{sec:approach:learning}).
The results in \tablename{~\ref{table:resultLTFLink}} show the advantages of using the last-to-first link in improving the final action recognition performance.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have extended the RNN-based action recognition method to both spatial and temporal domains.
Specifically, we have proposed a novel ST-LSTM network which analyzes the 3D locations of skeletal joints at each frame and at each processing step.
A skeleton tree traversal method based on the adjacency graph of body joints is also proposed to better represent the structure of the input sequences and
to improve the performance of our network by connecting the most related joints together in the input sequence.
In addition, a new gating mechanism is introduced to improve the robustness of our network against the noise in input sequences.
A multi-modal feature fusion method is also proposed for our ST-LSTM framework.
The experimental results have validated the contributions and demonstrated the effectiveness of our approach,
which achieves better performance than the existing state-of-the-art methods on seven challenging benchmark datasets.
\section*{Acknowledgement}
This work was carried out at Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University.
ROSE Lab is supported by the National Research Foundation, Singapore, under its IDM Strategic Research Programme.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
\section{Introduction}
The adoption of convolutional neural networks (CNNs) \cite{lecun1989backpropagation} has brought huge success on many computer vision tasks such as classification and segmentation. One limitation of CNNs is their poor computational scalability with increasing input image size. With limited time and resources, it is necessary to be smart about selecting where, what, and how to look at the image. In a bird-specific fine-grained classification task, for example, it does not help much to pay attention to non-bird image parts such as trees and sky. Rather, one should focus on regions that play a decisive role in classification, such as the beak or wings. If a machine can learn how to pay attention to those regions, it will achieve better performance with lower energy usage.
In this context, the \textbf{Recurrent Attention Model (RAM)} \cite{mnih2014recurrent} introduced a visual attention method for the fine-grained classification task. By sequentially choosing where and what to look at, RAM achieved better performance with lower memory usage. Moreover, the attention mechanism addressed the criticism that deep learning models are black boxes, by enabling interpretation of the results. Still, there is room for improving RAM. In addition to where and what to look at, if one can give some clue on how to look, i.e. a task-specific hint, learning could be more intuitive and efficient. From this insight, we propose a novel architecture, the \textbf{Clued Recurrent Attention Model (CRAM)}, which inserts a problem-solving-oriented clue into RAM. These clues, or constraints, give directions to the machine for faster convergence and better performance.
For evaluation, we perform experiments on two computer vision tasks: classification and inpainting. In the classification task, the clue is given as a binary saliency map of the image which indicates the rough location of the object. In the inpainting task, the clue is given as a binary mask which indicates the location of the occluded region. The code is implemented in TensorFlow version 1.6.0 and available at https://github.com/brekkanegg/cram.
In summary, the contributions of this work are as follows:
\begin{enumerate}
\item We propose a novel model, the Clued Recurrent Attention Model (CRAM), which inserts a clue into RAM for more efficient problem solving.
\item We define clues for the classification and inpainting tasks, respectively, that are easy to interpret and obtain.
\item We evaluate CRAM on the classification and inpainting tasks, showing that it is a powerful extension of RAM.
\end{enumerate}
\section{Related Work}
\subsection{Recurrent Attention Model (RAM)}
RAM \cite{mnih2014recurrent} first proposed a recurrent neural network (RNN) \cite{mikolov2010recurrent} based attention model inspired by the human visual system. When humans are confronted with a large image which is too big to be seen at a glance, they process the image part by part depending on their interest. By selectively choosing what and where to look, RAM showed higher performance while reducing computation and memory usage. However, since RAM attends to image regions using a sampling method, it has the weakness of requiring REINFORCE rather than back-propagation for optimization. Following RAM, the Deep Recurrent Attention Model (DRAM) \cite{ba2014multiple} presented an advanced architecture for multiple object recognition, and the Deep Recurrent Attentive Writer (DRAW) \cite{gregor2015draw} introduced a sequential image generation method that does not rely on REINFORCE.
The spatial transformer network (STN) \cite{jaderberg2015spatial} first proposed a parametric spatial attention module for the object classification task. This model includes a localization network that outputs the parameters for selecting the region to attend to in the input image. Recently, the Recurrent Attentional Convolutional-Deconvolutional Network (RACDNN) \cite{kuen2016recurrent} combined the strengths of both RAM and STN for the saliency detection task. By replacing RAM's locating module with an STN, RACDNN can sequentially select where to attend in the image while still using back-propagation for optimization. This paper mainly adopts the RACDNN architecture with some technical twists to effectively insert the clue, which acts as a supervisor for problem solving.
\section{CRAM}
The architecture of CRAM is based on an encoder-decoder structure. The encoder is similar to RACDNN \cite{kuen2016recurrent}, with a modified spatial transformer network \cite{jaderberg2015spatial} and an inserted clue. While the encoder is identical regardless of the task, the decoder differs depending on whether the given task is classification or inpainting. Figure \ref{fig:overall} shows the overall architecture of CRAM.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{overall_architecture.png}
\caption{Overall architecture of CRAM. Note that the image and clue differ depending on the task (bottom left and bottom right).}
\label{fig:overall}
\end{figure}
\subsection{\bf{Encoder}}
The encoder is composed of four subnetworks: a context network, a spatial transformer network, a glimpse network, and a core recurrent neural network. The overall architecture of the encoder is shown in Figure \ref{fig:enc}. Following the flow of information, we describe each network in turn.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{encoder.jpg}
\caption{Architecture of the CRAM encoder. Note that the figure illustrates the inpainting task, where the clue is given as a binary mask that indicates the occluded region.}
\label{fig:enc}
\end{figure}
\textbf{Context Network: }The context network is the first part of the encoder; it receives the image and the clue as inputs and outputs the initial state tuple {$r_{0}^{(2)}$}, which is the first input of the second layer of the core recurrent neural network, as shown in Figure \ref{fig:enc}. Using the downsampled image {$(i_{coarse})$} and downsampled clue {$(c_{coarse})$}, the context network provides a reasonable starting point for choosing the image region to concentrate on. The downsampled image and clue are processed with a CNN followed by an MLP.
\begin{align}\label{eq:cn}
c_{0} = MLP_{c}(CNN_{context}(i_{coarse}, c_{coarse})) \\
h_{0} = MLP_{h}(CNN_{context}(i_{coarse}, c_{coarse}))
\end{align}
where ({$c_{0}$}, {$h_{0}$}) is the initial state tuple {$r_{0}^{(2)}$}.
\textbf{Spatial Transformer Network: }The spatial transformer network (STN) selects the region to attend to, considering the given task and clue \cite{jaderberg2015spatial}. Different from the existing STN, CRAM uses a modified STN which receives the image, the clue, and the output of the second layer of the core RNN as inputs and outputs a glimpse patch. From now on, the glimpse patch denotes the attended image region, cropped and zoomed in. Here, the STN is composed of two parts: a localization part, which calculates the transformation matrix {$\tau$} with a CNN and an MLP, and a transformer part, which zooms into the image using the transformation matrix {$\tau$} and obtains the glimpse. The affine transformation matrix {$\tau$} with isotropic scaling and translation is given in Equation \ref{eq:tau}.
\begin{equation}\label{eq:tau}
\tau = \begin{bmatrix}
s & 0 & t_{x} \\
0 & s & t_{y}\\
0 & 0 & 1
\end{bmatrix}
\end{equation}
where {$s, t_{x}, t_{y}$} are the scaling, horizontal translation and vertical translation parameters, respectively.
The overall process of the STN is shown in Figure \ref{fig:stn}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{stn.png}
\caption{Architecture of the STN. The STN consists of a localization part, which calculates {$\tau$}, and a transformer part, which obtains the glimpse.}
\label{fig:stn}
\end{figure}
In equations, the STN process is as follows:
\begin{equation}\label{eq:sn}
glimpse\_patch_{n} = STN(image, clue, \tau_{n})
\end{equation}
where {$n$} in {$glimpse\_patch_{n}$} is the core RNN step, ranging from 1 to the total number of glimpses. {$\tau_{n}$} is obtained by the equation below.
\begin{equation}\label{eq:en}
\tau_{n} = MLP_{loc}(CNN_{i}(image)\oplus CNN_{c}(clue)\oplus MLP_{r}(r_{n}^{(2)}))
\end{equation}
where {$\oplus$} denotes the concatenation operation.
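As a small illustration of the localization and transformer parts, the sketch below builds the matrix {$\tau$} of Equation \ref{eq:tau} and maps a regular grid of normalized glimpse coordinates into image coordinates; the normalized-coordinate convention and grid size are assumptions borrowed from common STN implementations, and bilinearly sampling the image at these coordinates would yield the glimpse patch.
\begin{verbatim}
import numpy as np

def affine_tau(s, tx, ty):
    """Isotropic scaling + translation matrix of Eq. (tau)."""
    return np.array([[s, 0.0, tx],
                     [0.0, s, ty],
                     [0.0, 0.0, 1.0]])

def glimpse_coords(s, tx, ty, out_h=8, out_w=8):
    """Map an out_h x out_w grid of normalized coordinates in [-1, 1]
    to (normalized) image coordinates using tau."""
    tau = affine_tau(s, tx, ty)
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # homogeneous
    return grid @ tau.T   # (out_h, out_w, 3); last channel stays 1

# A glimpse that zooms into the upper-left quadrant of the image.
coords = glimpse_coords(s=0.5, tx=-0.5, ty=-0.5)
print(coords[..., 0].min(), coords[..., 0].max())  # -1.0 0.0
\end{verbatim}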
\textbf{Glimpse Network: }The glimpse network is a non-linear function which receives the current glimpse patch {$glimpse\_patch_{n}$ ($gp_{n}$)} and the attended region information {$\tau_{n}$} as inputs and outputs the glimpse vector for the current step. The glimpse vector is later used as the input of the first layer of the core RNN. {$glimpse\_vector_{n}$ ($gv_{n}$)} is obtained by a multiplicative interaction between the extracted features of {$glimpse\_patch_{n}$} and {$\tau_{n}$}. This form of interaction was first proposed by \cite{larochelle2010learning}. As in the other networks, a CNN and an MLP are used for feature extraction.
\begin{equation}\label{eq:gn}
\begin{split}
gv_{n} = MLP_{what}(CNN_{what}(gp_{n})) \odot MLP_{where}(\tau_{n})
\end{split}
\end{equation}
where {$\odot$} is an element-wise vector multiplication operation.
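A minimal sketch of this multiplicative ``what''/``where'' interaction is given below; the single dense layers stand in for the actual CNN and MLP sub-networks and are simplifying assumptions.
\begin{verbatim}
import numpy as np

def glimpse_vector(gp_features, tau, w_what, w_where):
    """gv_n = MLP_what(CNN_what(gp_n)) * MLP_where(tau_n), element-wise.
    gp_features stands in for the CNN features of the glimpse patch."""
    what = np.tanh(gp_features @ w_what)        # appearance ("what")
    where = np.tanh(np.asarray(tau) @ w_where)  # location ("where")
    return what * where                         # element-wise product

rng = np.random.default_rng(1)
gv = glimpse_vector(rng.normal(size=256),       # glimpse features
                    [0.5, -0.5, -0.5],          # tau = (s, tx, ty)
                    rng.normal(size=(256, 128)),
                    rng.normal(size=(3, 128)))
print(gv.shape)  # (128,)
\end{verbatim}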
\textbf{Core Recurrent Neural Network: }The recurrent neural network is the core structure of CRAM; it aggregates the information extracted from the stepwise glimpses and calculates the encoded vector $z$. Iterating for a fixed number of RNN steps (the total number of glimpses), the core RNN receives {$glimpse\_vector_{n}$} at its first layer. The output of the second layer, {$r_{n}^{(2)}$}, is in turn used by the spatial transformer network's localization part as in Equation \ref{eq:en}.
\begin{equation}\label{eq:rn}
r_{n}^{(1)} = R_{recur}^{ 1}(glimpse\_vector_{n}, r_{n-1}^{(1)}) \\
\end{equation}
\begin{equation}\label{eq:rn2}
r_{n}^{(2)} = R_{recur}^{ 2}(r_{n}^{(1)}, r_{n-1}^{(2)})
\end{equation}
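The two-layer recurrence can be summarized in a few lines; the sketch below uses plain tanh cells in place of the LSTM units purely to show how {$glimpse\_vector_{n}$} feeds the first layer and how {$r_{n}^{(2)}$} is fed back to the localization network.
\begin{verbatim}
import numpy as np

def rnn_cell(x, h_prev, w_x, w_h):
    """Simplified recurrent cell (stands in for an LSTM unit)."""
    return np.tanh(x @ w_x + h_prev @ w_h)

def core_rnn_step(gv_n, r1_prev, r2_prev, p):
    """r1_n = R^1(gv_n, r1_{n-1});  r2_n = R^2(r1_n, r2_{n-1})."""
    r1_n = rnn_cell(gv_n, r1_prev, p["w_x1"], p["w_h1"])
    r2_n = rnn_cell(r1_n, r2_prev, p["w_x2"], p["w_h2"])
    return r1_n, r2_n   # r2_n is consumed by the STN localization part

rng = np.random.default_rng(2)
dim = 128
p = {k: 0.1 * rng.normal(size=(dim, dim))
     for k in ("w_x1", "w_h1", "w_x2", "w_h2")}
r1, r2 = np.zeros(dim), np.zeros(dim)
for _ in range(6):              # e.g. six glimpses
    gv = rng.normal(size=dim)   # glimpse vector from the glimpse network
    r1, r2 = core_rnn_step(gv, r1, r2, p)
print(r2.shape)  # (128,)
\end{verbatim}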
\subsection{\bf{Decoder}}
\subsubsection{Classification}
As in a general image classification approach, the encoded $z$ is passed through an MLP which outputs the probability of each class. The decoder for classification is shown in Figure \ref{fig:deccls}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{cls_decoder.png}
\caption{Architecture of CRAM decoder for image classification.}
\label{fig:deccls}
\end{figure}
\subsubsection{Inpainting}
Utilizing the architecture of DCGAN \cite{radford2015unsupervised}, the contaminated image is completed starting from the encoded $z$ produced by the encoder. To ensure the quality of the completed image, we adopt the generative adversarial network (GAN) \cite{goodfellow2014generative} framework at both local and global scales \cite{iizuka2017globally}. Here the decoder works as the generator, and local and global discriminators evaluate the plausibility of its output at local and global scales, respectively. The decoder for inpainting is shown in Figure \ref{fig:dec}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ip_decoder.jpg}
\caption{Architecture of CRAM decoder and discriminators for image inpainting.}
\label{fig:dec}
\end{figure}
\section{Training}
The loss function of CRAM can be divided into two parts: an encoder-related loss ({$L_{enc}$}) and a decoder-related loss ({$L_{dec}$}). {$L_{enc}$} constrains the glimpse patches to be consistent with the clue. For the classification task, where the clue is the object saliency, it is favorable if the glimpse patches cover as much of the salient part as possible. For the inpainting task, there should be a supervisor that encourages the glimpse patches to contain the occluded region, since the neighborhood of the occlusion is the most relevant part for completion. To satisfy the above conditions for both the classification and inpainting cases, {$L_{enc}$} (or {$L_{clue}$}) is defined as follows:
\begin{equation}\label{eq:lossg}
L_{enc}=L_{clue}(clue, STN, \tau) = \sum_{n}{STN(clue, \tau_{n})}
\end{equation}
where {$STN$} is the trained spatial transformer network of Equation \ref{eq:sn} and {$\tau_{n}$}, of the form given in Equation \ref{eq:tau}, is obtained at each step of the core RNN. Note that the clue is a binary image for both the classification and inpainting tasks.
Since {$L_{dec}$} differs depending on whether the task is classification or inpainting, the decoder losses are explained separately for the two tasks below.
\subsection{Classification}
The decoder-related loss for the image classification task uses the cross-entropy loss, as in a general classification approach. The total loss {$L_{tot-cls}$} for image classification then becomes:
\begin{align}\label{eq:losscls}
L_{tot-cls} &= L_{enc} + L_{dec} \\
& = L_{clue}(clue, STN, \tau) + L_{cls}(Y, Y^{*})
\end{align}
where the clue is the binary image which takes the value 1 for the salient part and 0 otherwise, and {$Y$} and {$Y^{*}$} are the predicted and ground-truth class label vectors, respectively.
\subsection{Inpainting}
The decoder-related loss for image inpainting consists of a reconstruction loss and a GAN loss.
The reconstruction loss makes the completion more stable, and the GAN loss improves the quality of the restoration. For the reconstruction loss, an L1 loss restricted to the contaminated region of the input is used:
\begin{equation}\label{eq:reconloss}
L_{recon}(z, clue, Y^{*}) = \| clue \odot (G(z) - Y^{*}) \| _{1}
\end{equation}
where $z$ is the encoded vector from the encoder, the clue is the binary image which takes the value 1 for the occluded region and 0 otherwise, $G$ is the generator (or decoder), and {$Y^{*}$} is the original image before contamination.
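A minimal sketch of this masked L1 reconstruction loss, assuming NumPy arrays for the generated image, the original image, and the binary clue mask:
\begin{verbatim}
import numpy as np

def recon_loss(generated, original, clue):
    """L_recon = || clue * (G(z) - Y*) ||_1: the L1 error restricted to
    the occluded region marked by the binary clue mask."""
    return np.abs(clue * (generated - original)).sum()

# Toy example: 32x32 RGB images with the central 8x8 block occluded
# (6.25% of the pixels, as in the SVHN experiment below).
rng = np.random.default_rng(3)
original = rng.uniform(size=(32, 32, 3))
generated = rng.uniform(size=(32, 32, 3))
clue = np.zeros((32, 32, 1))
clue[12:20, 12:20] = 1.0   # 1 inside the occluded region, 0 elsewhere
print(recon_loss(generated, original, clue))
\end{verbatim}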
Since there are two discriminators, at local and global scale, the GAN loss is the sum of the local GAN loss and the global GAN loss.
\begin{equation}\label{eq:ganlosses}
\begin{split}
L_{gan} &= L_{global\_gan} + L_{local\_gan}
\end{split}
\end{equation}
The GAN losses at local and global scale are defined as follows:
\begin{equation}\label{eq:ganloss}
\begin{split}
L_{local\_gan} &= log(1-D_{local}(Y^{*} \odot clue)) \\ &+ logD_{local}(G(image, clue) \odot clue) \\
\end{split}
\end{equation}
\begin{equation}\label{eq:ganloss2}
\begin{split}
L_{global\_gan} &= log(1-D_{global}(Y^{*} ))\\ &+ logD_{global}(G(image, clue))
\end{split}
\end{equation}
Combining Equations \ref{eq:lossg}, \ref{eq:reconloss} and \ref{eq:ganlosses}, the total loss for image inpainting {$L_{tot-ip}$} becomes:
\begin{align}\label{eq:ganloss3}
L_{tot-ip} &= L_{enc} + L_{dec} \\
&= L_{clue} + \alpha L_{recon} +\beta L_{gan}
\end{align}
where {$\alpha$} and {$\beta$} are weighting hyperparameters and {$L_{gan}$} is the sum of {$L_{global\_gan}$} and {$L_{local\_gan}$}.
\section{Implementation Details}
\subsection{Classification}
In order to obtain the clue, i.e. the saliency map, we use a convolutional-deconvolutional network (CNN-DecNN) \cite{noh2015learning} as shown in Figure \ref{fig:cnndecnn}. The CNN-DecNN is pre-trained on the MSRA10k \cite{cheng2015global} dataset, which is by far the largest publicly available saliency detection dataset, containing 10,000 annotated saliency images. This CNN-DecNN is trained with Adam \cite{kingma2014adam} using default settings. At both training and inference time, the rough saliency (the clue) is obtained from the pre-trained CNN-DecNN.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{cnndecnn.png}
\caption{CNN-DecNN to obtain the rough saliency of the image. This rough saliency is the clue for the classification task.}
\label{fig:cnndecnn}
\end{figure}
As mentioned earlier, the encoder consists of four subnetworks: the context network, spatial transformer network, glimpse network, and core RNN.
The image and the clue are downsampled by a factor of 4 and used as inputs to the context network. Each passes through a 3-layer CNN (3 x 3 kernel size, 1 x 1 stride, same zero padding), each layer followed by max pooling (3 x 3 kernel size, 2 x 2 stride, same zero padding), and outputs a vector. These vectors are concatenated and passed through a 2-layer MLP which outputs the initial state for the second layer of the core RNN.
The localization part of the spatial transformer network consists of a CNN and an MLP. For the image and clue inputs, a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) is applied, and a 2-layer MLP is applied to the output of the second core RNN layer. The output vectors of the CNN and the MLP are concatenated and passed through another 2-layer MLP which produces {$s, t_{x}, t_{y}$}, the parameters of {$\tau$}.
The glimpse network receives the glimpse patch and the {$\tau$} above as inputs. A 1-layer MLP is applied to {$\tau$}, while the glimpse patch passes through a 3-layer CNN and a 1-layer MLP so that its vector length matches that of the transformed {$\tau$} vector. The glimpse vector is obtained by element-wise multiplication of these two output vectors.
The core RNN is composed of 2 layers with Long Short-Term Memory (LSTM) units \cite{hochreiter1997long}, chosen for their ability to learn long-range dependencies and their stable learning dynamics.
The decoder is quite simple, consisting only of a 3-layer MLP.
The number of CNN filters, the MLP dimensions, the core RNN dimensions, and the number of core RNN steps vary depending on the size of the image.
All CNN and MLP layers except the last include batch normalization \cite{ioffe2015batch} and ELU activation \cite{clevert2015fast}.
We use the Adam optimizer \cite{kingma2014adam} with a learning rate of 1e-4.
\subsection{Inpainting}
The encoder settings are identical to the image classification case.
The decoder (or generator) consists of fractionally-strided convolutions (3 x 3 kernel size, 1/2 stride) applied until the original image size is recovered.
Both the local and global discriminators are CNN-based and extract features from the image to judge its genuineness. The local discriminator is composed of a 4-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. The global discriminator consists of a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. A sigmoid function is applied to the last outputs of the local and global discriminators to ensure the output values lie between 0 and 1. All CNN, fractionally-strided CNN, and MLP layers except the last include batch normalization and ELU activation. As in the classification settings, the number of CNN filters, the number of fractionally-strided CNN filters, the MLP dimensions, the core RNN dimensions, and the number of core RNN steps vary depending on the size of the image.
\section{Experiment}
\subsection{Image Classification}
Work in progress.
\subsection{Image Inpainting}
\subsubsection{Dataset}
The Street View House Numbers (SVHN) dataset \cite{netzer2011reading} is a real-world image dataset for object recognition obtained from house numbers in Google Street View images. The SVHN dataset contains 73,257 training digits and 26,032 testing digits of size 32 x 32 in RGB color.
\subsubsection{Result}
Figure \ref{fig:svhn} shows the inpainting results on the SVHN dataset, where 6.25\% of the pixels at the center of each image are occluded. Even though the results are not excellent, they are enough to show the potential and scalability of CRAM. With a better generative model, better performance is expected.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{svhn.png}
\caption{Experimental results on SVHN. From left to right: the ground truth, the contaminated input image, the image generated by the CRAM decoder, and finally the completed image, in which only the missing region is replaced by the generated content.}
\label{fig:svhn}
\end{figure}
\section{Conclusion}
Work in progress.
\bibliographystyle{IEEEtran}
|
\section{Introduction}
Over the past decade, our large-scale view of the Universe has
undergone a revolution. Cosmologists have agreed on a standard model
that matches a wide range of astronomical data (e.g. Spergel et
al. 2007). However, this $\Lambda$CDM concordance model relies on
three ingredients whose origin and nature are unknown: dark matter,
dark energy and fundamental fields driving a period of inflation,
during which density fluctuations are imprinted on the Universe. All
these elements of the model represent new aspects of fundamental
physics, which can best be studied via astronomy. The nature of the
dark energy, which now comprises the bulk of the mass-energy budget of
the Universe, will determine the ultimate fate of the Universe and is
among the deepest questions in physics.
The most powerful tool that can be brought to bear on these problems
is weak gravitational lensing of distant galaxies; this forms the core
of the DUNE mission\footnote{for further information on DUNE:
www.dune-mission.net}. Gravitational deflection of light by
intervening dark matter concentrations causes the images of background
galaxies to acquire an additional ellipticity of order of a percent,
which is correlated over scales of tens of arcminutes. Measuring this
signature probes the expansion history in two complementary ways: (1)
geometrically, through the distance-redshift relation, and (2)
dynamically, through the growth rate of density fluctuations in the
Universe.
Utilisation of these cosmological probes relies on the measurement of
image shapes and redshifts for several billion galaxies. The
measurement of galaxy shapes for weak lensing imposes tight
requirements on the image quality which can only be met in the absence
of atmospheric phase errors and in the thermally stable environment of
space. For this number of galaxies, distances must be estimated using
photometric redshifts, involving photometry measurements over a wide
wavelength range in the visible and near-IR. The necessary visible
galaxy colour data can be obtained from the ground, using current or
upcoming instruments, complementing the unique image quality of space
for the measurement of image distortions. However, at wavelengths
beyond 1$\mu$m, we require a wide NIR survey to depths that are only
achievable from space.
Given the importance of the questions being addressed and to provide
systematic cross-checks, DUNE will also measure Baryon Acoustic
Oscillations, the Integrated Sachs-Wolfe effect, and galaxy Cluster
Counts. Combining these independent cosmological probes, DUNE will
tackle the following questions: What are the dynamics of dark energy?
What are the physical characteristics of the dark matter? What are
the seeds of structure formation and how did structure grow? Is
Einstein's theory of General Relativity the correct theory of gravity?
DUNE will combine its unique space-borne observations with existing and
planned ground-based surveys, and hence increase the science return
of the mission while limiting costs and risks. The panoramic visible
and NIR surveys required by DUNE's primary science goals will afford
unequalled sensitivity and survey area for the study of galaxy
evolution and its relationship with the distribution of the dark
matter, the discovery of high redshift objects, and of the physical
drivers of star formation. Additional surveys at low galactic
latitudes will provide a unique census of the Galactic plane and
earth-mass exoplanets at distances of 0.5-5 AU from their host star
using the microlensing technique. These DUNE surveys will provide a
unique all-sky map in the visible and NIR and thus complement other
space missions such as Planck, WMAP, eROSITA, JWST, and WISE. The
following describes the science objectives, instrument concept and
mission profile (see Table~\ref{table:summary} for a baseline
summary). A description of an earlier version of the mission without
NIR capability and developed during a CNES phase 0 study can be found
in Refregier et al. 2006 and Grange et al. 2006.
\begin{table}
\caption{DUNE Baseline summary}
\label{table:summary}
\begin{tabular}{|l|l|}
\hline
Science objectives & Must: Cosmology and Dark Energy. Should: Galaxy formation\\
& Could: Extra-solar planets\\
\hline
Surveys & Must: 20,000 deg$^2$ extragalactic, Should: Full sky (20,000
deg$^2$ \\
& Galactic), 100 deg$^2$ medium-deep. Could: 4 deg$^2$ planet hunting\\
\hline
Requirements & 1 visible band (R+I+J) for high-precision shape measurements,\\
& 3 NIR bands (Y, J, H) for photometry\\
\hline
Payload & 1.2m telescope, Visible \& NIR cameras with 0.5 deg$^2$ FOV
each\\
\hline
Service module & Mars/Venus express, Gaia heritage \\
\hline
Spacecraft & 2013kg launch mass\\
\hline
Orbit & Geosynchronous\\
\hline
Launch & Soyuz S-T Fregat\\
\hline
Operations & 4 year mission\\
\hline
\end{tabular}
\end{table}
\section{\label{section2}Science Objectives}
The DUNE mission will investigate a broad range of astrophysics and
fundamental physics questions detailed below. Its aims are twofold:
first, to study dark energy and measure its equation of state parameter
$w$ (see definition below) and its evolution with precisions of 2\%
and 10\% respectively, using both the expansion history and structure
growth; second, to explore the nature of dark matter by testing the Cold
Dark Matter (CDM) paradigm and by measuring precisely the sum of the
neutrino masses. At the same time, it will test the validity of
Einstein's theory of gravity. In addition, DUNE will investigate how
galaxies form, survey all Milky-Way-like galaxies in the 2$\pi$
extra-galactic sky out to $z \sim 2$ and detect thousands of galaxies
and AGN at $6<z<12$. It will provide a detailed visible/NIR map of
the Milky Way and nearby galaxies and provide a statistical
census of exoplanets with masses above 0.1 Earth mass and orbits
greater than 0.5 AU.
\subsection{Understanding Dark Energy}
A variety of independent observations overwhelmingly indicate that the
cosmological expansion began to accelerate when the Universe was
around half of its present age. Presuming the correctness of general
relativity this requires a new energy component known as dark
energy. The simplest case would be Einstein's cosmological constant
($\Lambda$), in which the dark energy density would be exactly
homogeneous and independent of time. However, the description of
vacuum energy from current Particle Physics concepts conflicts by 120
orders of magnitude with the observed value, and the discrepancy is
still not understood. Cosmologists are thus strongly motivated to
consider models of a dynamical dark energy, or even to contemplate
modifications to General Relativity. Explaining dark energy may well
require a radical change in our understanding of Quantum Theory or
Gravity, or both. One of the major aims of DUNE is to determine
empirically which of these alternatives is to be preferred. The
properties of dark energy can be quantified by considering its
equation of state parameter $w=p/\rho c^2$, where $p$ and $\rho$ are
its effective pressure and density. Unlike matter, dark energy has the
remarkable property of having negative pressure ($w<0$) and thus of
driving the Universe into a period of accelerated expansion (if
$w<-1/3$). The latter began relatively recently, around $z \le 1$. If
the dark energy resembles a cosmological constant ($w=-1$), it can
only be directly probed in the low-redshift Universe (see
Fig.~\ref{figc1}). This expansion history can be measured in two
distinct ways (see Fig.~\ref{figc1}): (1) the distance-redshift
relation $D(z)$; (2) the growth of structure (i.e. galaxies and
clusters of galaxies). The $D(z)$ relation can be probed geometrically
using 'standard candles' such as supernovae, or via measures of the
angular diameter distance from gravitational lensing or from the
``standard rod'' of Baryon Acoustic Oscillations (BAO). The
accelerated expansion slows down the gravitational growth of density
fluctuations; this growth of structure can be probed by comparing the
amplitude of structure today relative to that when the CMB was formed.
Many models for dark energy and modifications to gravity have been
proposed in which the equation of state parameter $w$ varies with
time. A convenient approximation is a linear dependence on the scale
factor $a=1/(1+z)$: $w(a)=w_n+(a_n-a)w_a$, where $w_n$ is the value of
the equation of state at a pivot scale factor $a_n$ (close to 0.6 for
most probes) and $w_a$ describes the redshift evolution. The goal of
future surveys is to measure $w_n$ and $w_a$ to high precision. To
judge their relative strengths we use a standard dark energy figure of
merit (FoM) (Albrecht et al. 2006), which we define throughout this
proposal as: $FoM=1/(\Delta w_n \Delta w_a)$, where $\Delta w_n$ and
$\Delta w_a$ are the (1$\sigma$) errors on the equation of state
parameters. This FoM is inversely proportional to the area of the
error ellipse in the ($w_n-w_a$) plane.
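As a quick illustration of this figure of merit, the short Python sketch below reproduces the FoM values quoted in Table~\ref{tableC2} from the corresponding (1$\sigma$) errors on the equation of state parameters.
\begin{verbatim}
# Dark energy figure of merit: FoM = 1 / (dw_n * dw_a),
# evaluated for the 1-sigma errors quoted in Table 2.
def fom(dw_n, dw_a):
    return 1.0 / (dw_n * dw_a)

print(round(fom(0.03, 3.0)))   # Planck alone   -> 11
print(round(fom(0.02, 0.1)))   # DUNE alone     -> 500
print(round(fom(0.01, 0.05)))  # DUNE + Planck  -> 2000
\end{verbatim}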
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,angle=0]{figtheo1_2.eps}\includegraphics[width=0.5\textwidth,angle=0]{figtheo2_2.eps}
\end{center}
\caption{Effects of dark energy. Left: Fraction of the density of the
Universe in the form of dark energy as a function of redshift $z$,
for a model with a cosmological constant ($w=-1$, black solid line),
dark energy with a different equation of state ($w=-0.7$, red dotted
line), and a modified gravity model (blue dashed line). Dark energy
becomes dominant in the low redshift Universe era probed by DUNE,
while the early Universe is probed by the CMB. Right: Growth factor of
structures for the same models. Only by measuring the geometry
(bottom panel) and the growth of structure (top panel) at low
redshifts can a modification of dark energy be distinguished from that
of gravity. Weak lensing measures both effects. }
\label{figc1}
\end{figure}
\subsection{DUNE's Cosmological Probes}
DUNE will deduce the expansion history from the two methods,
distance-redshift relation and growth of structure. DUNE has thus the
advantage of probing the parameters of dark energy in two independent
ways. A single accurate technique can rule out many of the suggested
members of the family of dark energy models, but it cannot test the
fundamental assumptions about gravity theory. If General Relativity is
correct, then either $D(z)$ or the growth of structure can determine
the expansion history. In more radical models that violate General
Relativity, however, this equivalence between $D(z)$ and growth of
structure does not apply (see Fig.~\ref{figc1}). For this purpose,
DUNE will use a combination of the following cosmological probes. The
precision on Dark Energy parameters achieved by DUNE's weak lensing
survey and complementary probes described below is shown in
Fig.~\ref{figc3} and Table~\ref{tableC2}.
{\it Weak Lensing - A Dark Universe Probe:}
As light from galaxies travels towards us, its path is deflected by
the intervening mass density distribution, causing the shapes of these
galaxies to appear distorted by a few percent. The weak lensing method
measures this distortion by
correlating the shapes of background galaxies to probe the density
field of the Universe. By dividing galaxies into redshift (or
distance) bins, we can examine the growth of structure and make
three-dimensional maps of the dark matter. An accurate lensing survey,
therefore, requires precise measurements of the shapes of galaxies as
well as information about their redshifts. High-resolution images of
large portions of the sky are required, with low levels of systematic
errors that can only be achieved via observations from a thermally
stable satellite in space. Analyses of the dark energy require precise
measurements of both the cosmic expansion history and the growth of
structure. Weak lensing stands apart from all other available methods
because it is able to make accurate measurements of both
effects. Given this, the optimal dark energy mission (and dark sector
mission) is one that is centred on weak gravitational lensing and is
complemented by other dark energy probes.
{\it Baryon Acoustic Oscillations (BAO) -- An Expansion History
Probe:}
The scale of the acoustic oscillations caused by the
coupling between radiation and baryons in the early Universe can be
used as a 'standard ruler' to determine the distance-redshift
relation. Using DUNE, we can perform BAO measurements using
photometric redshifts yielding the three-dimensional positions of a
large sample of galaxies. All-sky coverage in the NIR enabled by DUNE,
impossible from the ground, is crucial to reach the necessary
photometric redshift accuracy for this BAO survey.
{\it Cluster Counts (CC) -- A Growth of Structure Probe:}
Counts of the abundance of galaxy clusters (the most massive bound
objects in the Universe) as a function of redshift are a powerful
probe of the growth of structure. There are three ways to exploit the
DUNE large-area survey, optimised for weak lensing, for cluster
detection: strong lensing; weak lensing; and optical richness.
{\it Integrated Sachs-Wolfe (ISW) Effect -- A Higher Redshift
Probe:} The ISW effect is the change in CMB photon energy as it
passes through a changing potential well. Its presence indicates
either space curvature, a dark energy component or a modification to
gravity. The ISW effect is measured by cross-correlating the CMB with
a foreground density field covering the entire extra-galactic sky, as
measured by DUNE. Because it is a local probe of structure growth, ISW
will place complementary constraints on dark energy, at higher
redshifts, relative to the other probes (Douspis et al. 2008).
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,angle=0]{Errors1_new.ps}\includegraphics[width=0.5\textwidth,angle=0]{Errors2_new.ps}
\end{center}
\caption{Left: Expected errors on the dark energy equation of state
parameters for the four probes used by DUNE (68\% statistical
errors). The light blue band indicates the expected errors from
Planck. Of the four methods, weak lensing clearly has the greatest
potential. Right: The combination of BAO, CC and ISW (red solid
line) begins to reach the potential of the lensing survey (blue
dashed line) and provides an additional cross-check on
systematics. The yellow ellipse corresponds to the combination of
all probes and reaches a precision on dark energy of 2\% on $w_n$
and 10\% on $w_a$.}
\label{figc3}
\end{figure}
\subsection{Understanding Dark Matter}
Besides dark energy, one major component of the concordance model of
cosmology is dark matter ($\sim90$\% of the matter in the Universe,
and $\sim 25$\% of the total energy). The standard assumption is that
the dark matter particle(s) is cold and non-collisional (CDM). Besides
direct and indirect dark matter detection experiments, its nature may
well be revealed by experiments such as the Large Hadron Collider
(LHC) at CERN, but its physical properties may prove to be harder to
pin down without astronomical input. One way of testing this is to
study the amount of substructure in dark matter halos on scales
1-100'', which can be done using high order galaxy shape measurements
and strong lensing with DUNE. Weak lensing measurements can constrain
the total neutrino mass and number of neutrino species through
observations of damping of the matter power spectrum on small
scales. Combining DUNE measurements with Planck data would reduce the
uncertainty on the sum of neutrino masses to 0.04eV, and may therefore
make the first measurement of the neutrino mass (Kitching et al 2008).
\subsection{Understanding the Seeds of Structure Formation}
It is widely believed that cosmic structures originated from vacuum
fluctuations in primordial quantum fields stretched to cosmic scales
in a brief period during inflation. In the most basic inflationary
models, the power spectrum of these fluctuations is predicted to be
close to scale-invariant, with a spectral index $n_s$ and amplitude
parameterised by $\sigma_8$. As the Universe evolved, these initial
fluctuations grew. CMB measurements probe their imprint on the
radiation density at $z \sim 1100$. Density fluctuations continued to
grow into the structures we see today. Weak lensing observations with
DUNE will lead to a factor of 20 improvement on the initial
conditions as compared to CMB alone (see Table~\ref{tableC2}).
\subsection{Understanding Einstein's Gravity}
Einstein's General Theory of Relativity, the currently accepted theory
of gravity, has been well tested on solar system and galactic
scales. Various modifications to gravity on large scales (e.g. by
extra dimensions, superstrings, non-minimal couplings or additional
fields) have been suggested to avoid dark matter and dark energy. The
weak lensing measurements of DUNE will be used to test the underlying
theory of gravity, using the fact that modified gravity theories
typically alter the relation between geometrical measures and the
growth of structure (see Fig.~\ref{figc1}). DUNE can measure the
growth factor exponent $\gamma$ with a precision of 2\%.
\begin{table}
\caption{Dark energy and initial conditions figures of merit for DUNE
and Planck.}
\label{tableC2}
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
& \multicolumn{2}{c}{Dark Energy Sector}\vline & \multicolumn{2}{c}{Initial Conditions
Sector}\vline& DE &IC \\ \hline
& $\Delta w_n$ & $\Delta w_a$ & $\Delta \sigma_8$ & $\Delta n_s$ & FoM & FoM \\ \hline
Planck &0.03 &3 &0.03 &0.004&11 &8,000 \\ \hline
DUNE &0.02 &0.1&0.006 &0.01 &500 &17,000 \\ \hline
DUNE + Planck& 0.01&0.05&0.002&0.003&2000&170,000\\ \hline
\multicolumn{5}{l}{Factor improvement of DUNE + Planck over Planck only}&180&20 \\ \hline
\end{tabular}
\end{table}
\par\bigskip
Meeting the above cosmological objectives necessitates an
extra-galactic all-sky survey (DASS-EX) in the visible/NIR with
galaxies at a median redshift of $z \sim 1$. To this survey will be
added a shallower survey of the Galactic plane (DASS-G) which will
complete the coverage to the full sky, as well as a medium-deep survey
of 100 deg$^{\rm 2}$ (DUNE-MD) and a pencil beam microlensing survey
for planets in the Galactic bulge.
Focussed on the dark sector, DUNE will produce an invaluable broad survey
legacy. DASS will cover a 10000 times larger area than other
optical/near-IR surveys of the same or better resolution, will be 4 mag
deeper than the GAIA photometry and six times higher resolution than
the SDSS. In the infrared, DASS-EX will be nearly 1000 times deeper
(in J) than the all-sky 2MASS survey with an effective
search volume
which will be 5000-fold that of the UKIDSS large area survey currently
underway, and 500-fold that of the proposed VISTA Hemisphere
Survey. It would take VISTA 8000 years to match DASS-EX depth and
20,000 deg$^{\rm 2}$ area coverage. DUNE-MD will bridge the gap
between DASS-EX and expected JWST surveys.
\subsection{Tracking the Formation of Galaxies and AGN with DUNE}
While much progress has been made in understanding the formation of
large scale structure, there are still many problems in forming
galaxies within this structure with the observed luminosity function
and morphological properties. This is now a major problem in
astronomy. Obtaining deep high spatial resolution near-IR images will
be central to the study of galaxy morphology and clustering. A large area
survey is required for rare but important events, such as the merger
rate of very massive galaxies. DUNE will deliver this key capability.
Using DUNE's weak lensing maps, we will study the relationship between
galaxy mass and light, the bias, by mapping the total mass density and
the stellar mass and luminosity.
Galaxy clusters are the largest scale signposts of
structure formation.
While at present only a few massive clusters
at $z>1$ are known, DUNE will find hundreds of Virgo-cluster-mass
objects at $z>2$, and several thousand clusters of M=$1-2 \times
10^{13}$M$_{\odot}$. The latter are the likely environments in which the peak
of QSO activity at $z\sim2$ takes place, and hold the empirical
key to understanding the heyday of QSO activity.
Using the Lyman-dropout technique in the near-IR, the DUNE-MD survey
will be able to detect the most luminous objects in the early Universe
($z>6$): $\sim 10^4$ star-forming galaxies at $z\sim8$ and up to
$10^3$ at $z\sim10$, for SFRs $>30-100$M$_{\odot}$/yr. It will also be able
to detect significant numbers of high-$z$ quasars: up to $10^4$ at
$z\sim7$, and $10^3$ at $z\sim9$. These will be central to understanding the
reionisation history of the Universe.
DUNE will also detect a very large number of strong lensing systems:
about $10^5$ galaxy-galaxy lenses, $10^3$ galaxy-quasar lenses and
5000 strong lensing arcs in clusters (see Menegetthi et al. 2007). It
is also estimated that several tens of galaxy-galaxy lenses will be
\emph{double} Einstein rings (Gavazzi et al. 2008), which are powerful
probes of the cosmological model as they simultaneously probe several redshifts.
In addition, during the course of the DUNE-MD survey (over 6 months),
we expect to detect $\sim 3000$ Type Ia Supernovae with redshifts up
to $z\sim0.6$ and a comparable number of Core Collapse SNe (Types II
and Ib/c) out to $z\sim0.3$. This will lead to measurements of SN rates,
thus providing information on their progenitors and on the star
formation history.
\subsection{Studying the Milky Way with DUNE}
DUNE is also primed for a breakthrough in Galactic astronomy. DASS-EX,
complemented by the shallower survey of the Galactic plane (with
$|b|<30\; deg$) will provide all-sky high resolution (0.23'' in the wide red
band, and 0.4'' in YJH) deep imaging of the stellar content of the
Galaxy, allowing the deepest detailed structural studies of the thin
and thick disk components, the bulge/bar, and the Galactic halo
(including halo stars in nearby galaxies such as M31 and M33) in bands
which are relatively insensitive to dust in the Milky Way.
DUNE will be little affected by extinction and will supersede by
orders of magnitude all of the ongoing surveys in terms of angular
resolution and sensitivity.
DUNE will thus
enable the most comprehensive stellar census of late-type dwarfs and
giants, brown dwarfs, He-rich white dwarfs, along with detailed
structural studies, tidal streams and merger fragments. DUNE's
sensitivity will also open up a new discovery space for rare stellar
and low-temperature objects via its H-band imaging. Currently, most
Galactic structure studies are focussed on the halo. Studying the
Galactic disk components requires the combination of spatial
resolution (crowding) and dust-penetration (H-band) that DUNE can
deliver.
Beyond our Milky Way, DUNE will also yield the most detailed and
sensitive survey of structure and substructure in nearby galaxies,
especially of their outer boundaries, thus constraining their merger
and accretion histories.
\subsection{Search for Exo-Planets}
The discovery of extrasolar planets is the most exciting development
in astrophysics over the past decade, rivalled only by the discovery
of the acceleration of the Universe. Space observations (e.g. COROT, KEPLER), supported by
ground-based high-precision radial velocity surveys will probe
low-mass planets (down to $1 M_\oplus$). DUNE is also
perfectly suited to trace the distribution of matter on very small
scales, those of the normally invisible extrasolar planets. Using the
microlensing effect, DUNE can provide a statistical census of
exoplanets in the Galaxy with masses over $0.1 M_\oplus$ from orbits
of 0.5 AU to free-floating objects. This includes analogues to all the
solar system's planets except for Mercury, as well as most planets
predicted by planet formation theory. Microlensing is the temporary
magnification of a Galactic bulge source star by the gravitational
potential of an intervening lens star passing near the line of
sight. A planet orbiting the lens star will alter the
magnification, showing a brief flash or a dip in the observed light
curve of the star (see Fig. \ref{figc5}).
Because of atmospheric seeing (limiting the monitoring to large source
stars), and poor duty cycle even using networks, ground-based
microlensing surveys are only able to detect a few to 15 $M_\oplus$
planets in the vicinity of the Einstein ring radius (2-3 AU). The high
angular resolution of DUNE, and the uninterrupted visibility and NIR
sensitivity afforded by space observations will provide detections of
microlensing events using G and K bulge dwarf stars as sources, and
can therefore detect planets down to $0.1-1 M_\oplus$ from orbits of
0.5 AU. Moreover, there will be a very large number of transiting hot
Jupiters detected towards the Galactic bulge as 'free' ancillary
science. A space-based microlensing survey is thus the only way to
gain a comprehensive census and understanding of the nature of
planetary systems and their host stars. We also underline that the
planet search scales linearly with the surface of the focal plane and
the duration of the experiment.
\begin{figure}
\begin{center}
\includegraphics[width=7cm, height=6cm, angle=0]{duneml.ps}
\caption{Exoplanet discovery parameter space (planet mass vs orbit size)
showing for reference the 8 planets from our solar system (labeled as letters),
those detected by Doppler wobble (T), transit (circle), and
microlensing. We outline regions that can be probed by different
methods. Note the uniqueness of the parameter space probed by DUNE
compared to other techniques.
}
\label{figc5}
\end{center}
\end{figure}
\section{DUNE Surveys: the Need for All-Sky Imaging from Space}
There are two key elements to a high precision weak lensing survey: a
large survey area to provide large statistics, and the control of
systematic errors. Figure \ref{fig:req} shows that to
reach our dark energy target (2\% error on $w_n$) a survey of 20,000
square degrees with galaxies at $z\sim1$ is required. This result is
based on detailed studies showing that, for a fixed observing time,
the accuracy of all the cosmological parameters is highest for a wide
rather than deep survey (Amara \& Refregier 2007a, 2007b). This required
survey area drives the choice of a 1.2m telescope and a combined
visible/NIR FOV of 1 deg$^{\rm 2}$ for the DUNE baseline.
\begin{figure}
\includegraphics[width=0.5\textwidth,height=5cm,angle=0]{surveyarea.ps}\includegraphics[width=0.5\textwidth,angle=0]{des_dens.eps}
\caption{Left: Error on the dark energy equation of state
parameter $w_n$ as a function of weak lensing survey area (in deg$^{\rm
2}$) for several shape measurement systematic levels (assuming 40
galaxies/amin$^2$ with a median redshift $z_m$=1). An area of 20,000 deg$^2$
and a residual systematic shear variance of
$\sigma_{sys}^2<10^{-7}$ is required to achieve the DUNE objective
(error on $w_n$ better than 2\%).
Right (from Abdalla et
al. 2007): Photometric redshift performance for a DES-like ground survey
with and without the DUNE NIR bands (J,H). The deep NIR photometry,
only achievable in space, results in a dramatic reduction of the
photometric redshift errors and catastrophic failures which are needed for all
the probes (weak lensing, BAO, CC, ISW).}
\label{fig:req}
\end{figure}
Ground based facilities plan to increase area coverage, but they will
eventually be limited by systematics inherent in ground based
observations (atmospheric seeing which smears the image, instabilities
of ground based PSFs, telescope flexure and wind-shake, and
inhomogeneous photometric calibrations arising from seeing
fluctuations). The most recent ground-based wide field imagers
(e.g. MegaCam on CFHT, and Subaru) have a stochastic variation of the
PSF ellipticity of the order of a few percent, i.e. of the same order
of magnitude as the sought-after weak lensing signal. Current
measurements have a residual shear systematics variance of
$\sigma_{sys}^2 \sim 10^{-5}$, as indicated with both the results of
the STEPII program and the scatter in measured values of
$\sigma_8$. This level of systematics is comparable to the statistical
errors for surveys that cover a few tens of square degree
(Fig. \ref{fig:req}). As seen on the figure, to reach DUNE's dark
energy targets, the systematics must be at the level of
$\sigma_{sys}^2 \sim 10^{-7}$, 100 times better than the current level
(see Amara \& Refregier 2007b for details). While ground based surveys
may improve their systematics control, reaching this level will be an
extreme challenge. One ultimate limit arises from the finite
information contained in the stars used to calibrate the PSF, due to
noise and pixelisation. Simulations by Paulin-Henriksson et al. (2008)
show that, to reach our systematic targets, the PSF must remain
constant (within a tolerance of 0.1\%) over 50 arcmin$^2$ (which
corresponds to $\sim 50$ stars). While this is prohibitive from the
ground, we have demonstrated during a CNES phase 0 study (Refregier et
al. 2006), that with the careful design of the instrument, this can be
achieved from space. In addition to shape measurements, wide area
imaging surveys use photometric information to measure the redshift
of galaxies in the images. Accurate measurements of the photometric
redshifts require the addition of NIR photometry (an example of this
is shown in Fig. \ref{fig:req}, right panel, and also Abdalla et
al. 2007). Such depths in the NIR cannot be achieved from the ground
over wide area surveys and can only be done from space.
\par\bigskip
To achieve the scientific goals listed in section \ref{section2}, DUNE will
perform four surveys detailed in the following and in Table \ref{tableC5}.
\subsection{Wide Extragalactic Survey: DASS-EX }
To measure dark energy to the required precision, we need to make
measurements over the entire extra-galactic sky to a depth which
yields 40 gal/arcmin$^2$ useful for lensing with a median redshift
$z_m \simeq 0.9$. This can be achieved with a survey (DASS-EX) that
has AB-magnitude limit of 24.5 (10$\sigma$ extended source) in a broad
red visible filter (R+I+Z). Based on the fact that DUNE focuses on
observations that cannot be obtained from the ground, the wide survey
relies on two unique factors that are enabled by space: image quality
in the visible and NIR photometry. Central to shape measurements for
weak lensing, the PSF of DUNE needs to be sampled with better than 2-2.5
pixels per FWHM (Paulin-Henriksson et al. 2008), to be constant over
50 stars around each galaxy (within a tolerance of $\sim 0.1\%$ in
shape parameters), and to have a wavelength dependence which can be
calibrated. Accurate measurement of the redshift of distant galaxies
($z \sim 1$) requires photometry in the NIR where galaxies have a
distinctive feature (the 4000\,\AA\ break). Deep NIR photometry
requires space observations. The Y, J and H bands are the ideal
complement to ground-based surveys (see Abdalla et al. 2007
for discussion), as recommended by the ESO/ESA Working Group on
Fundamental Cosmology (Peacock et al. 2006).
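As a simple back-of-the-envelope check of these survey statistics, the sketch below converts the DASS-EX area and the usable source density quoted above into a total galaxy count, consistent with the several billion galaxies required for the lensing measurement.
\begin{verbatim}
# Back-of-the-envelope source count for DASS-EX:
# 40 lensing-usable galaxies per arcmin^2 over 20,000 deg^2.
ARCMIN2_PER_DEG2 = 60 * 60        # 3600 arcmin^2 per deg^2
area_deg2 = 20000
density_per_arcmin2 = 40

n_galaxies = density_per_arcmin2 * area_deg2 * ARCMIN2_PER_DEG2
print("%.2e" % n_galaxies)        # ~2.9e9 galaxies, i.e. several billion
\end{verbatim}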
\subsection{Legacy surveys: DASS-G, DUNE-MD, and DUNE-ML}
We propose to allocate six months to a medium deep survey (DUNE-MD) with
an area of 100 deg$^2$ to magnitudes of 26 in Y, J and H, located at
the N and S ecliptic poles. This survey can be used to calibrate DUNE
during the mission, by constructing it from a stack of $>30$
sub-images to achieve the required depths. DUNE will also perform a
wide Galactic survey (DASS-G) that will complement the 4$\pi$ coverage
of the sky and a microlensing survey (DUNE-ML). Both surveys require
short exposures. Together with the DASS-EX, these surveys need good
image quality with a low level of stray light. A summary of all the
surveys is shown in Table \ref{tableC5}.
\begin{table}
\caption{Requirements and geometry for the four DUNE surveys.}
\label{tableC5}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{Wide Extragalactic Survey DASS-EX (must)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{20,000 sq degrees -- $|b|> 30 \deg$
}\\ \hline
\multirow{2}{*}{Survey Strategy}& Contiguous patches & \multicolumn{2}{|c|}{$> 20 \deg \times 20 \deg$} \\ \cline{2-4}
& Overlap & \multicolumn{2}{|c|}{$ 10 \%$} \\ \hline
\multicolumn{2}{|c|}{Shape Measurement Channel}& R+I+Z (550-920nm) & R+I+$Z_{AB}$ $<$24.5 (10$\sigma$ ext) \\ \hline
\multicolumn{2}{|c|}{ } & Y (920-1146nm) & $Y_{AB}<$24 (5$\sigma$ point) \\ \cline{3-4}
\multicolumn{2}{|c|}{Photometric Channel} & J (1146-1372nm) & $J_{AB}<$24 (5$\sigma$ point) \\ \cline{3-4}
\multicolumn{2}{|c|}{ } & H (1372-1600nm) & $H_{AB}<$24 (5$\sigma$ point) \\ \hline
\multirow{2}{*}{PSF} & Size \& Sample & 0.23" FWHM & $>$ 2.2 pixels per FWHM \\\cline{2-4}
& Stability & \multicolumn{2}{|c|}{within tolerance of 50 stars} \\ \hline
\multirow{2}{*}{Image Quality} & Dead pixels &\multicolumn{2}{|c|}{$<$ 5 \% of final image}\\ \cline{2-4}
& Linearity &\multicolumn{2}{|c|}{Instrument calibratable for $1<$S/N$<1000$}\\ \hline
\multicolumn{4}{|c|}{\textbf{Medium Deep Survey DUNE-MD (should)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ $\sim$100 sq degrees
-- Ecliptic poles}\\ \hline
Survey Strategy& Contiguous patches & \multicolumn{2}{|c|}{Two patches each $7 \deg \times 7 \deg$} \\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{ $Y_{AB}, \; J_{AB}, \; H_{AB} <$26 (5$\sigma$ point) -- for stack}\\ \hline
\multicolumn{2}{|c|}{PSF} & \multicolumn{2}{c|}{Same conditions as the wide survey} \\ \hline
\multicolumn{4}{|c|}{\textbf{Wide Galactic Survey DASS-G (should)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ 20,000 sq degrees --
$|b| < 30 \deg$}\\ \hline
\multicolumn{2}{|c|}{Shape Measurement Channel}&\multicolumn{2}{|c|}{$R+I+Z_{AB}<23.8$ ($5\sigma$ ext)}\\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{ $Y_{AB}, \; J_{AB}, \; H_{AB} <$22 (5$\sigma$ point)}\\ \hline
PSF & Size & \multicolumn{2}{|c|}{$< 0.3"$ FWHM}\\ \hline
\multicolumn{4}{|c|}{\textbf{Microlensing Survey DUNE-ML (could)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ 4 sq degrees -- Galactic bulge}\\ \hline
Survey Strategy & Time sampling & \multicolumn{2}{|c|}{Every 20 min -- 1 month blocks -- total of 3 months}\\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{
$Y_{AB}, \; J_{AB}, \; H_{AB} <$22 (5$\sigma$ point) -- per visit}\\ \hline
PSF & Size & \multicolumn{2}{|c|}{$< 0.4"$ FWHM}\\
\hline
\end{tabular}
\end{table}
\section{Mission Profile and Payload Instrument}
The mission design of DUNE is driven by the need for the stability of
the PSF and large sky coverage. PSF stability puts stringent
requirements on pointing and thermal stability during the observation
time. The 20,000 square degrees of DASS-EX demands high operational
efficiency, which can be achieved using a drift scanning mode (or Time
Delay Integration, TDI, mode) for the CCDs in the visible focal
plane. TDI mode necessitates the use of a counter-scanning mirror to
stabilize the image in the NIR focal plane channel.
The baseline for DUNE is a Geosynchronous Earth orbit (GEO), with a
low inclination and altitude close to a standard geostationary
orbit. Based on a CNES Phase 0 study, this solution was chosen to meet
both the high science telemetry needs and the spacecraft low
perturbation requirements. This orbit also provides substantial
launch flexibility, and simplifies the ground segment.
From the PSF size and sampling requirements, a
baseline figure for the line-of-sight stability is 0.5 pixel (smearing
MTF $> 0.99$ at cut-off frequency), with the stability budget to be
shared between the telescope thermal stability (0.35 pixel) and the
attitude orbit control system (AOCS) (0.35 pixel). This implies a
line-of-sight stability better than 0.2 $\mu$rad over 375 sec (the
integration time across one CCD). This stringent requirement calls for
a minimization of external perturbations, which mainly consist of solar radiation
pressure and gravity gradient torques. A gravitational torque of 20
$\mu$Nm is acceptable, and requires an orbit altitude of at least
25,000 km. The attitude and orbit control design is based on proportional
actuators.
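The way the 0.5 pixel budget is split can be illustrated with a small sketch; the quadratic (root-sum-square) combination of the two 0.35 pixel allocations is an assumption here (the text does not state how the contributions are combined), and the 0.102 arcsec pixel scale is taken from the visible focal plane section below.
\begin{verbatim}
import math

thermal_px = 0.35   # telescope thermal stability share [pixel]
aocs_px    = 0.35   # attitude and orbit control share [pixel]
print(math.hypot(thermal_px, aocs_px))          # ~0.49 pixel, i.e. the 0.5 pixel budget

pixel_arcsec   = 0.102                          # VFP plate scale (see below)
arcsec_to_urad = math.pi / (180 * 3600) * 1e6
print(aocs_px * pixel_arcsec * arcsec_to_urad)  # ~0.17 urad over 375 s,
                                                # consistent with the 0.2 urad figure
\end{verbatim}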
A stable thermal environment is required for the payload ($\sim 10$ mK
variation over 375 s); hence the mission design requires a permanent
cold face for the focal plane radiators and an orbit that minimizes
heat load from the Earth. This
could be achieved by having the whole payload in a regulated temperature
cavity.
A primary driver for the GEO orbit choice is the high data rate -- the
orbit must be close enough to Earth to facilitate the transmission of
the high amount of data produced by the payload every day (about
1.5~Tbits) given existing ground network facilities, while minimizing
communication downlink time during which science observations cannot
be made (with a fixed antenna).
The effects of the radiation environment at GEO, for both CCD bulk
damage induced by solar protons and false detections induced by
electrons with realistic shielding, is considered acceptable. However,
DUNE specific radiation tests on CCD sensors will be required as an
early development for confirming the measurement robustness to proton
induced damage. A High Elliptical Orbit (HEO) operating beyond the
radiation belt is an alternative in case electron radiation or thermal
constraints prevent the use of GEO.
The payload for DUNE is a passively cooled 1.2m diameter Korsch-like
three-mirror telescope with two focal planes, visible and NIR covering
1 square degree. Figure~\ref{fig:4.1} provides an overview of the
payload. The Payload module design uses Silicon Carbide (SiC)
technology for the telescope optics and structure. This provides low
mass, high stability, low sensitivity to radiation and the ability to
operate the entire instrument at cold temperature, typically below 170
K, which will be important for cooling the large focal planes. The two
FPAs, together with their passive cooling structures are isostatically
mounted on the M1 baseplate. Also part of the payload are the de-scan
mirror mechanism for the NIR channel and the additional payload data
handling unit (PDHU).
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth,angle=0]{telescope.eps}
\caption{Overview of all payload elements. }
\label{fig:4.1}
\end{center}
\end{figure}
\subsection{Telescope}
The telescope is a Korsch-like $f/20$ three-mirror telescope. After
the first two mirrors, the optical bundle is folded just after passing
the primary mirror (M1) to reach the off-axis tertiary mirror. A
dichroic element located near the exit pupil of the system provides
the spectral separation of the visible and NIR channels. For the NIR,
the de-scan mechanism close to the dichroic filter allows for a
largely symmetric configuration of both spectral channels. The whole
instrument fits within a cylinder of 1.8m diameter and
2.65m length. The payload mass is approximately 500~kg, with 20\%
margin, and average/peak power estimates are 250/545~W.
Simulations have shown that the overall wavefront error (WFE) can be
contained within 50 nm r.m.s, compatible with the required
resolution. Distortion is limited to 3--4 $\mu$m, introducing a fixed
(hence accessible to calibration) displacement of 0.15 $\mu$rad in
the object space. The need to have a calibration of the PSF shape
error better than 0.1 \% over 50 arcmin$^2$ leads to a thermal
stability of $\sim 10$ mK over 375 s. Slow variations of solar
incidence angle on the sunshield for DUNE will not significantly
perturb the payload performance, even for angles as large as 30
degrees.
\subsection{Visible FPA}
The visible Focal Plane Array (VFP) consists of 36 large format
red-sensitive CCDs, arranged in a 9x4 array (Figure~\ref{fig:4.2})
together with the associated mechanical support structure and
electronics processing chains. Four additional CCDs dedicated to
the AOCS measurements are located at the edge of
the array. All CCDs are 4096$\times$4096 pixel red-enhanced e2v CCD203-82 devices
with square 12 $\mu$m pixels. The physical size of the array is
466$\times$233 mm, which corresponds to $1.09\deg \times 0.52 \deg$. Each pixel is
0.102 arcsec, so that the PSF is well sampled in each direction over
approximately 2.2 pixels, including all contributions. The VFP
operates in the red band from 550-920nm. This bandpass is produced by
the dichroic. The CCDs are 4-phase devices, so they can be clocked in
$1/4$ pixel steps. The exposure duration on each CCD is 375s,
permitting a slow readout rate and minimising readout noise. Combining
4 rows of CCDs will then provide a total exposure time of 1500s.
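A short sketch of the focal-plane arithmetic quoted in this subsection may be helpful; the numbers are taken from the text, and the difference between the computed active field and the quoted $1.09\deg \times 0.52\deg$ reflects the inter-CCD gaps.
\begin{verbatim}
n_cols, n_rows = 9, 4        # 9x4 CCD array
n_pix      = 4096            # pixels per CCD side
pix_um     = 12.0            # pixel pitch [micron]
pix_arcsec = 0.102           # plate scale [arcsec/pixel]

ccd_mm = n_pix * pix_um / 1000.0                  # ~49 mm per CCD side
print(n_cols * ccd_mm, n_rows * ccd_mm)           # ~442 x 197 mm active silicon
                                                  # (466 x 233 mm including gaps)
print(n_cols * n_pix * pix_arcsec / 3600.0,       # ~1.04 deg along the long axis
      n_rows * n_pix * pix_arcsec / 3600.0)       # ~0.46 deg along the short axis

t_ccd = 375.0                                     # TDI crossing time per CCD [s]
print(n_rows * t_ccd)                             # 4 rows -> 1500 s total exposure
\end{verbatim}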
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth,angle=0]{VisFPA.eps}
\caption{Left: The VFP assembly with the 9x4 array of CCDs
and the 4 AOCS sensors on the front (blue) and the warm electronics
radiator at the back (yellow). Right: An expanded view of the VFP
assembly, including the electronics modules and thermal hardware (but
excluding the CCD radiator). Inset: The e2v CCD203-82 4kx4k pixels
shown here in a metal pack with flexi-leads for electrical
connections. One of the flexi-leads will be removed. }
\label{fig:4.2}
\end{center}
\end{figure}
The VFP will be used by the spacecraft in a closed-loop system to
ensure that the scan rate and TDI clocking are synchronised. The two
pairs of AOCS CCDs provide two speed measurements on relatively bright
stars (V $\sim 22-23$). The DUNE VFP is largely a self-calibrating
instrument. For the shape measurements, stars of the appropriate
magnitude will allow the PSF to be monitored for each CCD including
the effects of optical distortion and detector alignment.
Radiation-induced charge transfer inefficiency will modify the PSF and
will also be self-calibrated in orbit.
\subsection{NIR FPA}
The NIR FPA consists of a 5 x 12 mosaic of 60 Hawaii 2RG detector
arrays from Teledyne, NIR bandpass filters for the wavelength bands Y,
J, and H, the mechanical support structure, and the detector readout
and signal processing electronics (see Figure~\ref{fig:4.3}). The FPA
is operated at a maximum temperature of 140 K for low dark current of
0.02$e^-$/s. Each array has 2048 x 2048 square pixels of 18 $\mu$m
size resulting in a 0.15 x 0.15 arcsec$^2$ field of view (FOV) per pixel.
The mosaic has a physical size of 482 x 212 mm, and covers a
FOV of $1.04^\circ \times 0.44^\circ$ or 0.46 square degrees. The
HgCdTe Hawaii 2RG arrays are standard devices sensitive in the 0.8 to
1.7 $\mu$m wavelength range.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth,angle=0]{NIRFPA.eps}
\caption{Layout of the NIR FPA (MPE/Kayser-Threde). The 5
x 12 Hawaii 2RG Teledyne detector arrays (shown in the inset) are
installed in a molybdenum structure}
\label{fig:4.3}
\end{center}
\end{figure}
As the spacecraft is scanning the sky, the image motion on the NIR FPA
is stabilised by a de-scanning mirror during the integration time of
300s or less per NIR detector. The total integration time of 1500 s
for the $0.4^\circ$-high field is split among five detector rows and three
wavelength bands along the scan direction. The effective integration
times are 600 s in J and H, and 300 s in Y. For each array, the
readout control, A/D conversion of the video output, and transfer of
the digital data via a serial link is handled by the SIDECAR ASIC
developed for JWST. To achieve the limiting magnitudes defined by the
science requirements within these integration times, a minimum of 13
reads are required. Data are
processed in the dedicated unit located in the service module.
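The NIR field of view and integration-time split quoted above can be checked with a similar sketch; the allocation of two detector rows each to J and H and one to Y is inferred from the quoted effective times rather than stated explicitly.
\begin{verbatim}
n_rows, n_cols = 5, 12       # 5 x 12 mosaic of Hawaii 2RG arrays
n_pix      = 2048
pix_arcsec = 0.15            # field of view per pixel

print(n_cols * n_pix * pix_arcsec / 3600.0,   # ~1.02 deg (quoted 1.04 deg with gaps)
      n_rows * n_pix * pix_arcsec / 3600.0)   # ~0.43 deg (quoted 0.44 deg)

t_total   = 1500.0
t_per_row = t_total / n_rows                  # 300 s per detector row
print(t_per_row)                              # 300 s in Y (one row)
print(2 * t_per_row)                          # 600 s in J and in H (two rows each)
\end{verbatim}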
\section{Basic spacecraft key factors}
The spacecraft platform architecture is fully based on well-proven and
existing technologies. The mechanical, propulsion, and solar array
systems are reused from Venus Express (ESA) and Mars Express. All the
AOCS, $\mu$-propulsion, Power control and DMS systems are reused from
GAIA. Finally, the science telemetry system is a direct reuse from the
PLEIADES (CNES) spacecraft. All TRLs are therefore high and all
technologies are either standard or being developed for GAIA (AOCS for
instance).
\subsection{Spacecraft architecture and configuration}
The spacecraft driving requirements are: (1) Passive cooling of both
visible and NIR focal planes below 170 K and 140 K, respectively; (2)
the PSF stability requirement, which translates to line of sight (LOS)
and payload thermal stability requirements; and (3) the high science
data rate. The spacecraft consists of a Payload Module (PLM) that
includes the instrument (telescope hardware, focal plane assemblies
and on board science data management) and a Service Module (SVM). The
SVM design is based on the Mars Express parallelepiped structure that
is 1.7 m $\times$ 1.7 m $\times$ 1.4 m, which accommodates all
subsystems (propulsion, AOCS, communication, power, sunshield, etc) as
well as the PLM.
The spacecraft platform and all its technologies are either standard
or being developed within the GAIA programme (e.g. AOCS).
\subsection{Sunshield and Attitude Control}
The nominal scan strategy assumes a constant, normal ($0\deg$)
incidence of the sun on the sunshield, while allowing a sun incidence
angle of up to $30\deg$ to provide margin, flexibility for data
transmission manoeuvres and potential for further scan
optimisation. The sunshield is a ribbed, MLI-covered central frame
fixed to the platform. The satellite rotates in a drift
scan-and-sweep-back approach, where the spacecraft is brought back to
the next scan position after each $20\deg$ strip. The scan rate is $1.12
\deg$ per hour, such that every day, one complete strip is scanned and
transmitted to ground.
Due to the observation strategy and the fixed high gain antenna (HGA),
the mission requires a high level of attitude manoeuvrability.
During data collection, the spacecraft is
rotated slowly about the sunshield axis. The slow scan control
requirements are equivalent to three-axis satellite control. The
line-of-sight stability requirement is 0.2 $\mu$rad over 375s (the
integration time for one CCD) and is driven by optical quality and PSF
smearing, and will be partially achieved using
a continuous PSF calibration using the stars located in the
neighborhood (50 arcmin$^2$) of each observed galaxy. Detailed
analyses show that DUNE high pointing performance is comparable in
difficulty to that achieved on GAIA during science
observations. Similarly to GAIA, two pairs of dedicated CCD in the
visible focal plane are used for measuring the spacecraft attitude
speed vector. Hybridisation of the star tracker and payload
measurements is used to reduce the noise injected by the star tracker
in the loop. For all other operational phases and for the transition
from coarse manoeuvres to the science observation mode, the attitude
is controlled using the Mars Express propulsion system. The attitude
estimation is based on using two star trackers (also used in science
observing mode), six single-axis gyros and two sun sensors for
monitoring DUNE pointing during manoeuvres with a typical accuracy
better than 1 arcmin.
\subsection{Functional architecture: propulsion and electrical systems}
The star signal collected in the instrument is spread on the focal
plane assembly and transformed into a digital electrical signal which
is transferred to the Payload Data Handling Unit (PDHU), based on
Thales/Alenia Space heritage. Power management and regulation are
performed by the Power Conditioning \& Distribution Unit (PCDU), and
based on the GAIA program. Electrical power is generated by two solar
arrays (2.7 m$^2$ each), as used in the Mars Express and Venus Express
ESA spacecraft. The control of their orientation is based on the
orientation of the whole spacecraft
towards the Sun. The panels are populated with GaAs cells.
The RF architecture is divided into two parts: the TT\&C system
(S-band) and a dedicated payload telemetry system (X-band, in the
Earth Exploration Service, EES, band). The allocated bandwidth for payload
telemetry is 375 MHz and high-rate transmitters already exist for
this purpose. The X-band 155 Mbits/s TMTHD modulator can be reused
from Pleiades spacecraft. A single fixed HGA of 30 cm diameter can be
used (re-used from Venus Express). The RF power required is 25 W, which
also enables the re-use of the solid state power amplifier (SSPA) from
Pleiades. The transmitted science data volume is estimated at 1.5
Tbits per day. The baseline approach is to store the science data
on board in the PDHU and then to downlink the
data twice per day. This can be achieved naturally twice per orbit at
06h and 18h local time and using the rotation degree of freedom about
the satellite-sun axis for orienting the antenna towards the ground
station. The total transmission duration is less than 3 hours. The
spacecraft attitude variation during transmission is less than 30 deg
(including AOCS margins). A 20 kg hydrazine propellant budget is
required. Should the operational orbit change to HEO, a dual-frequency
(S-band + X-band) 35 m ESOC antenna could meet the
mission needs, with an increased HGA diameter (70 cm).
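A rough check of the downlink sizing (an illustrative sketch based only on the figures quoted above):
\begin{verbatim}
data_per_day_bits = 1.5e12     # ~1.5 Tbit of science data per day
x_band_rate_bps   = 155e6      # 155 Mbit/s X-band modulator (Pleiades reuse)

downlink_h = data_per_day_bits / x_band_rate_bps / 3600.0
print(downlink_h)              # ~2.7 h per day, below the 3 h stated,
                               # i.e. ~1.3 h for each of the two daily contacts
\end{verbatim}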
The required power on the GEO orbit is 1055 W. The
sizing case is the science mode after eclipse with battery
charging. Electrical power is generated by the two solar arrays of 2.7
m$^2$ each. With a $30\deg$ solar angle,
the solar arrays can generate up to 1150 W at the end of life. The battery
has been sized, in a preliminary
approach, for the eclipse case (64 Ah needed).
\section{Science Operations and Data Processing}
The DUNE operational scenario follows the lines of a survey-type
project. The satellite will operate autonomously except for defined
ground contact periods during which housekeeping and science telemetry
will be downlinked, and the commands needed to control spacecraft and
payload will be uploaded. The DUNE processing pipeline is inspired by
the Terapix pipeline used for the CFHT Legacy Survey. The total
amount of science image data expected from DUNE is $\sim 370$
Terapixels (TPx): 150TPx from the Wide, 120TPx for 3 months of the
microlensing survey, 60TPx for the 3 months of the Galactic plane
survey, and 40TPx for 6 months deep survey. Based on previous
experience, we estimate an equal amount of calibration data (flat
fields, dark frames, etc.) will be taken over the course of the
mission. This corresponds to 740TB, roughly 20 times the amount of
science data for CFHT during 2003-2007.
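The bookkeeping behind these totals can be summarized in a short sketch; the conversion of 740 Terapixels into roughly 740 TB assumes of order one byte per stored pixel, which is our reading of the text rather than an explicitly stated figure.
\begin{verbatim}
science_tpx = {"wide (DASS-EX)": 150, "microlensing (3 months)": 120,
               "galactic plane (3 months)": 60, "medium deep (6 months)": 40}
total_science  = sum(science_tpx.values())  # 370 Terapixels
total_with_cal = 2 * total_science          # equal amount of calibration data
print(total_science, total_with_cal)        # 370, 740 Terapixels (~740 TB)
\end{verbatim}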
There are four main activities necessary for the data processing,
handling, and data organisation of the DUNE surveys:
\begin{enumerate}
\item software development: image and catalogue processing,
quality control, image and catalogue handling tools,
pipeline development, survey monitoring, data archiving and
distribution, numerical simulations, image simulations;
\item processing operation: running the pipeline, quality control and
quality assessment operation and versioning,
pipeline/software/database update and maintenance;
\item data archiving and data distribution: data and meta-data
products and product description, public user interface, external data
(non-DUNE) archiving and distribution, public outreach;
\item computing resources: data storage, cluster architecture,
GRID technology.
\end{enumerate}
\section{Conclusion: DUNE's Impact on Cosmology and Astrophysics}
ESA's Planck mission will bring unprecedented precision to the
measurement of the high redshift Universe. This will leave the dark
energy dominated low redshift Universe as the next frontier in high
precision cosmology. Constraints from the radiation perturbation in
the high redshift CMB, probed by Planck, combined with density
perturbations at low redshifts, probed by DUNE, will form a complete
set for testing all sectors of the cosmological model. In this
respect, a DUNE+Planck programme can be seen as the next significant
step in testing, and thus challenging, the standard model of
cosmology. Table \ref{tableC2} illustrates just how precise the
constraints on theory are expected to be: DUNE will offer high
potential for ground-breaking discoveries of new physics, from dark
energy to dark matter, initial conditions and the law of gravity. Our
understanding of the Universe will be fundamentally altered in a
post-DUNE era, with ESA's science programmes at the forefront of these
discoveries.
As described above, the science goals of DUNE go far beyond the
measurement of dark energy. It is a mission which:
(i) measures both effects of dark energy (i.e. the expansion history
of the Universe and the growth of structure) by using weak lensing as the
central probe; (ii) places this high precision measurement of dark
energy within a broader framework of high precision cosmology by
constraining all sectors of the standard cosmology model (dark matter,
initial conditions and Einstein gravity); (iii) through a collection
of unique legacy surveys is able to push the frontiers of the
understanding of galaxy
evolution and the physics of the Local Group; and finally (iv) is able
to obtain information on some of the lowest-mass extrasolar planets
accessible to astronomy, which could include mirror Earths.
DUNE has been selected jointly with SPACE (Cimatti et al. 2008) in
ESA's Cosmic Vision programme for an assessment phase, which led to
the merged Euclid concept.
\begin{acknowledgements}
We thank CNES for support on an earlier version of the DUNE mission
and EADS/Astrium, Alcatel/Alenia Space, as well as Kayser/Threde for
their help in the preparation of the ESA proposal.
\end{acknowledgements}
\bibliographystyle{spbasic}
\section{Introduction}
Two-dimensional (2D) materials such as graphene, silicene and germanene are zero-gap semimetals \cite{w11,cta09}, and their charge carriers are massless fermions\cite{nltzzq12}. Graphene has been studied extensively because of its superior mechanical, optical and electronic properties \cite{ajyr19, kjna19, lkk19, lmhlz19, m18, qxz18, rilts19, sjhs19, thxwl20,ytycqw19, zxldxl, pky17,geh17,z16, mwh18}. Various dopings of graphene have been realized for new applications, such as sulfur doping for micro-supercapacitors\cite{csqmj19}, nitrogen-doped graphene quantum dots for photovoltaics\cite{hgrpgc19}, silicon nanoparticles embedded in N-doped few-layer graphene for lithium-ion batteries\cite{lyzsgc}, and germanium implanted into graphene for single-atom catalysis\cite{tmbfbk18}. Theoretical and experimental investigations of graphene-like structures such as silicene and germanene have also been carried out extensively \cite{vpqafa,loekve,dsbf12,wqdzkd17,cxhzlt17,ctghhg17}. Silicene and germanene have been grown on Au(111)\cite{cstfmg17}, Ag(111)\cite{jlscl16} and Ir(111)\cite{mwzdwl13}, which encourages further study of these materials. Due to its buckled structure, silicene has physical properties different from graphene, such as a higher surface reactivity\cite{jlscl16} and a band gap tunable by an external electric field, which is highly favorable for nanoelectronic devices\cite{nltzzq12,dzf12}. However, the formation of imperfections during the synthesis of silicene is usually inevitable and influences the magnetic and electronic properties of the material\cite{lwtwjl15}. There are also studies of atoms such as lithium, aluminum and phosphorus doped into silicene to achieve a wide variety of electronic and optical properties\cite{mmt17,dcmj15}.
Recently, the simulation and fabrication of 2D silicon-carbon compounds known as siligraphene (Si$_m$C$_n$) have received increasing attention due to their extraordinary electronic and optical properties. For example, SiC$_2$ siligraphene, which has been synthesized experimentally\cite{lzlxpl15}, is a promising anchoring material for lithium-sulfur batteries\cite{dlghll15}, a promising metal-free catalyst for the oxygen reduction reaction\cite{dlghll15}, and a novel donor material for excitonic solar cells\cite{zzw13}. Also, graphitic siligraphene g-SiC$_3$ can be driven into different electrical phases by strain: it behaves as a semimetal under compressive strain up to 8\%, becomes a semiconductor with a direct band gap (1.62 eV) at 9\% compressive strain, and becomes a semiconductor with an indirect band gap (1.43 eV) at 10\% compressive strain \cite{dlghll15}. Moreover, g-SiC$_5$ has semimetallic properties and can be used as a gas sensor for air pollutants\cite{dwzhl17}. Furthermore, SiC$_7$ siligraphene has good photovoltaic applications \cite{heakba19} and can be used as a high-capacity hydrogen storage material\cite{nhla18}. It shows superior structural, dynamical and thermal stability compared to other types of siligraphene and is a novel donor material with extraordinary sunlight absorption\cite{dzfhll16}. The structural and electronic properties of silicene-like SiX and XSi$_3$ (X = B, C, N, Al, P) honeycomb lattices have been investigated\cite{dw13}, and the planarity and non-planarity of g-SiC$_n$ and g-Si$_n$C (n = 3, 5, and 7) structures have been studied\cite{tllpz19}.
The excellent properties of siligraphene\cite{dzfhll16} motivated us to study CSi$_7$ and GeSi$_7$, in order to find a new approach to controlling the buckling and band gap of silicene and to obtain new electronic and optical properties. Here we call CSi$_7$ carbosilicene and GeSi$_7$ germasilicene. We choose carbon and germanium for CSi$_7$ and GeSi$_7$, respectively, because these atoms, like silicon, have four valence electrons in their outermost orbitals. Using density functional theory, we show that both structures are stable, with CSi$_7$ more stable than GeSi$_7$. The carbon atom in CSi$_7$ decreases the buckling, while the germanium atom in GeSi$_7$ increases it. It is shown that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap\cite{plgkl20}, whereas GeSi$_7$, similar to silicene, is a semimetal. We also investigate the effects of strain and show that for CSi$_7$ compressive strain increases the band gap and tensile strain decreases it. At sufficient tensile strain ($>3.7\%$), the band gap of CSi$_7$ vanishes and the material changes from semiconducting to metallic. As a result, the band gap of CSi$_7$ can be tuned by strain, and this material can be used in straintronic devices such as strain sensors and strain switches. For GeSi$_7$, strain does not have any significant effect. On the other hand, GeSi$_7$ has a high dielectric constant and can be used as a 2D material with a high dielectric constant in advanced capacitors. Finally, we investigate the optical properties of these materials and find that the light absorption of both CSi$_7$ and GeSi$_7$ is significantly greater than that of silicene. Because of their high absorption, CSi$_7$ and GeSi$_7$ can be considered good candidates for solar cell applications. It is worth mentioning that germasilicene, GeSi$_7$, is a new 2D material proposed and studied in this paper, while carbosilicene, CSi$_7$, has been proposed previously as a member of the siligraphene family, but only its band structure has been studied\cite{tllpz19,plgkl19,plgkl20}.
The rest of the paper is organized as follows. In Sec. II, the method of calculation is introduced, and the results and discussion are given in Sec. III. Section IV contains the summary and conclusion.
\section{Method of calculations}
Density functional theory (DFT) calculations are performed using projector-augmented wave pseudopotentials \cite{b94} as implemented in the Quantum ESPRESSO code\cite{gc09}. The exchange-correlation functional is described by the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE)\cite{pbe96}. After convergence tests, an optimum cutoff energy of 80 Ry is obtained. Brillouin-zone integrations are performed using Monkhorst-Pack sampling\cite{mp76} with an optimized $12\times12\times1$ reciprocal-space mesh. First, the unit cells and atomic positions of both CSi$_7$ and GeSi$_7$ are optimized, and then their electronic properties are determined by calculating the density of states and band structure. Their optical properties are determined by calculating the absorption coefficient and the real and imaginary parts of the dielectric function.
\section{Results and discussion}
\subsection{Structural properties}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig1.eps}
\caption{(a) Top view of silicene and (b) Si$_8$ unit cells. (c) Side view of Si$_8$ unit cell.
}
\label{fig1}
\end{figure}
By doubling the silicene unit cell [see Fig.~\ref{fig1}(a)] in the x and y directions, the Si$_8$ cell is constructed [see Fig.~\ref{fig1}(b)] in a hexagonal lattice (i.e., $\alpha=\beta=90^{\circ},\gamma=120^{\circ}$). Physically, silicene and Si$_8$ have the same properties, since the periodic repetition of either cell yields the same silicene monolayer. In this work, the Si$_8$ unit cell is considered because CSi$_7$ and GeSi$_7$ can be constructed from it by replacing one silicon atom with a carbon or a germanium atom. After relaxation, the bond length of Si$_8$ is $d=2.4 \;\AA$ [see Fig.~\ref{fig1}(a)], the lattice parameters are $|a|=|b|=7.56 \; \AA$ and $|c|=14.4 \;\AA$ [see Figs.~\ref{fig1}(b) and \ref{fig1}(c)], and the buckling parameter is $\Delta=0.44 \;\AA$ [see Fig.~\ref{fig1}(c)], in good agreement with previous works\cite{wwltxh,gzj12,zlyqzw16}. Here $|c|$ is chosen large enough to ensure that there is no interaction between adjacent periodic layers.
To construct the carbosilicene (CSi$_7$) unit cell, a silicon atom is replaced with a carbon atom as shown in Fig.~\ref{fig2}. Because of the structural symmetry of the CSi$_7$ monolayer (see Fig.~\ref{fig6}), the position of the impurity atom is not important, and our calculations indeed give the same ground-state energy for all eight possible impurity positions. After relaxation, the optimum lattice parameters for the CSi$_7$ unit cell are $|a|= |b|=7.49\; \AA$ and $|c|= 12.86 \; \AA$. Fig.~\ref{fig2} shows this structure before and after relaxation; for a more detailed discussion, the atoms are labeled in this figure. It is observed that the Si-C bond length (i.e., $d_{2-4}=1.896 \; \AA$) is shorter than the Si-Si bond lengths (i.e., $d_{1-2}=2.317,\; d_{1-3}=2.217 \; \AA$) because of sp$^2$ hybridization. Also, unlike in graphene, the hexagonal ring is not a regular hexagon, due to the electronegativity difference between C and Si atoms\cite{dzfhll16}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth,clip=true]{Fig2.eps}
\caption{Top view of CSi$_7$ unit cell (a) before and (b) after relaxation. Carbon atom is shown by yellow sphere and silicon atoms by blue spheres.
}
\label{fig2}
\end{figure}
Fig.~\ref{fig3} shows the side view of the CSi$_7$ unit cell. After relaxation, the buckling parameter between atoms 1 and 3 ($\Delta_{1-3}$) is 0.1 $\AA$, whereas that for atoms 2 and 4 ($\Delta_{2-4}$) is 0.39 $\AA$. Thus, CSi$_7$ has a structure with two different buckling parameters, and carbon atoms can be used to decrease the buckling of silicene. Silicene has one buckling parameter and two sublattices\cite{zyy15}, while carbosilicene has two buckling parameters and thus three sublattices: one for the carbon atoms and two for the silicon atoms.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig3.eps}
\caption{Side view of CSi$_7$ unit cell (a) before and (b) after relaxation.
}
\label{fig3}
\end{figure}
If we instead replace a silicon atom with a germanium atom, as shown in Fig.~\ref{fig4}, we obtain the germasilicene (GeSi$_7$) structure. As can be seen in this figure, the optimized parameters are $|a|$=$|b|$=7.8 $\AA$ and $|c|$=11.98 $\AA$, and the Si-Ge bond length and lattice constants are greater than those of Si-Si. Also, comparing the bond lengths and lattice parameters of GeSi$_7$ and CSi$_7$, those of GeSi$_7$ are significantly greater than those of CSi$_7$, which is due to the larger atomic number and thus larger atomic radius of germanium relative to carbon\cite{zyihm18}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth,clip=true]{Fig4.eps}
\caption{ Top view of GeSi$_7$ unit cell (a) before and (b) after relaxation. Here germanium atom is shown by purple color.
}
\label{fig4}
\end{figure}
The buckling parameters of the germasilicene structure are depicted in Fig.~\ref{fig5}. After relaxation, we find $\Delta_{2-4}=0.53\; \AA$ and $\Delta_{1-3}=0.43 \; \AA$. Therefore GeSi$_7$, like CSi$_7$, has a structure with two different buckling parameters, and the germanium impurity atom increases the buckling of silicene. Bond lengths and other structural parameters after relaxation are listed in Table~\ref{tab1}.
\begin{table*}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
&$|a|=|b|$& $|c|$ &$d_{1-2}$ &$d_{2-4}$& $d_{1-3}$ &$\Delta_{2-4}$& $\Delta_{1-3}$& $\Delta_d$ \\
\hline
Si$_8$ & 7.65 & 14.4 & 2.4 & 2.4 & 2.4 & 0.44 & 0.44 & 0 \\
\hline
CSi$_7$ & 7.49 & 12.86 & 2.317 & 1.896 & 2.217 & 0.1 & 0.39 & 0.29\\
\hline
GeSi$_7$ & 7.8 & 11.98 & 2.287 & 2.34 & 2.297 & 0.53 & 0.43 & 0.1\\
\hline
\end{tabular}
\caption{Optimum lattice parameters $|a|$, $|b|$ and $|c|$, bond lengths $d_{1-2}$, $d_{2-4}$ and
$d_{1-3}$ and buckling parameters $\Delta_{2-4}$, $\Delta_{1-3}$ and $\Delta_d$. All values are in Angstrom.
}
\label{tab1}
\end{table*}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig5.eps}
\caption{ Side view of GeSi$_7$ unit cell (a) before and (b) after relaxation
}
\label{fig5}
\end{figure}
We now introduce a new parameter for buckling as
\begin{equation}
\Delta_d=|\Delta_{2-4}-\Delta_{1-3}|
\label{eq1}
\end{equation}
which quantifies the difference between the two buckling parameters. The value of $\Delta_d$ for CSi$_7$ (i.e., 0.29 $\AA$) is greater than that for GeSi$_7$ (i.e., 0.062 $\AA$), which means the carbon impurity atom has a greater impact on the silicene buckling than germanium. This effect can be explained by the electronegativity difference\cite{drsbf13}. The Pauling electronegativities are 2.55 \cite{ipl09,zhhkl20}, 1.9 \cite{gperdd20} and 2.01 \cite{mgyz19} for carbon, silicon and germanium, respectively. The electronegativity difference with silicon is therefore 0.65 for CSi$_7$ and 0.11 for GeSi$_7$; the larger difference in CSi$_7$ favors in-plane hybridized bonding and reduces the buckling in comparison with the other cases.
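A minimal sketch reproducing Eq. (\ref{eq1}) and the electronegativity differences discussed above (the buckling values are taken from Table~\ref{tab1}; note that the rounded table entries for GeSi$_7$ give 0.10 $\AA$, while the text quotes 0.062 $\AA$ from the unrounded bucklings):
\begin{verbatim}
def delta_d(d_a, d_b):
    """Absolute difference between the two buckling parameters (Delta_d)."""
    return abs(d_a - d_b)

print(delta_d(0.10, 0.39))   # CSi7  -> 0.29 Angstrom
print(delta_d(0.53, 0.43))   # GeSi7 -> 0.10 Angstrom (rounded table values)

chi = {"C": 2.55, "Si": 1.90, "Ge": 2.01}     # Pauling electronegativities
print(abs(chi["C"] - chi["Si"]))              # 0.65 for CSi7
print(abs(chi["Ge"] - chi["Si"]))             # 0.11 for GeSi7
\end{verbatim}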
Fig.~\ref{fig6} shows the charge density of CSi$_7$ and GeSi$_7$ monolayers. The charge density of a silicene monolayer is also shown for comparison [see Fig.~\ref{fig6}(a)]. The high charge density around the carbon and germanium impurity atoms [see Figs.~\ref{fig6}(b) and \ref{fig6}(c)] indicates charge transfer from the silicon atoms to the impurity atoms. The electron accumulation around the impurity atoms also indicates ionic-covalent bonding in the CSi$_7$ and GeSi$_7$ structures, owing to the electronegativity difference.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig6.eps}
\caption{ Charge density of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$
}
\label{fig6}
\end{figure}
Now we calculate the cohesive and formation energies of these structures. The cohesive energy is -4.81 eV/atom and -4.32 eV/atom for CSi$_7$ and GeSi$_7$, respectively. The negative cohesive energies of CSi$_7$ and GeSi$_7$ mean that these structures will not decompose into their constituent atoms. The more negative the cohesive energy, the more stable the structure, so CSi$_7$ is more stable than GeSi$_7$. The calculated cohesive energy for silicene is -4.89 eV/atom, in good agreement with previous studies \cite{gperdd20,mgyz19}, showing that CSi$_7$ is a stable structure with a cohesive energy very close to that of silicene.
Our calculations show that the formation energies of the CSi$_7$ and GeSi$_7$ structures are +0.16 eV/atom and -0.005 eV/atom, respectively. The formation of CSi$_7$ (GeSi$_7$) from its constituents is therefore endothermic (exothermic), given the positive (negative) value of the formation energy. On the other hand, the positive formation energy for CSi$_7$ reflects the high stability of this structure, while the negative or nearly zero value for GeSi$_7$ is attributed mostly to the high reactivity associated with silicene\cite{dw13}.
\subsection{Electronic properties}
To investigate the electronic properties of CSi$_7$ and GeSi$_7$, we first compare the band structures of silicene, CSi$_7$ and GeSi$_7$ monolayers; the results are shown in Fig.~\ref{fig7}. As can be seen in this figure, like graphene and silicene, GeSi$_7$ is a semimetal (or zero-gap semiconductor) with a Dirac cone at the $K$ point, because the $\pi$ and $\pi^*$ bands cross linearly at the Fermi energy $E_F$. These band structures indicate that the charge carriers in silicene and GeSi$_7$ behave like massless Dirac fermions\cite{zlyqzw16}. In contrast to GeSi$_7$, CSi$_7$ is a semiconductor with an indirect band gap of 0.24 eV in the $K-\Gamma$ direction, which is significantly smaller than its direct band gap (i.e., 0.5 eV in the $K-K$ direction).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth,clip=true]{Fig7.eps}
\caption{ Band structure of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$.
}
\label{fig7}
\end{figure}
For a better comparison, enlarged band structures of silicene, CSi$_7$ and GeSi$_7$ are shown in Fig.~\ref{fig8}. It is seen that, at the $K$ point, silicene and GeSi$_7$ have similar band structures with zero band gap, whereas CSi$_7$ has a band gap. In the Dirac cone of graphene and silicene the $\pi$ and $\pi^*$ bands are formed from the same atomic species\cite{dw13}, but in GeSi$_7$ these bands are formed from two different atoms. To determine the Fermi velocity $v_F$, the bands of silicene and GeSi$_7$ are fitted linearly near the Fermi level using the relation $E_{k+K}=\gamma k$; the Fermi velocity is then given by $v_F=\gamma/ \hbar$. Our calculations give $v_F = 5\times10^5$ m/s for silicene (in good agreement with previous works\cite{dw13,wd13}) and $4.8\times10^5$ m/s for GeSi$_7$. A comparison of the Fermi velocities in silicene and GeSi$_7$ indicates that the Ge atoms in GeSi$_7$ do not have a significant effect on the Fermi velocity. The total density of states (DOS) is also shown in Fig.~\ref{fig8} and agrees well with the band structure.
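The Fermi-velocity extraction described above amounts to a linear fit near the $K$ point; the following sketch illustrates the procedure with placeholder band data (the arrays below are illustrative, not our computed DFT bands):
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34       # J s
eV   = 1.602176634e-19       # J

k = np.array([0.00, 0.01, 0.02, 0.03, 0.04]) * 1e10    # |k - K| in 1/m
E = np.array([0.00, 0.033, 0.066, 0.099, 0.132]) * eV  # band energy above E_F

gamma = np.polyfit(k, E, 1)[0]   # slope of E(K+k) = gamma * k  [J m]
v_F = gamma / hbar               # Fermi velocity [m/s]
print("%.2e" % v_F)              # ~5e5 m/s for the slope chosen here
\end{verbatim}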
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth,clip=true]{Fig8.eps}
\caption{ Enlarged band structure and total DOS of silicene, CSi$_7$ and GeSi$_7$.
}
\label{fig8}
\end{figure}
We now investigate the effect of strain on the band structures of CSi$_7$ and GeSi$_7$; the results are shown in Fig.~\ref{fig9}. As can be seen in Figs.~\ref{fig9}(a) and \ref{fig9}(b), compressive strain has an important effect on the band structure of CSi$_7$ but no significant effect on GeSi$_7$ [compare these figures with Figs.~\ref{fig7}(b) and \ref{fig7}(c)]. Under compressive strain, both the direct and indirect band gaps of CSi$_7$ increase, from 0.5 eV and 0.24 eV to 0.52 eV and 0.44 eV, respectively. For GeSi$_7$, however, the zero band gap remains unchanged and compressive strain cannot open a gap. Fig.~\ref{fig9}(c) shows the direct and indirect band gaps of CSi$_7$ as a function of both compressive and tensile strain. Both gaps increase with increasing compressive strain and decrease with increasing tensile strain. The variation of the band gaps with the strain S is nearly linear and can be described by $E_g=-0.017S+0.447$ for the direct gap and $E_g=-0.059 S+0.227$ for the indirect one. With or without strain, the direct band gap remains significantly larger than the indirect one and therefore has no important effect on the electronic transport properties of CSi$_7$. In contrast to GeSi$_7$, strain is thus an important factor for tuning the band gap of CSi$_7$: when the tensile strain increases above $\sim 3.7\%$, the band gap of CSi$_7$ disappears and this 2D material becomes a metal [see Fig.~\ref{fig9}(c)]. This property of CSi$_7$ is important for straintronic devices such as strain switches and strain sensors.
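Using the linear fits just quoted, one can estimate where the indirect gap closes; this is only a sketch based on those fits (tensile strain is taken as positive S, compressive as negative), and the fitted values differ slightly from the band-structure values quoted above, while the closure strain is consistent with the $\sim 3.7\%$ threshold quoted elsewhere in the paper.
\begin{verbatim}
def direct_gap(S):    # E_g in eV, strain S in percent
    return -0.017 * S + 0.447

def indirect_gap(S):
    return -0.059 * S + 0.227

print(0.227 / 0.059)                     # ~3.8% tensile strain closes the indirect gap
print(direct_gap(-3), indirect_gap(-3))  # ~0.50 eV and ~0.40 eV at -3% (compressive)
\end{verbatim}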
\begin{figure}[ht!]
\centering
\includegraphics[width=.98\linewidth,clip=true]{Fig9.eps}
\caption{ Band structure of (a) CSi$_7$ and (b) GeSi$_7$ under compressive strain with value -3$\%$. (c) Energy gap variation of CSi$_7$ versus both compressive and tensile strains.
}
\label{fig9}
\end{figure}
\subsection{Optical properties}
The complex dielectric function $\epsilon=\epsilon_r+i\epsilon_i$ can be calculated for both polarizations of light: (i) parallel (x direction) and (ii) perpendicular (z direction), where $\epsilon_r$ is the real part and $\epsilon_i$ the imaginary part of the dielectric function. This function is the key quantity for calculating the optical properties of materials. For instance, the real and imaginary parts of the refractive index (i.e., $n=n_r+in_i$) can be written as\cite{w}
\begin{equation}
n_r=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}+\epsilon_r}{2}}
\label{eq2}
\end{equation}
and
\begin{equation}
n_i=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}-\epsilon_r}{2}}
\label{eq3}
\end{equation}
respectively. The absorption coefficient $\alpha$ is given by\cite{w}
\begin{equation}
\alpha=\frac{2\omega n_i}{C}
\label{eq4}
\end{equation}
where C is the speed of light in vacuum. The real part of the dielectric function of CSi$_7$, GeSi$_7$ and silicene is depicted in Fig.~\ref{fig10} for the x and z directions. The figure shows that $\epsilon _r$ is anisotropic, since the curves differ between the two directions. The zero of the real part (where $\epsilon_r=0$) corresponds to the plasma energy (frequency), which for these materials is located at 4.3 eV (1.04 PHz) for the x-direction. It can be seen from Figs.~\ref{fig10}(a) and \ref{fig10}(b) that the static dielectric constant (the value of the real part of the dielectric function at zero frequency) in the x-direction is 12.3 for silicene and CSi$_7$ and 30 for GeSi$_7$, while in the z-direction it is 2.4, 2 and 2.9 for silicene, CSi$_7$ and GeSi$_7$, respectively. Thus, in both directions GeSi$_7$ has the largest static dielectric constant, which is also significantly greater than that of graphene (1.25 for the z-direction and 7.6 for the x-direction\cite{rdj14}). According to the energy density of a capacitor (i.e., $u=\epsilon\epsilon_0 E^2/2$, where E is the electric field inside the capacitor), increasing the dielectric constant $\epsilon$ increases the energy density u. Materials with a high dielectric constant have therefore attracted a lot of attention because of their potential applications in transistor gates, non-volatile ferroelectric memories and integrated capacitors\cite{tic06}. Among 2D materials, graphene has been used for electrochemical capacitors\cite{cls13} and supercapacitors\cite{vrsgr08}. Since GeSi$_7$ has a high dielectric constant, it can be used as a 2D material with a high-performance dielectric in advanced capacitors.
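For reference, Eqs. (\ref{eq2})-(\ref{eq4}) can be evaluated with a few lines of code; the arrays below are placeholders for the computed dielectric function, not our actual data.
\begin{verbatim}
import numpy as np

c = 2.998e8                                  # speed of light [m/s]

def optical_constants(omega, eps_r, eps_i):
    """Return (n_r, n_i, alpha) from the real/imaginary dielectric function."""
    mod = np.sqrt(eps_r**2 + eps_i**2)
    n_r = np.sqrt((mod + eps_r) / 2.0)       # Eq. (2)
    n_i = np.sqrt((mod - eps_r) / 2.0)       # Eq. (3)
    alpha = 2.0 * omega * n_i / c            # Eq. (4), absorption coefficient [1/m]
    return n_r, n_i, alpha

omega = 2.0 * 1.602e-19 / 1.055e-34          # angular frequency of a 2 eV photon
print(optical_constants(omega, np.array([5.0]), np.array([3.0])))
\end{verbatim}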
\begin{figure}[ht!]
\centering
\includegraphics[width=.7\linewidth,,clip=true]{Fig10.eps}
\caption{ Comparison of real part of dielectric function for CSi$_7$ and GeSi$_7$ (a) in x direction and (b) in z direction. The graphs of silicene are also shown in this figure for comparison.
}
\label{fig10}
\end{figure}
Fig.~\ref{fig11} shows the absorption coefficient $\alpha$ of CSi$_7$ and GeSi$_7$; the absorption coefficient of silicene is also shown for comparison and agrees with previous works\cite{hkb18,cjlmty16}. There are two peaks for CSi$_7$: one located at 1.18 eV (infrared region) and the other at 1.6 eV (visible region). The peak for silicene (at 1.83 eV) lies in the visible region (1.8-3.1 eV). Thus the carbon atom enhances the absorption and shifts the absorption edge from the visible to the infrared region, because it breaks the symmetry of the silicene structure and opens a narrow band gap in the silicene band structure. For GeSi$_7$ there is an absorption peak in the visible region (at 2.16 eV), and its height is larger than those of silicene and CSi$_7$. The sunlight spectrum includes different wavelength ranges, and the absorption of each part has specific applications. For example, ultraviolet-visible absorption spectrophotometry is used in pharmaceutical analysis, clinical chemistry, environmental analysis and inorganic analysis\cite{rop88}. The near-infrared ($\lambda$ = 800 to 1100 nm, or E = 1.55 eV to 1.13 eV) and infrared ($\lambda > 1100$ nm, or E $< 1.13$ eV) regions are used for solar cells\cite{wrmss,sgzlgp20}, latent fingerprint development\cite{bsckm19}, brain stimulation and imaging\cite{cwcgy20}, photothermal therapy\cite{hhwl19}, photocatalysis\cite{qzzwz10} and photobiomodulation\cite{whwlh17}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75\linewidth,clip=true]{Fig11.eps}
\caption{ Absorption coefficient for silicene, CSi$_7$ and GeSi$_7$.
}
\label{fig11}
\end{figure}
On the other hand, the sunlight received by the Earth comprises 5\% ultraviolet, 45\% infrared and 50\% visible radiation \cite{hs11}. We therefore investigate the area under the absorption curves of CSi$_7$ and GeSi$_7$ in the visible (from 1.8 to 3.1 eV), near-infrared (from 1.13 to 1.55 eV) and infrared ($<1.13$ eV) regions. Fig.~\ref{fig12} shows these areas for silicene, CSi$_7$ and GeSi$_7$. As can be seen in this figure, the absorption of CSi$_7$ in all three spectral regions, as well as its total absorption, is significantly greater than that of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near-infrared region, but the total absorption of GeSi$_7$ is still significantly greater than that of silicene. For comparison, we also calculate the integrated absorption in the infrared region for siligraphene SiC$_7$, a material studied recently\cite{dzfhll16}. The value for siligraphene in the infrared region is 2.7, which shows that CSi$_7$ (8.78) and GeSi$_7$ (6.31) have more than twice the infrared absorption of siligraphene.
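The band-integrated absorption used for this comparison is simply the area under the absorption curve in each energy window; a minimal sketch of the integration (with a dummy spectrum standing in for the computed one) is:
\begin{verbatim}
import numpy as np

energy = np.linspace(0.1, 4.0, 400)         # photon energy [eV]
alpha  = np.exp(-(energy - 2.0)**2)         # placeholder absorption curve

def band_area(E, a, lo, hi):
    m = (E >= lo) & (E <= hi)
    return np.trapz(a[m], E[m])

print(band_area(energy, alpha, 0.1, 1.13))   # infrared
print(band_area(energy, alpha, 1.13, 1.55))  # near infrared
print(band_area(energy, alpha, 1.8, 3.1))    # visible
\end{verbatim}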
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth,clip=true]{Fig12.eps}
\caption{ Areas under the absorption curve for silicene, CSi$_7$ and GeSi$_7$ in infrared, near infrared and visible spectrum regions.
}
\label{fig12}
\end{figure}
\section{Summary and conclusion}
We studied the structural, electronic and optical properties of the CSi$_7$ and GeSi$_7$ structures using density functional theory within the Quantum ESPRESSO code. We showed that the carbon atom in CSi$_7$ decreases the buckling, whereas the germanium atom in GeSi$_7$ increases it, which suggests a new way to control the buckling in silicene-like structures. Both structures are stable, but CSi$_7$ is more stable than GeSi$_7$. Band structure and DOS plots show that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap, whereas GeSi$_7$, similar to silicene, is a semimetal. Strain does not have any significant effect on GeSi$_7$, but for CSi$_7$ compressive strain increases the band gap and tensile strain decreases it. At sufficient tensile strain ($> 3.7 \%$) the band gap closes, and the semiconducting character of CSi$_7$ changes to metallic. As a result, the band gap of CSi$_7$ can be tuned and controlled by strain, and this material can be used in straintronic devices such as strain sensors and strain switches. Furthermore, we investigated optical properties of CSi$_7$ and GeSi$_7$ such as the static dielectric constant and light absorption. GeSi$_7$ has a high dielectric constant relative to CSi$_7$, silicene and graphene and can be used as a 2D material with a high-performance dielectric in advanced capacitors. The light absorption of CSi$_7$ in the near-infrared, infrared and visible regions, as well as its total absorption, is significantly greater than that of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near-infrared region, but its total absorption is significantly greater than that of silicene. Because of their high absorption, CSi$_7$ and GeSi$_7$ can be considered proper candidates for solar cell applications.
\section{Introduction}
De Sitter (dS) spacetime is among the most popular backgrounds in
gravitational physics. There are several reasons for this. First of all dS
spacetime is the maximally symmetric solution of Einstein's equation with a
positive cosmological constant. Due to the high symmetry numerous physical
problems are exactly solvable on this background. A better understanding of
physical effects in this background could serve as a handle to deal with
more complicated geometries. De Sitter spacetime plays an important role in
most inflationary models, where an approximately dS spacetime is employed to
solve a number of problems in standard cosmology \cite{Lind90}. More
recently astronomical observations of high redshift supernovae, galaxy
clusters and cosmic microwave background \cite{Ries07} indicate that at the
present epoch the universe is accelerating and can be well approximated by a
world with a positive cosmological constant. If the universe would
accelerate indefinitely, the standard cosmology would lead to an asymptotic
dS universe. In addition to the above, an interesting topic which has
received increasing attention is related to string-theoretical models of dS
spacetime and inflation. Recently a number of constructions of metastable dS
vacua within the framework of string theory are discussed (see, for
instance, \cite{Kach03,Silv07} and references therein).
There is no reason to believe that the version of dS spacetime which may
emerge from string theory will necessarily be the most familiar version
with symmetry group $O(1,4)$ and there are many different topological spaces
which can accept the dS metric locally. There are many reasons to expect
that in string theory the most natural topology for the universe is that of
a flat compact three-manifold \cite{McIn04}. In particular, in Ref. \cite%
{Lind04} it was argued that from an inflationary point of view universes
with compact spatial dimensions, under certain conditions, should be
considered a rule rather than an exception. The models of a compact universe
with nontrivial topology may play an important role by providing proper
initial conditions for inflation (for the cosmological consequences of the
nontrivial topology and observational bounds on the size of compactified
dimensions see, for example, \cite{Lach95}). The quantum creation of the
universe having toroidal spatial topology is discussed in \cite{Zeld84} and
in references \cite{Gonc85} within the framework of various supergravity
theories. The compactification of spatial dimensions leads to the
modification of the spectrum of vacuum fluctuations and, as a result, to
Casimir-type contributions to the vacuum expectation values of physical
observables (for the topological Casimir effect and its role in cosmology
see \cite{Most97,Bord01,Eliz06} and references therein). The effect of the
compactification of a single spatial dimension in dS spacetime (topology $%
\mathrm{R}^{D-1}\times \mathrm{S}^{1}$) on the properties of quantum vacuum
for a scalar field with general curvature coupling parameter and with
periodicity condition along the compactified dimension is investigated in
Ref. \cite{Saha07} (for quantum effects in braneworld models with dS spaces
see, for instance, \cite{dSbrane}).
In view of the above mentioned importance of toroidally compactified dS
spacetimes, in the present paper we consider a general class of
compactifications having the spatial topology $\mathrm{R}^{p}\times (\mathrm{%
S}^{1})^{q}$, $p+q=D$. This geometry can be used to describe two types of
models. For the first one $p=3$, $q\geqslant 1$, which corresponds to
a universe with Kaluza-Klein-type extra dimensions. As will be shown in
the present work, the presence of extra dimensions generates an additional
gravitational source in the cosmological equations which is of barotropic
type at late stages of the cosmological evolution. For the second model $D=3$
and the results given below describe how the properties of the universe with
dS geometry are changed by one-loop quantum effects induced by the
compactness of spatial dimensions. In quantum field theory on curved
backgrounds among the important quantities describing the local properties
of a quantum field and quantum back-reaction effects are the expectation
values of the field square and the energy-momentum tensor for a given
quantum state. In particular, the vacuum expectation values of these
quantities are of special interest. In order to evaluate these expectation
values, we construct firstly the corresponding positive frequency Wightman
function. Applying to the mode-sum the Abel-Plana summation formula, we
present this function as the sum of the Wightman function for the topology $%
\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ plus an additional term
induced by the compactness of the $(p+1)$th dimension. The latter is finite
in the coincidence limit and can be directly used for the evaluation of the
corresponding parts in the expectation values of the field square and the
energy-momentum tensor. In this way the renormalization of these quantities
is reduced to the renormalization of the corresponding quantities in
uncompactified dS spacetime. Note that for a scalar field on the background
of dS spacetime the renormalized vacuum expectation values of the field
square and the energy-momentum tensor are investigated in Refs. \cite%
{Cand75,Dowk76,Bunc78} by using various regularization schemes (see also
\cite{Birr82}). The corresponding effects upon phase transitions in an
expanding universe are discussed in \cite{Vile82,Alle83}.
The paper is organized as follows. In the next section we consider the
positive frequency Wightman function for dS spacetime of topology $\mathrm{R}%
^{p}\times (\mathrm{S}^{1})^{q}$. In sections \ref{sec:vevPhi2} and \ref%
{sec:vevEMT2} we use the formula for the Wightman function for the
evaluation of the vacuum expectation values of the field square and the
energy-momentum tensor. The asymptotic behavior of these quantities is
investigated in the early and late stages of the cosmological evolution. The
case of a twisted scalar field with antiperiodic boundary conditions is
considered in section \ref{sec:Twisted}. The main results of the paper are
summarized in section \ref{sec:Conc}.
\section{Wightman function in de Sitter spacetime with toroidally
compactified dimensions}
\label{sec:WF}
We consider a free massive scalar field with curvature coupling parameter $%
\xi $ on the background of $(D+1)$-dimensional de Sitter spacetime ($\mathrm{dS}%
_{D+1}$) generated by a positive cosmological constant $\Lambda $. The field
equation has the form%
\begin{equation}
\left( \nabla _{l}\nabla ^{l}+m^{2}+\xi R\right) \varphi =0, \label{fieldeq}
\end{equation}%
where $R=2(D+1)\Lambda /(D-1)$ is the Ricci scalar for $\mathrm{dS}_{D+1}$
and $\xi $ is the curvature coupling parameter. The special cases $\xi =0$
and $\xi =\xi _{D}\equiv (D-1)/4D$ correspond to minimally and conformally
coupled fields, respectively. The importance of these special cases stems
from the fact that in the massless limit the corresponding fields mimic the
behavior of gravitons and photons. We write the line element for $\mathrm{dS}%
_{D+1}$ in planar (inflationary) coordinates most appropriate for
cosmological applications:%
\begin{equation}
ds^{2}=dt^{2}-e^{2t/\alpha }\sum_{i=1}^{D}(dz^{i})^{2}, \label{ds2deSit}
\end{equation}%
where the parameter $\alpha $ is related to the cosmological constant by the
formula%
\begin{equation}
\alpha ^{2}=\frac{D(D-1)}{2\Lambda }. \label{alfa}
\end{equation}%
Below, in addition to the synchronous time coordinate $t$ we will also use
the conformal time $\tau $ in terms of which the line element takes
conformally flat form:%
\begin{equation}
ds^{2}=(\alpha /\tau )^{2}[d\tau ^{2}-\sum_{i=1}^{D}(dz^{i})^{2}],\;\tau
=-\alpha e^{-t/\alpha },\;-\infty <\tau <0. \label{ds2Dd}
\end{equation}%
We assume that the spatial coordinates $z^{l}$, $l=p+1,\ldots ,D$, are
compactified to $\mathrm{S}^{1}$ of the length $L_{l}$: $0\leqslant
z^{l}\leqslant L_{l}$, and for the other coordinates we have $-\infty
<z^{l}<+\infty $, $l=1,\ldots ,p$. Hence, we consider the spatial topology $%
\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, where $q=D-p$. For $p=0$, as a
special case we obtain the toroidally compactified dS spacetime discussed in
\cite{McIn04,Lind04,Zeld84}. The Casimir densities for a scalar field with
periodicity conditions in the case $q=1$ were discussed previously in Ref.
\cite{Saha07}.
In the discussion below we will denote the position vectors along the
uncompactified and compactified dimensions by $\mathbf{z}_{p}=(z^{1},\ldots
,z^{p})$ and $\mathbf{z}_{q}=(z^{p+1},\ldots ,z^{D})$. For a scalar field
with periodic boundary condition one has (no summation over $l$)%
\begin{equation}
\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=\varphi (t,%
\mathbf{z}_{p},\mathbf{z}_{q}), \label{periodicBC}
\end{equation}%
where $l=p+1,\ldots ,D$ and $\mathbf{e}_{l}$ is the unit vector along the
direction of the coordinate $z^{l}$. In this paper we are interested in the
effects of non-trivial topology on the vacuum expectation values (VEVs) of
the field square and the energy-momentum tensor. These VEVs are obtained
from the corresponding positive frequency Wightman function $%
G_{p,q}^{+}(x,x^{\prime })$ in the coincidence limit of the arguments. The
Wightman function is also important in consideration of the response of
particle detectors at a given state of motion (see, for instance, \cite%
{Birr82}). Expanding the field operator over the complete set $\left\{
\varphi _{\sigma }(x),\varphi _{\sigma }^{\ast }(x)\right\} $ of positive
and negative frequency solutions to the classical field equation, satisfying
the periodicity conditions along the compactified dimensions, the positive
frequency Wightman function is presented as the mode-sum:
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=\langle 0|\varphi (x)\varphi (x^{\prime
})|0\rangle =\sum_{\sigma }\varphi _{\sigma }(x)\varphi _{\sigma }^{\ast
}(x^{\prime }), \label{Wigh1}
\end{equation}%
where the collective index $\sigma $ specifies the solutions.
Due to the symmetry of the problem under consideration the spatial
dependence of the eigenfunctions $\varphi _{\sigma }(x)$ can be taken in the
standard plane-wave form, $e^{i\mathbf{k}\cdot \mathbf{z}}$. Substituting
into the field equation, we obtain that the time dependent part of the
eigenfunctions is a linear combination of the functions $\tau ^{D/2}H_{\nu
}^{(l)}(|\mathbf{k|}\tau )$, $l=1,2$, where $H_{\nu }^{(l)}(x)$ is the
Hankel function and
\begin{equation}
\nu =\left[ D^{2}/4-D(D+1)\xi -m^{2}\alpha ^{2}\right] ^{1/2}. \label{knD}
\end{equation}%
Different choices of the coefficients in this linear combination correspond
to different choices of the vacuum state. We will consider the de Sitter
invariant Bunch-Davies vacuum \cite{Bunc78} for which the coefficient for
the part containing the function $H_{\nu }^{(1)}(|\mathbf{k|}\tau )$ is
zero. The corresponding eigenfunctions satisfying the periodicity conditions
take the form
\begin{equation}
\varphi _{\sigma }(x)=C_{\sigma }\eta ^{D/2}H_{\nu }^{(1)}(k\eta )e^{i%
\mathbf{k}_{p}\cdot \mathbf{z}_{p}+i\mathbf{k}_{q}\cdot \mathbf{z}%
_{q}},\;\eta =\alpha e^{-t/\alpha }, \label{eigfuncD}
\end{equation}%
where we have decomposed the contributions from the uncompactified and
compactified dimensions with the notations%
\begin{eqnarray}
\mathbf{k}_{p} &=&(k_{1},\ldots ,k_{p}),\;\mathbf{k}_{q}=(k_{p+1},\ldots
,k_{D}),\;k=\sqrt{\mathbf{k}_{p}^{2}+\mathbf{k}_{q}^{2}},\; \notag \\
\;k_{l} &=&2\pi n_{l}/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D.
\label{kD1D2}
\end{eqnarray}%
Note that we have transformed the Hankel function so that its argument is
positive and, instead of the conformal time $\tau $, we have introduced the
variable $\eta $, which we will also refer to as the conformal time. The
eigenfunctions are specified by the set $\sigma =(\mathbf{k}%
_{p},n_{p+1},\ldots ,n_{D})$ and the coefficient $C_{\sigma }$ is found from
the standard orthonormalization condition
\begin{equation}
-i\int d^{D}x\sqrt{|g|}g^{00}\varphi _{\sigma }(x)\overleftrightarrow{%
\partial }_{\tau }\varphi _{\sigma ^{\prime }}^{\ast }(x)=\delta _{\sigma
\sigma ^{\prime }}, \label{normcond}
\end{equation}%
where the integration goes over the spatial hypersurface $\tau =\mathrm{const%
}$, and $\delta _{\sigma \sigma ^{\prime }}$ is understood as the Kronecker
delta for the discrete indices and as the Dirac delta-function for the
continuous ones. By using the Wronskian relation for the Hankel functions
one finds%
\begin{equation}
C_{\sigma }^{2}=\frac{\alpha ^{1-D}e^{i(\nu -\nu ^{\ast })\pi /2}}{%
2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}. \label{normCD}
\end{equation}
Having the complete set of eigenfunctions and using the mode-sum formula (%
\ref{Wigh1}), for the positive frequency Wightman function we obtain the
formula
\begin{eqnarray}
G_{p,q}^{+}(x,x^{\prime }) &=&\frac{\alpha ^{1-D}(\eta \eta ^{\prime
})^{D/2}e^{i(\nu -\nu ^{\ast })\pi /2}}{2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}%
\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}} \notag
\\
&&\times \sum_{\mathbf{n}_{q}=-\infty }^{+\infty }e^{i\mathbf{k}_{q}\cdot
\Delta \mathbf{z}_{q}}H_{\nu }^{(1)}(k\eta )[H_{\nu }^{(1)}(k\eta ^{\prime
})]^{\ast }, \label{GxxD}
\end{eqnarray}%
with $\Delta \mathbf{z}_{p}=\mathbf{z}_{p}-\mathbf{z}_{p}^{\prime }$, $%
\Delta \mathbf{z}_{q}=\mathbf{z}_{q}-\mathbf{z}_{q}^{\prime }$, and%
\begin{equation}
\sum_{\mathbf{n}_{q}=-\infty }^{+\infty }=\sum_{n_{p+1}=-\infty }^{+\infty
}\ldots \sum_{n_{D}=-\infty }^{+\infty }. \label{nqsum}
\end{equation}%
As a next step, we apply to the series over $n_{p+1}$ in (\ref{GxxD}) the
Abel-Plana formula \cite{Most97,Saha07Gen}%
\begin{equation}
\sideset{}{'}{\sum}_{n=0}^{\infty }f(n)=\int_{0}^{\infty
}dx\,f(x)+i\int_{0}^{\infty }dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}-1},
\label{Abel}
\end{equation}%
where the prime means that the term $n=0$ should be halved. It can be seen
that after the application of this formula the term in the expression of the
Wightman function which corresponds to the first integral on the right of (%
\ref{Abel}) is the Wightman function for dS spacetime with the topology $%
\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$, which, in the notations
given above, corresponds to the function $G_{p+1,q-1}^{+}(x,x^{\prime })$.
As a result one finds
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=G_{p+1,q-1}^{+}(x,x^{\prime })+\Delta
_{p+1}G_{p,q}^{+}(x,x^{\prime }). \label{G1decomp}
\end{equation}%
The second term on the right of this formula is induced by the compactness
of the $z^{p+1}$ - direction and is given by the expression
\begin{eqnarray}
\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&\frac{2\alpha ^{1-D}(\eta \eta
^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}%
_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\
&&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}-1)} \notag \\
&&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta
x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2}
\end{eqnarray}%
where $\mathbf{n}_{q-1}=(n_{p+2},\ldots ,n_{D})$, $I_{\nu }(x)$ and $K_{\nu
}(x)$ are the Bessel modified functions and the notation%
\begin{equation}
k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}(2\pi n_{l}/L_{l})^{2}
\label{knD1+2}
\end{equation}%
is introduced. In formula (\ref{GxxD2}), $V_{q-1}=L_{p+2}\cdots L_{D}$ is
the volume of the $(q-1)$-dimensional compact subspace. Note that the
combination of the Bessel modified functions appearing in formula (\ref%
{GxxD2}) can also be written in the form%
\begin{eqnarray}
K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta x)K_{\nu }(\eta
^{\prime }x) &=&\frac{2}{\pi }\sin (\nu \pi )K_{\nu }(\eta x)K_{\nu }(\eta
^{\prime }x) \notag \\
&&+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x)+K_{\nu }(\eta x)I_{\nu }(\eta
^{\prime }x), \label{eqformComb}
\end{eqnarray}%
which explicitly shows that this combination is symmetric under the
replacement $\eta \rightleftarrows \eta ^{\prime }$. In formula (\ref{GxxD2}%
) the integration with respect to the angular part of $\mathbf{k}_{p}$ can
be done by using the formula%
\begin{equation}
\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}}F(|%
\mathbf{k}_{p}|)=\frac{(2\pi )^{p/2}}{|\Delta \mathbf{z}_{p}|^{p/2-1}}%
\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p/2}F(|\mathbf{k}%
_{p}|)J_{p/2-1}(|\mathbf{k}_{p}||\Delta \mathbf{z}_{p}|), \label{intang}
\end{equation}%
where $J_{\mu }(x)$ is the Bessel function.
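As a simple numerical check of (\ref{intang}) (an illustrative Python sketch, assuming NumPy and SciPy are available), one can take the test function $F(|\mathbf{k}_{p}|)=e^{-\mathbf{k}_{p}^{2}}$, for which the left-hand side is the $p$-dimensional Fourier transform of a Gaussian, equal to $\pi ^{p/2}e^{-|\Delta \mathbf{z}_{p}|^{2}/4}$.
\begin{verbatim}
# Sketch: check Eq. (intang) for F(k) = exp(-k^2); the p-dimensional Fourier
# transform of this Gaussian is pi^(p/2) exp(-r^2/4).
import numpy as np
from scipy import integrate, special

def rhs(p, r):
    integrand = lambda k: k**(p / 2) * np.exp(-k**2) * special.jv(p / 2 - 1, k * r)
    val, _ = integrate.quad(integrand, 0, np.inf, limit=200)
    return (2 * np.pi)**(p / 2) / r**(p / 2 - 1) * val

r = 1.7                                    # sample separation |Delta z_p|
for p in (2, 3):
    print(p, rhs(p, r), np.pi**(p / 2) * np.exp(-r**2 / 4))  # the two agree
\end{verbatim}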
After the repeated application of formula (\ref{GxxD2}), the Wightman
function for dS spacetime with spatial topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$ is presented in the form%
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=G_{\mathrm{dS}}^{+}(x,x^{\prime })+\Delta
G_{p,q}^{+}(x,x^{\prime }), \label{GdSGcomp}
\end{equation}%
where $G_{\mathrm{dS}}^{+}(x,x^{\prime })\equiv G_{D,0}^{+}(x,x^{\prime })$
is the corresponding function for uncompactified dS spacetime and the part%
\begin{equation}
\Delta G_{p,q}^{+}(x,x^{\prime })=\sum_{l=1}^{q}\Delta
_{D-l+1}G_{D-l,l}^{+}(x,x^{\prime }), \label{DeltaGtop}
\end{equation}%
is induced by the toroidal compactification of the $q$-dimensional subspace.
The two-point function in the uncompactified dS spacetime is investigated in
\cite{Cand75,Dowk76,Bunc78,Bros96,Bous02} (see also \cite{Birr82}) and is
given by the formula%
\begin{equation}
G_{\mathrm{dS}}^{+}(x,x^{\prime })=\frac{\alpha ^{1-D}\Gamma (D/2+\nu
)\Gamma (D/2-\nu )}{2^{(D+3)/2}\pi ^{(D+1)/2}\left( u^{2}-1\right) ^{(D-1)/4}%
}P_{\nu -1/2}^{(1-D)/2}(u), \label{WFdS}
\end{equation}%
where $P_{\nu }^{\mu }(x)$ is the associated Legendre function of the first
kind and
\begin{equation}
u=-1+\frac{\sum_{l=1}^{D}(z^{l}-z^{\prime l})^{2}-(\eta -\eta ^{\prime })^{2}%
}{2\eta \eta ^{\prime }}. \label{u}
\end{equation}%
An alternative form is obtained by using the relation between the
associated Legendre function and the hypergeometric function.
\section{Vacuum expectation values of the field square}
\label{sec:vevPhi2}
We denote by $\langle \varphi ^{2}\rangle _{p,q}$ the VEV of the field
square in dS spacetime with spatial topology $\mathrm{R}^{p}\times (\mathrm{S%
}^{1})^{q}$. Having the Wightman function we can evaluate this VEV taking
the coincidence limit of the arguments. Of course, in this limit the
two-point functions are divergent and some renormalization procedure is
needed. The important point here is that the local geometry is not changed
by the toroidal compactification and the divergences are the same as in the
uncompactified dS spacetime. Since in our procedure we have already extracted
from the Wightman function the part $G_{\mathrm{dS}}^{+}(x,x^{\prime })$,
the renormalization of the VEVs is reduced to the renormalization of the
uncompactified dS part, which has already been done in the literature. The VEV\ of the
field square is presented in the decomposed form%
\begin{equation}
\langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle _{\mathrm{dS}%
}+\langle \varphi ^{2}\rangle _{c},\;\langle \varphi ^{2}\rangle
_{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle \varphi ^{2}\rangle _{D-l,l},
\label{phi2dSplComp}
\end{equation}%
where $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is the VEV in
uncompactified $\mathrm{dS}_{D+1}$ and the part $\langle \varphi ^{2}\rangle
_{c}$ is due to the compactness of the $q$-dimensional subspace. Here the
term $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ is defined by the
relation similar to (\ref{G1decomp}):
\begin{equation}
\langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle
_{p+1,q-1}+\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}.
\label{phi2decomp}
\end{equation}%
This term is the part in the VEV induced by the compactness of the $z^{p+1}$
- direction. This part is directly obtained from (\ref{GxxD2}) in the
coincidence limit of the arguments:%
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{2\alpha ^{1-D}\eta
^{D}}{2^{p}\pi ^{p/2+1}\Gamma (p/2)V_{q-1}}\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p-1}
\notag \\
&&\times \int_{0}^{\infty }dx\,\frac{xK_{\nu }(x\eta )\left[ I_{-\nu }(x\eta
)+I_{\nu }(x\eta )\right] }{\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}%
_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}%
_{q-1}}^{2}}}-1)}. \label{phi2Dc}
\end{eqnarray}%
Introducing, instead of $|\mathbf{k}_{p}|$, a new integration variable $y=%
\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}$ and expanding $%
(e^{L_{p+1}y}-1)^{-1}$, the integral over $y$ is evaluated explicitly and one finds%
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelPhi2}
\end{eqnarray}%
where we use the notation%
\begin{equation}
f_{\mu }(y)=y^{\mu }K_{\mu }(y). \label{fmunot}
\end{equation}%
By taking into account the relation between the conformal and synchronous
time coordinates, we see that the VEV of the field square is a function of
the combinations $L_{l}/\eta =L_{l}e^{t/\alpha }/\alpha $. In the limit when
the length of one of the compactified dimensions, say $z^{l}$, $%
l\geqslant p+2$, is large, $L_{l}\rightarrow \infty $, the main contribution
to the sum over $n_{l}$ in (\ref{DelPhi2}) comes from large values of $%
n_{l}$ and we can replace the summation by an integration in accordance
with the formula%
\begin{equation}
\frac{1}{L_{l}}\sum_{n_{l}=-\infty }^{+\infty }f(2\pi n_{l}/L_{l})=\frac{1}{%
\pi }\int_{0}^{\infty }dy\,f(y). \label{sumtoint}
\end{equation}%
The integral over $y$ is evaluated by using the formula from \cite{Prud86},
and we see that in this limit (\ref{DelPhi2}) reduces to the corresponding
formula for the topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$.
For a conformally coupled massless scalar field one has $\nu =1/2$ and $%
\left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)=1/x$. In this case the
corresponding integral in formula (\ref{DelPhi2}) is explicitly evaluated
and we find%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}=\frac{2(\eta /\alpha )^{D-1}%
}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\frac{f_{p/2}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p}}%
,\;\xi =\xi _{D},\;m=0. \label{DelPhi2Conf}
\end{equation}%
In particular, the topological part is always positive. Formula (\ref%
{DelPhi2Conf}) could also be obtained from the corresponding result in $%
(D+1) $-dimensional Minkowski spacetime with spatial topology $\mathrm{R}%
^{p}\times (\mathrm{S}^{1})^{q}$, taking into account that the two problems are
conformally related: $\Delta _{p+1}\langle \varphi ^{2}\rangle
_{p,q}=a^{1-D}(\eta )\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{%
\mathrm{(M)}}$, where $a(\eta )=\alpha /\eta $ is the scale factor. This
relation is valid for any conformally flat bulk. A similar formula holds
for the total topological part $\langle \varphi ^{2}\rangle _{c}$.
Note that in this case the expressions for $\Delta _{p+1}\langle \varphi
^{2}\rangle _{p,q}$ are obtained from the formulae for $\Delta _{p+1}\langle
\varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}}$ by replacing the lengths $L_{l}$ of
the compactified dimensions by the comoving lengths $\alpha L_{l}/\eta $, $%
l=p+1,\ldots ,D$.
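As an illustrative numerical check of (\ref{DelPhi2}) (a Python sketch, assuming SciPy is available), for $D=3$, a single compactified dimension ($p=2$, $q=1$) and a conformally coupled massless field ($\nu =1/2$), a direct evaluation of (\ref{DelPhi2}) reproduces the conformally rescaled Minkowski value $\Delta _{3}\langle \varphi ^{2}\rangle _{2,1}=(\eta /\alpha )^{2}/(12L^{2})$, in agreement with (\ref{DelPhi2Conf}).
\begin{verbatim}
# Sketch: evaluate Eq. (DelPhi2) for D = 3, p = 2, q = 1 (topology R^2 x S^1),
# nu = 1/2, and compare with (eta/alpha)^2 / (12 L^2) from Eq. (DelPhi2Conf).
import numpy as np
from scipy import integrate, special

D, p, nu = 3, 2, 0.5
L, eta, alpha = 2.3, 1.0, 1.0            # sample values

def f(mu, y):                            # f_mu(y) = y^mu K_mu(y)
    return y**mu * special.kv(mu, y)

def integrand(x, n):
    # K_nu(x eta)[I_{-nu}(x eta) + I_nu(x eta)], written with the scaled
    # Bessel functions kve, ive to avoid overflow at large x
    comb = special.kve(nu, x * eta) * (special.ive(-nu, x * eta)
                                       + special.ive(nu, x * eta))
    return x * comb * f((p - 1) / 2, n * L * x) / (n * L)**(p - 1)

pref = 4 * alpha**(1 - D) * eta**D / (2 * np.pi)**((p + 3) / 2)
total = pref * sum(integrate.quad(integrand, 0, np.inf, args=(n,))[0]
                   for n in range(1, 501))
print(total, (eta / alpha)**2 / (12 * L**2))  # agree to ~0.1% (n-sum truncation)
\end{verbatim}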
Now we turn to the investigation of the topological part $\Delta
_{p+1}\langle \varphi ^{2}\rangle _{p,q}$ in the VEV of the field square in
the asymptotic regions of the ratio $L_{p+1}/\eta $. For small values of
this ratio, $L_{p+1}/\eta \ll 1$, we introduce a new integration variable $%
y=L_{p+1}x$. By taking into account that for large values of $x$ one has $\left[
I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\approx 1/x$, we find that to the
leading order $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ coincides
with the corresponding result for a conformally coupled massless field,
given by (\ref{DelPhi2Conf}):%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx (\eta /\alpha
)^{D-1}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}%
},\;L_{p+1}/\eta \ll 1. \label{DelPhi2Poq}
\end{equation}%
For a fixed value of the ratio $L_{p+1}/\alpha $, this limit corresponds to $%
t\rightarrow -\infty $ and the topological part $\langle \varphi ^{2}\rangle
_{c}$ behaves like $\exp [-(D-1)t/\alpha ]$. By taking into account that the
part $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is time independent, from
here we conclude that in the early stages of the cosmological expansion the
topological part dominates in the VEV\ of the field square.
For small values of the ratio $\eta /L_{p+1}$, we introduce a new
integration variable $y=L_{p+1}x$ and expand the integrand by using the
formulae for the Bessel modified functions for small arguments. For real
values of the parameter $\nu $, after the integration over $y$ by using the
formula from \cite{Prud86}, to the leading order we find%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{2^{(1-p)/2+\nu
}\eta ^{D-2\nu }\Gamma (\nu )}{\pi ^{(p+3)/2}V_{q-1}\alpha ^{D-1}}%
\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{%
f_{(p+1)/2-\nu }(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p+1-2\nu }}%
,\;\eta /L_{p+1}\ll 1. \label{DelPhi2Mets}
\end{equation}%
In the case of a conformally coupled massless scalar field $\nu =1/2$ and
this formula reduces to the exact result given by Eq. (\ref{DelPhi2Conf}).
For fixed values of $L_{p+1}/\alpha $, the limit under consideration
corresponds to late stages of the cosmological evolution, $t\rightarrow
+\infty $, and the topological part $\langle \varphi ^{2}\rangle _{c}$ is
suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$. Hence, in this limit
the total VEV is dominated by the uncompactified dS part $\langle \varphi
^{2}\rangle _{\mathrm{dS}}$. Note that formula (\ref{DelPhi2Mets}) also describes the
asymptotic behavior of the topological part in the strong curvature regime
corresponding to small values of the parameter $\alpha $.
In the same limit, for pure imaginary values of the parameter $\nu $ in a
similar way we find the following asymptotic behavior
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &\approx &\frac{4\alpha
^{1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n%
}_{q-1}=-\infty }^{+\infty }\frac{1}{(nL_{p+1})^{p+1}} \notag \\
&&\times {\mathrm{Re}}\left[ 2^{i|\nu |}\Gamma (i|\nu |)(nL_{p+1}/\eta
)^{2i|\nu |}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{\mathbf{n}_{q-1}})\right] .
\label{DelPhi2MetsIm}
\end{eqnarray}%
Defining the phase $\phi _{0}$ by the relation
\begin{equation}
Be^{i\phi _{0}}=2^{i|\nu |}\Gamma (i|\nu |)\sum_{n=1}^{\infty }\sum_{\mathbf{%
n}_{q-1}=-\infty }^{+\infty }n^{2i|\nu |-p-1}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{%
\mathbf{n}_{q-1}}), \label{Bphi0}
\end{equation}%
we write this formula in terms of the synchronous time:%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{4\alpha
e^{-Dt/\alpha }B}{(2\pi )^{(p+3)/2}L_{p+1}^{p+1}V_{q-1}}\cos [2|\nu
|t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}]. \label{DelPhi2MetsIm1}
\end{equation}%
Hence, in the case under consideration at late stages of the cosmological
evolution the topological part is suppressed by the factor $\exp (-Dt/\alpha
)$ and the damping of the corresponding VEV has an oscillatory nature.
\section{Vacuum energy-momentum tensor}
\label{sec:vevEMT2}
In this section we investigate the VEV for the energy-momentum tensor of a
scalar field in $\mathrm{dS}_{D+1}$ with toroidally compactified $q$%
-dimensional subspace. In addition to describing the physical structure of
the quantum field at a given point, this quantity acts as the source of
gravity in the semiclassical Einstein equations. It therefore plays an
important role in modelling self-consistent dynamics involving the
gravitational field. Having the Wightman function and the VEV of the field
square we can evaluate the vacuum energy-momentum tensor by using the formula%
\begin{equation}
\langle T_{ik}\rangle _{p,q}=\lim_{x^{\prime }\rightarrow x}\partial
_{i}\partial _{k}^{\prime }G_{p,q}^{+}(x,x^{\prime })+\left[ \left( \xi -%
\frac{1}{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\xi \nabla _{i}\nabla
_{k}-\xi R_{ik}\right] \langle \varphi ^{2}\rangle _{p,q}, \label{emtvev1}
\end{equation}%
where $R_{ik}=Dg_{ik}/\alpha ^{2}$ is the Ricci tensor for $\mathrm{dS}_{D+1}
$. Note that in (\ref{emtvev1}) we have used the expression for the
classical energy-momentum tensor which differs from the standard one by the
term which vanishes on the solutions of the field equation (see, for
instance, Ref. \cite{Saha04}). As in the case of the field square, the VEV
of the energy-momentum tensor is presented in the form%
\begin{equation}
\langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{p+1,q-1}+\Delta
_{p+1}\langle T_{i}^{k}\rangle _{p,q}. \label{TikDecomp}
\end{equation}%
Here $\langle T_{i}^{k}\rangle _{p+1,q-1}$ is the part corresponding to dS
spacetime with $p+1$ uncompactified and $q-1$ toroidally compactified
dimensions and $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ is induced by
the compactness along the $z^{p+1}$ - direction. The recurring application
of formula (\ref{TikDecomp}) allows us to write the VEV in the form%
\begin{equation}
\langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{\mathrm{dS}%
}+\langle T_{i}^{k}\rangle _{c},\;\langle T_{i}^{k}\rangle
_{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle T_{i}^{k}\rangle _{D-l,l},
\label{TikComp}
\end{equation}%
where the part corresponding to uncompactified dS spacetime, $\langle
T_{i}^{k}\rangle _{\mathrm{dS}}$, is explicitly decomposed. The part $%
\langle T_{i}^{k}\rangle _{c}$ is induced by the compactness of the $q$%
-dimensional subspace.
The second term on the right of formula (\ref{TikDecomp}) is obtained by
substituting the corresponding parts in the Wightman function, Eq. (\ref%
{GxxD2}), and in the field square, Eq. (\ref{DelPhi2}), into formula (\ref%
{emtvev1}). After the lengthy calculations for the energy density one finds%
\begin{eqnarray}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &=&\frac{2\alpha ^{-1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx \notag \\
&&\times \frac{xF^{(0)}(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{%
x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelT00}
\end{eqnarray}%
with the notation%
\begin{eqnarray}
F^{(0)}(y) &=&y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu }^{\prime }(y)\right]
K_{\nu }^{\prime }(y)+D(1/2-2\xi )y\left[ (I_{-\nu }(y)+I_{\nu }(y))K_{\nu
}(y)\right] ^{\prime } \notag \\
&&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left( \nu
^{2}+2m^{2}\alpha ^{2}-y^{2}\right) , \label{F0}
\end{eqnarray}%
and the function $f_{\mu }(y)$ is defined by formula (\ref{fmunot}). The
vacuum stresses are presented in the form (no summation over $i$)%
\begin{eqnarray}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q} &=&A_{p,q}-\frac{4\alpha
^{-1-D}\eta ^{D+2}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{%
\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta )
\notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p+1}}%
f_{p}^{(i)}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelTii}
\end{eqnarray}%
where we have introduced the notations%
\begin{eqnarray}
f_{p}^{(i)}(y) &=&f_{(p+1)/2}(y),\;i=1,\ldots ,p, \notag \\
f_{p}^{(p+1)}(y) &=&-y^{2}f_{(p-1)/2}(y)-pf_{(p+1)/2}(y), \label{fp+1} \\
f_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{(p-1)/2}(y),\;i=p+2,\ldots ,D.
\notag
\end{eqnarray}%
In formula (\ref{DelTii}) (no summation over $i$, $i=1,\ldots ,D$),
\begin{eqnarray}
A_{p,q} &=&\left[ \left( \xi -\frac{1}{4}\right) \nabla _{l}\nabla ^{l}-\xi
g^{ii}\nabla _{i}\nabla _{i}-\xi R_{i}^{i}\right] \Delta _{p+1}\langle
\varphi ^{2}\rangle _{p,q} \notag \\
&=&\frac{2\alpha ^{-1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}%
\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}\int_{0}^{\infty }dx\,\frac{xF(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{A}
\end{eqnarray}%
with the notation%
\begin{eqnarray}
F(y) &=&\left( 4\xi -1\right) y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu
}^{\prime }(y)\right] K_{\nu }^{\prime }(y)+\left[ 2(D+1)\xi -D/2\right] y(%
\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y))^{\prime } \notag \\
&&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left[ \left( 4\xi
-1\right) \left( y^{2}+\nu ^{2}\right) \right] . \label{Fy}
\end{eqnarray}%
As is seen from the obtained formulae, the topological parts in the VEVs
are time-dependent and, hence, the local dS symmetry is broken by them.
As an additional check of our calculations it can be seen that the
topological terms satisfy the trace relation
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=D(\xi -\xi _{D})\nabla
_{l}\nabla ^{l}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}+m^{2}\Delta
_{p+1}\langle \varphi ^{2}\rangle _{p,q}. \label{tracerel}
\end{equation}%
In particular, from here it follows that the topological part in the VEV\ of
the energy-momentum tensor is traceless for a conformally coupled massless
scalar field. The trace anomaly is contained in the uncompactified dS part
only. We could expect this result, as the trace anomaly is determined by the
local geometry and the local geometry is not changed by the toroidal
compactification.
For a conformally coupled massless scalar field $\nu =1/2$ and, by using the
formulae for $I_{\pm 1/2}(x)$ and $K_{1/2}(x)$, after the integration over $%
x $ from formulae (\ref{DelT00}), (\ref{DelTii}) we find (no summation over $%
i$)%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=-\frac{2(\eta /\alpha )^{D+1}}{%
(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(nL_{p+1})^{p+2}%
}, \label{DelTConf}
\end{equation}%
with the notations%
\begin{eqnarray}
g_{p}^{(0)}(y) &=&g_{p}^{(i)}(y)=f_{p/2+1}(y),\;i=1,\ldots ,p, \notag \\
g_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{p/2}(y),\;i=p+2,\ldots ,D,
\label{gi} \\
g_{p}^{(p+1)}(y) &=&-(p+1)f_{p/2+1}(y)-y^{2}f_{p/2}(y). \notag
\end{eqnarray}%
As in the case of the field square, this formula can be directly obtained by
using the conformal relation between the problem under consideration and the
corresponding problem in $(D+1)$-dimensional Minkowski spacetime with the
spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. Note that in
this case the topological part in the energy density is always negative and
is equal to the vacuum stresses along the uncompactified dimensions. In
particular, for the case $D=3$, $p=0$ (topology $(\mathrm{S}^{1})^{3}$) and
for $L_{i}=L$, $i=1,2,3$, from formulae (\ref{TikComp}), (\ref{DelTConf})
for the topological part in the vacuum energy density we find $\langle
T_{0}^{0}\rangle _{c}=-0.8375(a(\eta )L)^{-4}$ (see, for example, Ref. \cite%
{Most97}).
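The coefficient quoted above can be reproduced numerically from (\ref{DelTConf}) and (\ref{TikComp}); the short Python sketch below (assuming SciPy, and using the limiting value $f_{\mu }(0)=2^{\mu -1}\Gamma (\mu )$ for the zero modes) gives $\approx -0.8375$ in units of $(\eta /(\alpha L))^{4}$.
\begin{verbatim}
# Sketch: reproduce <T_0^0>_c ~ -0.8375 (eta/(alpha L))^4 for a conformally
# coupled massless scalar, D = 3, topology (S^1)^3 with equal lengths, from
# Eq. (DelTConf) summed over the three terms of Eq. (TikComp). Units: L = 1.
import numpy as np
from scipy import special

D, L = 3, 1.0
N_SUM, N_MODE = 4000, 6          # truncations of the n sum and the mode grid

def f(mu, y):
    # f_mu(y) = y^mu K_mu(y); equals 2^(mu-1) Gamma(mu) in the limit y -> 0
    y = np.asarray(y, dtype=float)
    safe = np.where(y > 0, y, 1.0)
    return np.where(y > 0, safe**mu * special.kv(mu, safe),
                    2.0**(mu - 1) * special.gamma(mu))

def delta_T00(p):
    # Delta_{p+1} <T_0^0>_{p,q} from Eq. (DelTConf) with i = 0, equal lengths
    q = D - p
    if q == 1:
        kperp = np.array([0.0])
    else:
        mesh = np.meshgrid(*([np.arange(-N_MODE, N_MODE + 1)] * (q - 1)))
        kperp = 2 * np.pi / L * np.sqrt(sum(m.astype(float)**2 for m in mesh)).ravel()
    s = sum(np.sum(f(p / 2 + 1, n * L * kperp)) / (n * L)**(p + 2)
            for n in range(1, N_SUM + 1))
    return -2.0 * s / ((2 * np.pi)**(p / 2 + 1) * L**(q - 1))

print(sum(delta_T00(p) for p in (2, 1, 0)))   # approximately -0.8375
\end{verbatim}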
The general formulae for the topological part in the VEV of the energy
density are simplified in the asymptotic regions of the parameters. For
small values of the ratio $L_{p+1}/\eta $ we can see that to the leading
order $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ coincides with the
corresponding result for a conformally coupled massless field (no summation
over $i$):%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx -\frac{2(\eta /\alpha
)^{D+1}}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{%
(nL_{p+1})^{p+2}},\;L/\eta \ll 1. \label{TiiSmall}
\end{equation}%
For fixed values of the ratio $L_{p+1}/\alpha $, this formula describes the
asymptotic behavior of the VEV at the early stages of the cosmological
evolution corresponding to $t\rightarrow -\infty $. In this limit the
topological part behaves as $\exp [-(D+1)t/\alpha ]$ and, hence, it
dominates the part corresponding to the uncompactified dS spacetime which is
time independent. In particular, the total energy density is negative.
In the opposite limit of small values for the ratio $\eta /L_{p+1}$ we
introduce in the formulae for the VEV of the energy-momentum tensor an
integration variable $y=L_{p+1}x$ and expand the integrands over $\eta
/L_{p+1}$. For real values of the parameter $\nu $, for the energy density
to the leading order we find%
\begin{eqnarray}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &\approx &\frac{2^{\nu }D\left[
D/2-\nu +2\xi \left( 2\nu -D-1\right) \right] }{(2\pi
)^{(p+3)/2}L_{p+1}^{1-q}V_{q-1}\alpha ^{D+1}}\Gamma (\nu ) \notag \\
&&\times \left( \frac{\eta }{L_{p+1}}\right) ^{D-2\nu }\sum_{n=1}^{\infty
}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{f_{(p+1)/2-\nu
}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{n^{(p+1)/2-\nu }}. \label{T00smallEta}
\end{eqnarray}%
In particular, this energy density is positive for a minimally coupled
scalar field and for a conformally coupled massive scalar field. Note that
for a conformally coupled massless scalar the coefficient in (\ref%
{T00smallEta}) vanishes. For the vacuum stresses the second term on the
right of formula (\ref{DelTii}) is suppressed with respect to the first term
given by (\ref{A}) by the factor $(\eta /L_{p+1})^{2}$ for $i=1,\ldots ,p+1$%
, and by the factor $(\eta k_{i})^{2}$ for $i=p+2,\ldots ,D$. As a result,
to the leading order we have the relation (no summation over $i$)
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{2\nu }{D}\Delta
_{p+1}\langle T_{0}^{0}\rangle _{p,q},\;\eta /L_{p+1}\ll 1,
\label{TiismallEta}
\end{equation}%
between the energy density and stresses, $i=1,\ldots ,D$. The coefficient in
this relation does not depend on $p$ and, hence, it holds for the
total topological part of the VEV as well. Thus, in the limit under
consideration the topological parts in the vacuum stresses are isotropic and
correspond to a gravitational source with a barotropic equation of state.
Note that this limit corresponds to late times in terms of synchronous time
coordinate $t$, $(\alpha /L_{p+1})e^{-t/\alpha }\ll 1$, and the topological
part in the VEV is suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$.
For a conformally coupled massless scalar field the coefficient of the
leading term vanishes and the topological parts are suppressed by the factor
$\exp [-(D+1)t/\alpha ]$. As the uncompactified dS part is constant, it
dominates the topological part at the late stages of the cosmological
evolution.
For small values of the ratio $\eta /L_{p+1}$ and for purely imaginary $\nu $%
, in a way similar to that used for the case of the field square we can
see that the energy density behaves like%
\begin{equation}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q}\approx \frac{4De^{-Dt/\alpha
}BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\sin [2|\nu |t/\alpha
+2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}], \label{T00ImEta}
\end{equation}%
where the coefficient $B_{D}$ and the phase $\phi _{1}$ are defined by the
relation%
\begin{equation}
|\nu |(1/2-2\xi )+i\left[ D/4-(D+1)\xi \right] =B_{D}e^{i\phi _{1}}.
\label{DefBD}
\end{equation}%
In the same limit, the main contribution to the vacuum stresses comes from
the term $A_{p,q}$ in (\ref{A}) and one has (no summation over $i$)%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{8|\nu
|e^{-Dt/\alpha }BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\cos
[2|\nu |t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}].
\label{TiiImEta}
\end{equation}%
As we see, in the limit under consideration to the leading order the vacuum
stresses are isotropic.
\section{Twisted scalar field}
\label{sec:Twisted}
One of the characteristic features of field theory on backgrounds with
non-trivial topology is the appearance of topologically inequivalent field
configurations \cite{Isha78}. In this section we consider the case of a
twisted scalar field on background of dS spacetime with the spatial topology
$\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ assuming that the field obeys
the antiperiodicity condition (no summation over $l$)%
\begin{equation}
\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=-\varphi (t,%
\mathbf{z}_{p},\mathbf{z}_{q}), \label{AntiPer}
\end{equation}%
where $\mathbf{e}_{l}$ is the unit vector along the direction of the
coordinate $z^{l}$, $l=p+1,\ldots ,D$. The corresponding Wightman function
and the VEVs of the field square and the energy-momentum tensor can be found
in a way similar to that for the field with periodicity conditions. The
eigenfunctions have the form given by (\ref{eigfuncD}), where now%
\begin{equation}
k_{l}=2\pi (n_{l}+1/2)/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D.
\label{nltwisted}
\end{equation}%
The positive frequency Wightman function is still given by formula (\ref%
{GxxD}). For the summation over $n_{p+1}$ we apply the Abel-Plana formula in
the form \cite{Most97,Saha07Gen}%
\begin{equation}
\sum_{n=0}^{\infty }f(n+1/2)=\int_{0}^{\infty }dx\,f(x)-i\int_{0}^{\infty
}dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}+1}. \label{abel2}
\end{equation}%
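As a simple illustration (a Python sketch assuming SciPy), both summation formulae (\ref{Abel}) and (\ref{abel2}) can be checked with the test function $f(x)=e^{-x}$, for which $i\left[ f(ix)-f(-ix)\right] =2\sin x$ and both sides are evaluated directly.
\begin{verbatim}
# Sketch: numerical check of the Abel-Plana formulae (Abel) and (abel2) with
# the test function f(x) = exp(-x), for which i[f(ix) - f(-ix)] = 2 sin(x).
import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x)
base = integrate.quad(f, 0, np.inf)[0]                 # int_0^infty f(x) dx

# Eq. (Abel): sum'_{n>=0} f(n), the n = 0 term taken with weight 1/2
lhs = 0.5 * f(0) + sum(f(n) for n in range(1, 200))
bose = lambda x: 2 * np.sin(x) * np.exp(-2 * np.pi * x) / (1 - np.exp(-2 * np.pi * x))
print(lhs, base + integrate.quad(bose, 0, np.inf)[0])  # both ~ coth(1/2)/2

# Eq. (abel2): sum_{n>=0} f(n + 1/2)
lhs2 = sum(f(n + 0.5) for n in range(0, 200))
fermi = lambda x: 2 * np.sin(x) * np.exp(-2 * np.pi * x) / (1 + np.exp(-2 * np.pi * x))
print(lhs2, base - integrate.quad(fermi, 0, np.inf)[0])  # both ~ 1/(2 sinh(1/2))
\end{verbatim}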
Similar to (\ref{GxxD2}), for the correction to the Wightman function due to
the compactness of the $(p+1)$th spatial direction this leads to the result
\begin{eqnarray}
\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&-\frac{2\alpha ^{1-D}(\eta \eta
^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}%
_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\
&&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}+1)} \notag \\
&&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta
x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2tw}
\end{eqnarray}%
where now $\mathbf{k}_{q-1}=(\pi (2n_{p+2}+1)/L_{p+2},\ldots ,\pi
(2n_{D}+1)/L_{D})$, and
\begin{equation}
k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}\left[ \pi (2n_{l}+1)/L_{l}\right]
^{2}. \label{knqtw}
\end{equation}%
Taking the coincidence limit of the arguments, for the VEV of the field
square we find
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }(-1)^{n}\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}),
\label{DelPhi2tw}
\end{eqnarray}%
with the notations being the same as in (\ref{DelPhi2}). Note that in this
formula we can put $\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }=2^{q-1}\sum_{%
\mathbf{n}_{q-1}=0}^{+\infty }$. In particular, for the topology $\mathrm{R}%
^{D-1}\times \mathrm{S}^{1}$ with a single compactified dimension of the
length $L_{D}=L$, considered in \cite{Saha07}, we have $\langle \varphi
^{2}\rangle _{c}=\Delta _{D}\langle \varphi ^{2}\rangle _{D-1,1}$ with the
topological part given by the formula%
\begin{eqnarray}
\langle \varphi ^{2}\rangle _{c} &=&\frac{4\alpha ^{1-D}}{(2\pi )^{D/2+1}}%
\sum_{n=1}^{\infty }(-1)^{n}\int_{0}^{\infty }dx\,x^{D-1} \notag \\
&&\times \left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\frac{%
K_{D/2-1}(nLx/\eta )}{(nLx/\eta )^{D/2-1}}. \label{phi2SingComp}
\end{eqnarray}%
In figure \ref{fig1} we have plotted the topological part in the VEV of the
field square in the case of a conformally coupled twisted massive scalar ($%
\xi =\xi _{D}$) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}%
^{2}\times \mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right
panel) as a function of $L/\eta =Le^{t/\alpha }/\alpha $. In the second case
we have taken the lengths for all compactified dimensions being the same: $%
L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves correspond to the
values of the parameter $m\alpha $. Note that we have presented conformally
non-trivial examples and the graphs are plotted by using the general formula
(\ref{DelPhi2tw}). For the case $m\alpha =1$ the parameter $\nu $ is pure
imaginary and in accordance with the asymptotic analysis given above the
behavior of the field square is oscillatory for large values of the ratio $%
L/\eta $. For the left panel in figure \ref{fig1} the first zero is for $%
L/\eta \approx 8.35$ and for the right panel $L/\eta \approx 9.57$.
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=sahfig1a.eps,width=7.cm,height=6cm} & \quad %
\epsfig{figure=sahfig1b.eps,width=7.cm,height=6cm}%
\end{tabular}%
\end{center}
\caption{The topological part in the VEV of the field square in the case of
a conformally coupled twisted massive scalar ($\protect\xi =\protect\xi _{D}$%
) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}^{2}\times
\mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right panel) as a
function of $L/\protect\eta =Le^{t/\protect\alpha }/\protect\alpha $. In the
second case we have taken the lengths for all compactified dimensions being
the same: $L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves
correspond to the values of the parameter $m\protect\alpha $. }
\label{fig1}
\end{figure}
In the case of a twisted scalar field the formulae for the VEV of the
energy-momentum tensor are obtained from formulae for the untwisted field
given in the previous section (formulae (\ref{DelT00}), (\ref{DelTii})) with
$k_{\mathbf{n}_{q-1}}^{2}$ from (\ref{knqtw}) and by making the replacement%
\begin{equation}
\sum_{n=1}^{\infty }\rightarrow \sum_{n=1}^{\infty }(-1)^{n},\
\label{SumRepl}
\end{equation}%
and $k_{i}=2\pi (n_{i}+1/2)/L_{i}$ in expression (\ref{fp+1}) for $%
f_{p}^{(i)}(y)$, $i=p+2,\ldots ,D$. In figure \ref{fig2} the topological part
in the VEV of the energy density is plotted versus $L/\eta $ for a
conformally coupled twisted massive scalar in $D=3$ dS spacetime with
spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ (left panel) and $(%
\mathrm{S}^{1})^{3}$ (right panel). In the latter case the lengths of
compactified dimensions are the same. As in figure \ref{fig1}, the numbers
near the curves are the values of the parameter $m\alpha $. For $m\alpha =1$
the behavior of the energy density for large values of $L/\eta $ corresponds to
damped oscillations. In the case $m\alpha =0.25$ (the parameter $\nu $ is
real) for the example on the left panel the topological part of the energy
density vanishes for $L/\eta \approx 9.2$, takes the minimum value $\langle
T_{0}^{0}\rangle _{c}\approx -3.1\cdot 10^{-6}/\alpha ^{4}$ for $L/\eta
\approx 12.9$ and then monotonically goes to zero. For the example on the
right panel with $m\alpha =0.25$ the energy density vanishes for $L/\eta
\approx 45$, takes the minimum value $\langle T_{0}^{0}\rangle _{c}\approx
-1.1\cdot 10^{-8}/\alpha ^{4}$ for $L/\eta \approx 64.4$ and then
monotonically goes to zero. For a conformally coupled massless scalar field
in the case of topology $(\mathrm{S}^{1})^{3}$ one has $\langle
T_{0}^{0}\rangle _{c}=0.1957(\eta /\alpha L)^{4}$. Note that in the case of
topology $\mathrm{R}^{D-1}\times \mathrm{S}^{1}$ for a conformally coupled
massless scalar we have the formulae (no summation over $l$)%
\begin{eqnarray}
\langle T_{l}^{l}\rangle _{c} &=&\frac{1-2^{-D}}{\pi ^{(D+1)/2}}\left( \frac{%
\eta }{\alpha L}\right) ^{D+1}\zeta _{\mathrm{R}}(D+1)\Gamma \left( \frac{D+1%
}{2}\right) , \label{TllConfTwS1} \\
\langle T_{D}^{D}\rangle _{c} &=&-D\langle T_{0}^{0}\rangle _{c},\;\xi =\xi
_{D},\;m=0, \label{T00ConfTwS1}
\end{eqnarray}%
where $l=0,1,\ldots ,D-1$, and $\zeta _{\mathrm{R}}(x)$ is the Riemann zeta
function. The corresponding energy density is positive.
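For orientation, a short numerical evaluation of (\ref{TllConfTwS1}) for $D=3$ (a Python sketch assuming SciPy) gives $\langle T_{l}^{l}\rangle _{c}\approx 0.0960\,(\eta /\alpha L)^{4}$, $l=0,\ldots ,2$, and then (\ref{T00ConfTwS1}) gives $\langle T_{3}^{3}\rangle _{c}\approx -0.2879\,(\eta /\alpha L)^{4}$, so that the topological part is traceless, as it should be for a conformally coupled massless field.
\begin{verbatim}
# Sketch: evaluate Eq. (TllConfTwS1) for D = 3 (twisted conformally coupled
# massless scalar, topology R^{D-1} x S^1), in units (eta/(alpha L))^{D+1}.
import numpy as np
from scipy.special import gamma, zeta

D = 3
c = (1 - 2.0**(-D)) / np.pi**((D + 1) / 2) * zeta(D + 1) * gamma((D + 1) / 2)
print(c)        # <T_l^l>_c ~ 0.0960 for l = 0,...,D-1 (positive energy density)
print(-D * c)   # <T_D^D>_c ~ -0.2879, so the trace of the topological part vanishes
\end{verbatim}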
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=sahfig2a.eps,width=7.cm,height=6cm} & \quad %
\epsfig{figure=sahfig2b.eps,width=7.cm,height=6cm}%
\end{tabular}%
\end{center}
\caption{The same as in figure \protect\ref{fig1} for the topological part
of the energy density. }
\label{fig2}
\end{figure}
\section{Conclusion}
\label{sec:Conc}
In topologically non-trivial spaces the periodicity conditions imposed on
possible field configurations change the spectrum of the vacuum fluctuations
and lead to the Casimir-type contributions to the VEVs of physical
observables. Motivated by the fact that dS spacetime naturally arises in a
number of contexts, in the present paper we consider the quantum vacuum
effects for a massive scalar field with general curvature coupling in $(D+1)$%
-dimensional dS spacetime having the spatial topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$. Both cases of the periodicity and antiperiodicity
conditions along the compactified dimensions are discussed. As a first step
for the investigation of vacuum densities we evaluate the positive frequency
Wightman function. This function gives comprehensive insight into vacuum
fluctuations and determines the response of a particle detector of the
Unruh-DeWitt type. Applying the Abel-Plana formula to the corresponding
mode-sum, we have derived a recurrence relation which presents the Wightman
function for the $\mathrm{dS}_{D+1}$ with topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$ in the form of the sum of the Wightman function for the
topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ and the additional
part $\Delta _{p+1}G_{p,q}^{+}$ induced by the compactness of the $(p+1)$th
spatial dimension. The latter is given by formula (\ref{GxxD2}) for a scalar
field with periodicity conditions and by formula (\ref{GxxD2tw}) for a
twisted scalar field. The repeated application of formula (\ref{G1decomp})
allows us to present the Wightman function as the sum of the uncompactified
dS and topological parts, formula (\ref{DeltaGtop}). As the toroidal
compactification does not change the local geometry, in this way the
renormalization of the bilinear field products in the coincidence limit is
reduced to that for uncompactified $\mathrm{dS}_{D+1}$.
Further, taking the coincidence limit in the formulae for the Wightman
function and its derivatives, we evaluate the VEVs of the field square and
the energy-momentum tensor. For a scalar field with periodic conditions the
corresponding topological parts are given by formula (\ref{DelPhi2}) for the
field square and by formulae (\ref{DelT00}) and (\ref{DelTii}) for the
energy density and vacuum stresses respectively. The trace anomaly is
contained in the uncompactified dS part only and the topological part
satisfies the standard trace relation (\ref{tracerel}). In particular, this
part is traceless for a conformally coupled massless scalar. In this case
the problem under consideration is conformally related to the corresponding
problem in $(D+1)$-dimensional Minkowski spacetime with the spatial topology
$\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ and the topological parts in the
VEVs are related by the formulae $\langle \varphi ^{2}\rangle _{c}=(\eta
/\alpha )^{D-1}\langle \varphi ^{2}\rangle _{c}^{\mathrm{(M)}}$ and $\langle
T_{i}^{k}\rangle _{c}=(\eta /\alpha )^{D+1}\langle T_{i}^{k}\rangle _{c}^{%
\mathrm{(M)}}$. Note that for a conformally coupled massless scalar the
topological part in the energy density is always negative and is equal to
the vacuum stresses along the uncompactified dimensions.
For the general case of the curvature coupling, in the limit $L_{p+1}/\eta
\ll 1$ the leading terms in the asymptotic expansion of the VEVs coincide
with the corresponding expressions for a conformally coupled massless field.
In particular, this limit corresponds to the early stages of the
cosmological expansion, $t\rightarrow -\infty $, and the topological parts
behave like $e^{-(D-1)t/\alpha }$ for the field square and like $%
e^{-(D+1)t/\alpha }$ for the energy-momentum tensor. Taking into account
that the uncompactified dS part is time independent, from here we conclude
that in the early stages of the cosmological evolution the topological part
dominates in the VEVs. In the opposite asymptotic limit corresponding to $%
\eta /L_{p+1}\ll 1$, the behavior of the topological parts depends on the
value of the parameter $\nu $. For real values of this parameter the leading
terms in the corresponding asymptotic expansions are given by formulae (\ref%
{DelPhi2Mets}) and (\ref{T00smallEta}) for the field square and the
energy-momentum tensor respectively. The corresponding vacuum stresses are
isotropic and the topological part of the energy-momentum tensor corresponds
to the gravitational source of the barotropic type with the equation of
state parameter equal to $-2\nu /D$. In the limit under consideration the
topological part in the energy density is positive for a minimally coupled
scalar field and for a conformally coupled massive scalar field. In
particular, this limit corresponds to the late stages of the cosmological
evolution, $t\rightarrow +\infty $, and the topological parts of the VEVs
are suppressed by the factor $e^{-(D-2\nu )t/\alpha }$ for both the field
square and the energy-momentum tensor. For a conformally coupled massless
field the coefficient of the leading term in the asymptotic expansion
vanishes and the topological part is suppressed by the factor $%
e^{-(D+1)t/\alpha }$. In the limit $\eta /L_{p+1}\ll 1$ and for pure
imaginary values of the parameter $\nu $ the asymptotic behavior of the
topological parts in the VEVs of the field square and the energy-momentum
tensor is described by formulae (\ref{DelPhi2MetsIm1}), (\ref{T00ImEta}), (%
\ref{TiiImEta}). These formulae present the leading term in the asymptotic
expansion of the topological parts at late stages of the cosmological
evolution. In this limit the topological terms oscillate with the amplitude
going to zero as $e^{-Dt/\alpha }$ for $t\rightarrow +\infty $. The
phases of the oscillations for the energy density and vacuum stresses are
shifted by $\pi /2$.
In section \ref{sec:Twisted} we have considered the case of a scalar field
with antiperiodicity conditions along the compactified directions. The
Wightman function and the VEVs of the field square and the energy-momentum
tensor are evaluated in a way similar to that for the field with
periodicity conditions. The corresponding formulae are obtained from the
formulae for the untwisted field with $k_{\mathbf{n}_{q-1}}^{2}$ defined by
Eq. (\ref{knqtw}) and by making the replacement (\ref{SumRepl}). In this
case we have also presented the graphs of the topological parts in the VEVs
of the field square and the energy-momentum tensor for $\mathrm{dS}_{4}$
with the spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ and $(%
\mathrm{S}^{1})^{3}$.
\section*{Acknowledgments}
AAS would like to acknowledge the hospitality of the INFN Laboratori
Nazionali di Frascati, Frascati, Italy. The work of AAS was supported in
part by the Armenian Ministry of Education and Science Grant. The work of SB
has been supported in part by the European Community Human Potential Program
under contract MRTN-CT-2004-005104 ``Constituents, fundamental forces and
symmetries of the Universe'' and by INTAS under contract 05-7928.
|
\section{Introduction}
\label{sec1}
\IEEEPARstart{W}{ith} the development of the internet of vehicles (IoV) and cloud computing, caching technology facilitates various real-time vehicular applications for vehicular users (VUs), such as automatic navigation, pattern recognition and multimedia entertainment \cite{Liuchen2021}, \cite{QWu2022}. For the standard caching technology, the cloud caches various contents like data, video and web pages. In this scheme, vehicles transmit requests for the required contents to a macro base station (MBS) connected to a cloud server, and then fetch the contents from the MBS, which would cause high content transmission delay from the MBS to vehicles due to the communication congestion caused by frequently requested contents from vehicles \cite{Dai2019}. The content transmission delay can be effectively reduced by the emergence of vehicular edge computing (VEC), which caches contents in the road side unit (RSU) deployed at the edge of vehicular networks (VNs) \cite{Javed2021}. Thus, vehicles can fetch contents directly from the local RSU to reduce the content transmission delay. In the VEC, since the caching capacity of the local RSU is limited, if some vehicles cannot fetch their required contents, a neighboring RSU that has the required contents could forward them to the local RSU. In the worst case, vehicles need to fetch contents from the MBS because neither the local RSU nor the neighboring RSU has cached the requested contents.
In the VEC, it is critical to design a caching scheme to cache the popular contents. The traditional caching schemes cache contents based on the previously requested contents \cite{Narayanan2018}. However, owing to the high-mobility characteristics of vehicles in VEC, the previously requested contents from vehicles may become outdated quickly, thus the traditional caching schemes may not satisfy all the VUs' requirements. Therefore, it is necessary to predict the most popular contents in the VEC and cache them in the suitable RSUs in advance. Machine learning (ML), as a new tool, can extract hidden features by training on user data to efficiently predict popular contents \cite{Yan2019}. However, the user data usually contains private information and users are reluctant to share their data directly with others, which makes it difficult to collect and train on users' data. Federated learning (FL) can protect the privacy of users by sharing their local models instead of data \cite{Chen2021}. In traditional FL, the global model is periodically updated by aggregating all vehicles' local models \cite{Wang2020}--\cite{Cheng2021}. However, vehicles may frequently drive out of the coverage area of the VEC before they update their local models, and thus the local models cannot be uploaded in the same area, which would reduce the accuracy of the global model as well as the probability of obtaining the predicted popular contents. This motivates us to consider the mobility of vehicles and propose an asynchronous FL to accurately predict the popular contents in VEC.
Generally, the predicted popular contents should be cached in the local RSU of the vehicles to guarantee a low content transmission delay. However, the caching capacity of each local RSU is limited and the popular contents may be diverse, thus the size of the predicted popular contents usually exceeds the cache capacity of the local RSU. Hence, the VEC has to determine where the predicted popular contents are cached and updated. The content transmission delay is an important metric for vehicles to provide real-time vehicular applications. The different popular contents cached in the local and neighboring RSUs would impact the way vehicles fetch contents, and thus affect the content transmission delay. In addition, the content transmission delay of each vehicle is impacted by its channel condition, which is affected by vehicle mobility. Therefore, it is necessary to consider the mobility of vehicles to design a cooperative caching scheme, in which the predicted popular contents can be cached among RSUs to optimize the content transmission delay. In contrast to some conventional decision algorithms, deep reinforcement learning (DRL) is a favorable tool to construct the decision-making framework and optimize the cooperative caching for the contents in a complex vehicular environment \cite{Zhu2021}. Therefore, we shall employ DRL to determine the optimal cooperative caching to reduce the content transmission delay of vehicles.
In this paper, we consider the vehicle mobility and propose a cooperative Caching scheme in VEC based on Asynchronous Federated and deep Reinforcement learning (CAFR). The main contributions of this paper are summarized as follows.
\begin{itemize}
\item[1)] By considering the mobility characteristics of vehicles including the positions and velocities, we propose an asynchronous FL algorithm to improve the accuracy of the global model.
\item[2)] We propose an algorithm to predict the popular contents based on the global model, where each vehicle adopts an autoencoder (AE) to predict the contents it is interested in, while the local RSU collects the interested contents of all vehicles within its coverage area to determine the popular contents.
\item[3)] We design a DRL framework based on the dueling deep Q-network (DQN) to model the cooperative caching problem, where the state, action and reward function are defined. Then the local RSU can determine the optimal cooperative caching to minimize the content transmission delay based on the dueling DQN algorithm (a minimal sketch of the dueling aggregation is given after this list).
\end{itemize}
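The dueling DQN used in contribution 3) relies on the standard dueling aggregation $Q(s,a)=V(s)+A(s,a)-\frac{1}{|\mathcal{A}|}\sum_{a^{\prime }}A(s,a^{\prime })$. The following minimal Python sketch illustrates only this aggregation step; the layer sizes, state encoding and random weights are illustrative placeholders and do not correspond to the configuration used in this paper.
\begin{verbatim}
# Minimal sketch of the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
# All sizes and weights below are illustrative placeholders, not the settings
# used in this paper.
import numpy as np

rng = np.random.default_rng(0)
state_dim, hidden_dim, n_actions = 8, 16, 4           # hypothetical dimensions

W1 = rng.normal(size=(hidden_dim, state_dim)) * 0.1   # shared feature layer
Wv = rng.normal(size=(1, hidden_dim)) * 0.1           # value head V(s)
Wa = rng.normal(size=(n_actions, hidden_dim)) * 0.1   # advantage head A(s, a)

def q_values(state):
    h = np.maximum(W1 @ state, 0.0)      # ReLU features of the cache state
    v = Wv @ h                           # scalar state value
    a = Wa @ h                           # per-action advantages
    return v + a - a.mean()              # dueling aggregation

state = rng.normal(size=state_dim)       # e.g. an encoded caching state
q = q_values(state)
print(q, int(np.argmax(q)))              # Q-values and the greedy caching action
\end{verbatim}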
The rest of the paper is organized as follows. Section \ref{sec2} reviews the related works on content caching in VNs. Section \ref{sec3} briefly describes the system model. Section \ref{sec5} proposes a mobility-aware cooperative caching scheme in VEC based on the asynchronous federated and deep reinforcement learning method. We present some simulation results in Section \ref{sec6} and conclude the paper in Section \ref{sec7}.
\section{Related Work}
\label{sec2}
In this section, we first review the existing works related to content caching in vehicular networks (VNs), and then survey the current state of the art of cooperative content caching schemes in VEC.
In \cite{YDai2020}, Dai \textit{et al.} proposed a distributed content caching framework with empowering blockchain to achieve security and protect privacy, and considered the mobility of vehicles to design an intelligent content caching scheme based on DRL framework.
In \cite{Yu2021}, Yu \textit{et al.} proposed a mobility-aware proactive edge caching scheme in VNs that allows multiple vehicles with private data to collaboratively train a global model for predicting content popularity, in order to meet the requirements for computationally intensive and latency-sensitive vehicular applications.
In \cite{JZhao2021}, Zhao \textit{et al.} optimized the edge caching and computation management for service caching, and adopted Lyapunov optimization to deal with the dynamic and unpredictable challenges in VNs.
In \cite{SJiang2020}, Jiang \textit{et al.} constructed a two-tier secure access control structure for providing content caching in VNs with the assistance of edge devices, and proposed the group signature-based scheme for the purpose of anonymous authentication.
In \cite{CTang2021}, Tang \textit{et al.} proposed a new optimization method to reduce the average response time of caching in VNs, and then adopted Lyapunov optimization technology to constrain the long-term energy consumption to guarantee the stability of response time.
In \cite{YDai2022}, Dai \textit{et al.} proposed a VN with digital twin to cache contents for adaptive network management and policy arrangement, and designed an offloading scheme based on the DRL framework to minimize the total offloading delay.
However, the above content caching schemes in VNs did not take into account the cooperative caching in the VEC environment.
There are some works considering cooperative content caching schemes in VEC.
In \cite{GQiao2020}, Qiao \textit{et al.} proposed a cooperative edge caching scheme in VEC and constructed the double time-scale markov decision process to minimize the content access cost, and employed the deep deterministic policy gradient (DDPG) method to solve the long-term mixed-integer linear programming problems.
In \cite{JChen2020}, Chen \textit{et al.} proposed a cooperative edge caching scheme in VEC which considered the location-based contents and the popular contents, while designing an optimal scheme for cooperative content placement based on an ant colony algorithm to minimize the total transmission delay and cost.
In \cite{LYao2022}, Yao \textit{et al.} designed a cooperative edge caching scheme with consistent hash and mobility prediction in VEC to predict the path of each vehicle, and also proposed a cache replacement policy based on the content popularity to decide the priorities of collaborative contents.
In \cite{RWang2021}, Wang \textit{et al.} proposed a cooperative edge caching scheme in VEC based on the long short-term memory (LSTM) networks, which caches the predicted contents in RSUs or other vehicles and thus reduces the content transmission delay.
In \cite{DGupta2020}, Gupta \textit{et al.} proposed a cooperative caching scheme that jointly considers cache location, content popularity and predicted rating of contents to make caching decision based on the non-negative matrix factorization, where it employs a legitimate user authorization to ensure the secure delivery of cached contents.
In \cite{LYao2019}, Yao \textit{et al.} proposed a cooperative caching scheme based on the mobility prediction and drivers' social similarities in VEC, where the regularity of vehicles' movement behaviors are predicted based on the hidden markov model to improve the caching performance.
In \cite{RWu2022}, Wu \textit{et al.} proposed a hybrid service provisioning framework and cooperative caching scheme in VEC to solve the profit allocation problem among the content providers (CPs), and proposed an optimization model to improve the caching performance in managing the caching resources.
In \cite{LYao2017}, Yao \textit{et al.} proposed a cooperative caching scheme based on mobility prediction, where the popular contents may be cached in the mobile vehicles within the coverage area of hot spot. They also designed a cache replacement scheme according to the content popularity to solve the limited caching capacity problem for each edge cache device.
In \cite{KZhang2018}, Zhang \textit{et al.} proposed a cooperative edge caching architecture that focuses on the mobility-aware caching, where the vehicles cache the contents with base stations collaboratively. They also introduced a vehicle-aided edge caching scheme to improve the capability of edge caching.
In \cite{KLiu2016}, Liu \textit{et al.} designed a cooperative caching scheme that allows vehicles to search the unrequested contents. This scheme facilitates the content sharing among vehicles and improves the service performance.
In \cite{SWang2017}, Wang \textit{et al.} proposed a VEC caching scheme to reduce the total transmission delay. This scheme extends the capability of the data center from the core network to the edge nodes by cooperatively caching popular contents in different CPs. It minimizes the VUs' average delay according to an iterative ascending price method.
In \cite{MLiu2021}, Liu \textit{et al.} proposed a real-time caching scheme in which edge devices cooperate to improve the caching resource utilization. In addition, they adopted the DRL framework to optimize the problem of searching requests and utility models to guarantee the search efficiency.
In \cite{BKo2019}, Ko \textit{et al.} proposed an adaptive scheduling scheme consisting of the centralized scheduling mechanism, ad hoc scheduling mechanism and cluster management mechanism to exploit the ad hoc data sharing among different RSUs.
In \cite{JCui2020}, Cui \textit{et al.} proposed a privacy-preserving data downloading method in VEC, where the RSUs can find popular contents by analyzing encrypted requests of nearby vehicles to improve the downloading efficiency of the network.
In \cite{QLuo2020}, Luo \textit{et al.} designed a communication, computation and cooperative caching framework, where computing-enabled RSUs provide computation and bandwidth resource to the VUs to minimize the data processing cost in VEC.
As mentioned above, none of the existing works has simultaneously considered the vehicle mobility and the privacy of VUs when designing cooperative caching schemes in VEC, which motivates us to propose a mobility-aware cooperative caching scheme in VEC based on asynchronous FL and DRL.
\begin{figure}
\center
\includegraphics[scale=0.7]{1-eps-converted-to.pdf}
\caption{VEC scenario}
\label{fig1}
\end{figure}
\section{System Model}
\label{sec3}
\subsection{System Scenario}
As shown in Fig. \ref{fig1}, we consider a three-tier VEC in an urban scenario that consists of a local RSU, a neighboring RSU, an MBS attached to a cloud and some vehicles moving in the coverage area of the local RSU. The top tier is the MBS deployed at the center of the VEC, while the middle tier consists of the RSUs deployed in the coverage area of the MBS and placed on one side of the road. The bottom tier consists of the vehicles driving within the coverage area of the RSUs.
Each vehicle stores a large amount of VUs' historical data, i.e., local data. Each data entry is a vector reflecting different information of a VU, including the VU's personal information such as identity (ID) number, gender, age and postcode, the contents that the VU may request, as well as the VU's ratings for the contents, where a larger rating for a content indicates that the VU is more interested in the content. Particularly, the rating for a content may be $0$, which means that the content is not popular or has not been requested by the VU. Each vehicle randomly chooses a part of the local data to form a training set while the rest is used as a testing set. The time duration of vehicles within the coverage area of the MBS is divided into rounds. For each round, each vehicle randomly selects contents from its testing set as the requested contents, and sends the request information to the local RSU to fetch the contents at the beginning of the round. We consider that the MBS has abundant storage capacity and caches all available contents, while the limited storage capacity of each RSU can only accommodate part of the contents. Therefore, a vehicle fetches each requested content from the local RSU, the neighboring RSU or the MBS under different conditions. Specifically,
\subsubsection{Local RSU}If a requested content is cached in the local RSU, the local RSU sends back the requested content to the vehicle. In this case the vehicle fetches the content from the local RSU.
\subsubsection{Neighboring RSU}If a requested content is not cached in the local RSU, the local RSU transfers the request to the neighboring RSU, and the neighboring RSU sends the content to the local RSU if it caches the requested content. Afterward, the local RSU sends back the content to the vehicle. In this case the vehicle fetches the content from the neighboring RSU.
\subsubsection{MBS}If a content is neither cached in the local RSU nor the neighboring RSU, the vehicle sends the request to the MBS that directly sends back the requested content to the vehicle. In this case, the VU fetches the content from the MBS.
\subsection{Mobility Model of Vehicles}
The model assumes that all vehicles drive in the same direction and that vehicle arrivals at the local RSU follow a Poisson process with arrival rate $\lambda_{v}$. Once a vehicle enters the coverage of the local RSU, it sends request information to the local RSU. Each vehicle keeps the same mobility characteristics, including position and velocity, within a round and may change its mobility characteristics at the beginning of each round. The velocities of different vehicles are independent and identically distributed. The velocity of each vehicle is generated by a truncated Gaussian distribution, which is flexible and consistent with the real dynamic vehicular environment. For round $r$, the number of vehicles driving in the coverage area of the local RSU is $N^{r}$. The set of $N^{r}$ vehicles is denoted as $\mathbb{V}^{r}=\left\{V_{1}^{r}, V_{2}^{r},\ldots, V_{i}^{r}, \ldots, V_{N^{r}}^{r}\right\}$, where $V_{i}^{r}$ is vehicle $i$ driving in the coverage of the local RSU $(1 \leq i \leq N^{r})$. Let $\left\{U_{1}^{r}, U_{2}^{r}, \ldots, U_{i}^{r}, \ldots, U_{N^{r}}^{r}\right\}$ be the velocities of all vehicles driving in the coverage of the local RSU, where $U_{i}^{r}$ is the velocity of $V_{i}^{r}$. According to \cite{AlNagar2019}, the probability density function of $U_{i}^{r}$ is expressed as
\begin{equation}
f(U_{i}^{r}) = \begin{cases}
\dfrac{e^{-\frac{1}{2\sigma^{2}}\left(U_{i}^{r}-\mu\right)^{2}}}{\sqrt{2\pi\sigma^{2}}\left(\operatorname{erf}\left(\frac{U_{\max}-\mu}{\sigma\sqrt{2}}\right)-\operatorname{erf}\left(\frac{U_{\min}-\mu}{\sigma\sqrt{2}}\right)\right)}, & U_{\min} \le U_{i}^{r} \le U_{\max},\\[2ex]
0, & \text{otherwise},
\end{cases}
\label{eq1}
\end{equation}
where $U_{\max}$ and $U_{\min}$ are the maximum and minimum velocity thresholds of each vehicle, respectively, and $\operatorname{erf}\left(\frac{U_{i}^{r}-\mu}{\sigma \sqrt{2}}\right)$ is the Gauss error function of $U_{i}^{r}$ with mean $\mu$ and variance $\sigma^{2}$.
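For concreteness, a minimal Python sketch of how vehicle velocities could be drawn from the truncated Gaussian in Eq. \eqref{eq1} is given below; it relies on SciPy's \texttt{truncnorm}, and the default parameter values are placeholders taken from the simulation settings rather than part of the model definition.
\begin{verbatim}
# Sketch: sample vehicle velocities from the truncated Gaussian of Eq. (1).
from scipy.stats import truncnorm

def sample_velocities(n_vehicles, mu=55.0, sigma=2.5, u_min=50.0, u_max=60.0):
    # truncnorm expects the truncation bounds in standard-normal units.
    a, b = (u_min - mu) / sigma, (u_max - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=n_vehicles)

# Example: velocities (km/h) of the N^r vehicles in round r.
velocities = sample_velocities(n_vehicles=15)
\end{verbatim}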
\subsection{Communication Model}
The communication between the local RSU and the neighboring RSU adopts a wired link. Each vehicle keeps the same communication model during a round and may change its communication model across rounds. In round $r$, the channel gain of $V_{i}^{r}$ is modeled as \cite{3gpp}
\begin{equation}
\begin{aligned}
h_{i}^{r}(dis(x,V_{i}^{r}))=\alpha_{i}^{r}(dis(x,V_{i}^{r})) g_{i}^{r}(dis(x,V_{i}^{r})), \\
x=S,M,\\
\label{eq2}
\end{aligned}
\end{equation}
where $x=S$ means the local RSU and $x=M$ means the MBS, $dis(x,V_{i}^{r})$ is the distance between the local RSU$/$MBS and $V_{i}^{r}$, $\alpha_{i}^{r}(dis(x,V_{i}^{r}))$ is the path loss between the local RSU$/$MBS and $V_{i}^{r}$, and $g_{i}^{r}(dis(x,V_{i}^{r}))$ is the shadowing channel fading between the local RSU$/$MBS and $V_{i}^{r}$, which follows a Log-normal distribution.
Each RSU communicates with the vehicles in its coverage area through the vehicle-to-RSU (V2R) link, while the MBS communicates with vehicles through the vehicle-to-base-station (V2B) link. Since the distances between the local RSU$/$MBS and $V_{i}^{r}$ are different in different rounds, the V2R$/$V2B links suffer from different channel impairments, and thus transmit with different transmission rates in different rounds. The transmission rates under the V2R and V2B links are calculated as follows.
According to the Shannon theorem, the transmission rate between the local RSU and $V_{i}^{r}$ is calculated as \cite{Chenwu2020}
\begin{equation}
R_{R, i}^{r}=B\log _{2}\left(1+\frac{p_B h_{i}^{r}(dis(S,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq3}
\end{equation}where $B$ is the available bandwidth, $p_B$ is the transmit power level used by the local RSU and $\sigma_{c}^{2}$ is the noise power.
Similarly, the transmission rate between the MBS and $V_{i}^{r}$ is calculated as
\begin{equation}
R_{B, i}^{r}=B\log _{2}\left(1+\frac{p_{M} h_{i}^{r}(dis(M,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq4}
\end{equation}where $p_{M}$ is the transmit power level used by MBS.
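As an illustration, the following Python sketch computes the V2R and V2B transmission rates from Eqs. \eqref{eq3} and \eqref{eq4}. The channel gain value and the dBm-to-mW conversion helper are illustrative assumptions made for the example, not part of the system model.
\begin{verbatim}
import numpy as np

def dbm_to_mw(p_dbm):
    # Convert a power level from dBm to milliwatts.
    return 10.0 ** (p_dbm / 10.0)

def transmission_rate(bandwidth_hz, tx_power_mw, channel_gain, noise_power_mw):
    # Shannon rate R = B * log2(1 + p * h / sigma_c^2), cf. Eqs. (3)-(4).
    snr = tx_power_mw * channel_gain / noise_power_mw
    return bandwidth_hz * np.log2(1.0 + snr)

# Example with illustrative values: B = 540 kHz, p_B = 30 dBm,
# sigma_c^2 = -114 dBm, and a placeholder channel gain h_i^r(dis(S, V_i^r)).
h = 1e-9
R_R = transmission_rate(540e3, dbm_to_mw(30), h, dbm_to_mw(-114))
\end{verbatim}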
\begin{figure}
\center
\includegraphics[scale=0.75]{2-eps-converted-to.pdf}
\caption{Asynchronous FL}
\label{fig2}
\end{figure}
\section{Cooperative Caching Scheme}
\label{sec5}
In this section, we propose a cooperative caching scheme to optimize the content transmission delay in each round $r$. We first propose an asynchronous FL algorithm to protect VUs' information and obtain an accurate model. Then we propose an algorithm to predict the popular contents based on the obtained model. Finally, we present a DRL-based algorithm to determine the optimal cooperative caching according to the predicted popular contents. Next, we will introduce the asynchronous FL algorithm, the popular content prediction algorithm and the DRL-based algorithm, respectively.
\subsection{Asynchronous Federated Learning}
As shown in Fig. \ref{fig2}, the asynchronous FL algorithm consists of 5 steps as follows.
\subsubsection{Select Vehicles}
\
\newline
\indent
The main goal of this step is to select the vehicles whose staying time in the local RSU is long enough to ensure they can participate in the asynchronous FL and complete the training process.
Each vehicle first sends its mobility characteristics including its velocity and position (i.e., the distance to the local RSU and distance it has traversed within the coverage of the local RSU), then the local RSU selects vehicles according to the staying time that is calculated based on the vehicle's mobility characteristics. The staying time of $V_{i}^{r}$ in the local RSU is calculated as
\begin{equation}
T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r},
\label{eq5}
\end{equation}
where $L_s$ is the coverage range of the local RSU, $P_{i}^{r}$ is the distance that $V_{i}^{r}$ has traversed within the coverage of the local RSU.
The staying time of $V_{i}^{r}$ should be larger than the sum of the average training time $T_{training}$ and inference time $T_{inference}$ to guarantee that $V_{i}^{r}$ can complete the training process. Therefore, if $T_{r,i}^{staying}>T_{training}+T_{inference}$, the local RSU selects $V_{i}^{r}$ to participate in asynchronous FL training. Otherwise, $V_{i}^{r}$ is ignored.
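The selection rule can be summarized by the short Python sketch below; it is a sketch only, where the vehicle dictionary layout and the unit conventions (distances in metres, velocities in metres per second) are assumptions made for illustration.
\begin{verbatim}
def select_vehicles(vehicles, L_s, T_training, T_inference):
    # vehicles: list of dicts with keys 'id', 'P' (traversed distance, m)
    # and 'U' (velocity, m/s); returns the ids selected for FL training.
    selected = []
    for v in vehicles:
        staying_time = (L_s - v['P']) / v['U']   # Eq. (5)
        if staying_time > T_training + T_inference:
            selected.append(v['id'])
    return selected

# Example with illustrative values: L_s = 1000 m, T_training = 2 s,
# T_inference = 0.5 s.
vehicles = [{'id': 0, 'P': 100.0, 'U': 15.0},
            {'id': 1, 'P': 990.0, 'U': 16.0}]
print(select_vehicles(vehicles, L_s=1000.0, T_training=2.0, T_inference=0.5))
\end{verbatim}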
\subsubsection{Download Model}
\
\newline
\indent
In this step, the local RSU generates the global model $\omega^{r}$. For the first round, the local RSU initializes a global model based on the AE, which can extract the hidden features used for popular content prediction. In each subsequent round, the local RSU updates the global model and then transmits the global model $\omega^{r}$ to all the selected vehicles.
\subsubsection{Local Training}
\
\newline
\indent
In this step, each vehicle in the local RSU sets the downloaded global model $\omega^{r}$ as the initial local model and updates the local model iteratively through training. Afterward, the updated local model will be the feedback to the local RSU.
For each iteration $k$, $V_{i}^{r}$ randomly samples some training data $n_{i,k}^{r}$ from the training set. Then, it uses $n_{i,k}^{r}$ to train the local model based on the AE that consists of an encoder and a decoder. Let $W_{i,k}^{r,e}$ and $b_{i,k}^{r,e}$ be the weight matrix and bias vector of the encoder for iteration $k$, respectively, and let $W_{i,k}^{r,d}$ and $b_{i,k}^{r,d}$ be the weight matrix and bias vector of the decoder for iteration $k$, respectively. Thus the local model of $V_{i}^{r}$ for iteration $k$ is expressed as $\omega_{i,k}^r=\{W_{i,k}^{r,e}, b_{i,k}^{r,e}, W_{i,k}^{r,d}, b_{i,k}^{r,d}\}$. For each training data $x$ in $n_{i,k}^{r}$, the encoder first maps the original training data $x$ to a hidden layer to obtain the hidden feature of $x$, i.e., $z(x)=f\left(W_{i,k}^{r,e}x+b_{i,k}^{r,e}\right)$. Then the decoder calculates the reconstructed input $\hat{x}$, i.e., $\hat{x}=g\left(W_{i,k}^{r,d}z(x)+b_{i,k}^{r,d}\right)$, where $f{(\cdot)}$ and $g{(\cdot)}$ are nonlinear activation functions (e.g., the logistic sigmoid) \cite{Ng2011}. Afterward, the loss function of data $x$ under the local model $\omega_{i,k}^r$ is calculated as
\begin{equation}
l\left(\omega_{i,k}^r;x\right)=(x-\hat{x})^{2},
\label{eq6}
\end{equation}where $\omega^{r}_{i,1}=\omega^{r}$.
After the loss functions of all the data in $n_{i,k}^{r}$ are calculated, the local loss function for iteration $k$ is calculated as
\begin{equation}
f(\omega_{i,k}^r)=\frac{1}{\left| n_{i,k}^r\right|}\sum_{x\in n_{i,k}^r} l\left(\omega_{i,k}^r;x\right),
\label{eq7}
\end{equation}
where $\left| n_{i,k}^r\right|$ is the number of data in $n_{i,k}^r$.
Then the regularized local loss function is calculated to reduce the deviation between the local model $\omega_{i,k}^r$ and global model $\omega^{r}$ to improve the algorithm convergence, i.e.,
\begin{equation}
g\left(\omega_{i,k}^r\right)=f\left(\omega_{i,k}^r\right)+\frac{\rho}{2}\left\|\omega^{r}-\omega_{i,k}^r\right\|^{2},
\label{eq8}
\end{equation}
where $\rho$ is the regularization parameter.
Let $\nabla g(\omega_{i,k}^{r})$ be the gradient of $g\left(\omega_{i,k}^r\right)$, which is referred to as the local gradient. In the previous round, some vehicles may upload the updated local model unsuccessfully due to the delayed training time, and thus adversely affect the convergence of global model \cite{Chen2020}\cite{Xie2019}\cite{-S2021}. Here, these vehicles are called stragglers and the local gradient of a straggler in the previous round is referred to as the delayed local gradient. To solve this problem, the delayed local gradient will be aggregated into the local gradient of the current round $r$. Thus, the aggregated local gradient can be calculated as
\begin{equation}
\nabla \zeta_{i,k}^{r}=\nabla g(\omega_{i,k}^{r})+\beta \nabla g_{i}^{d},
\label{eq9}
\end{equation}
where $\beta$ is the decay coefficient and $\nabla g_{i}^{d}$ is the delayed local gradient. Note that $\nabla g_{i}^{d}=0$ if $V_{i}^{r}$ uploads successfully in the previous round.
Then the local model for the next iteration is updated as
\begin{equation}
\omega^{r}_{i,k+1}=\omega^{r}-\eta_{l}^{r}\nabla \zeta_{i,k}^{r},
\label{eq10}
\end{equation}where $\eta_{l}^{r}$ is the local learning rate in round $r$, which is calculated as
\begin{equation}
\eta_{l}^{r}=\eta_{l} \max \{1, \log (r)\},
\label{eq11}
\end{equation} where $\eta_{l}$ is the initial value of local learning rate.
Then iteration $k$ is finished and $V_{i}^{r}$ randomly samples some training data again to start the next iteration. When the number of iterations reaches the threshold $e$, $V_{i}^{r}$ completes the local training and uploads the updated local model $\omega_{i}^{r}$ to the local RSU.
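The local update in Eqs. \eqref{eq8}--\eqref{eq11} can be sketched as follows, assuming the model parameters are flattened into a single NumPy vector and that a hypothetical helper \texttt{grad\_fn} returns the gradient of the local loss in Eq. \eqref{eq7}; this is a sketch of the update rule only, not the full AE training loop.
\begin{verbatim}
import numpy as np

def local_update(w_global, w_local, grad_fn, delayed_grad, eta_l, r,
                 beta=0.001, rho=1e-4):
    # Gradient of the regularized local loss in Eq. (8):
    # grad f(w) + rho * (w - w_global).
    grad_g = grad_fn(w_local) + rho * (w_local - w_global)
    # Aggregate the delayed local gradient of a straggler, Eq. (9).
    grad_agg = grad_g + beta * delayed_grad
    # Round-dependent local learning rate, Eq. (11).
    eta_r = eta_l * max(1.0, np.log(r))
    # Local model update, Eq. (10) (the update starts from the global model).
    return w_global - eta_r * grad_agg
\end{verbatim}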
\subsubsection{Upload Model}
\
\newline
\indent
Each vehicle uploads its updated local model to the local RSU after it completes local training.
\subsubsection{Asynchronous aggregation}
\
\newline
\indent
If the local model of $V_{i}^{r}$, i.e., $\omega^{r}_{i}$, is the first model received by the local RSU, the upload is successful and the local RSU updates the global model. Otherwise, the local RSU drops $\omega^{r}_{i}$ and thus the upload is not successful.
When the upload is successful, the local RSU updates the global model $\omega^{r}$ by weighted averaging as follows:
\begin{algorithm}
\caption{The Asynchronous Federated Learning Algorithm}
\label{al1}
Set global model $\omega^{r}$;\\
\For{each round $r$ from $1$ to $R^{max}$}
{
\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$ \textbf{in parallel}}
{
$T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r}$;\\
\If{ $T_{r,i}^{staying}>T_{training}+T_{inference}$}
{
$V^{r}_i$ is selected to participate in asynchronous FL training;
}
}
\For{each selected vehicle $ V^{r}_{i}$}
{
$\omega^{r}_{i} \leftarrow \textbf{Vehicle Update}(\omega^r,i)$;\\
Upload the local model $\omega^{r}_{i}$ to the local RSU;\\
}
Receive the updated model $\omega^{r}_{i}$;\\
Calculate the weight of the asynchronous aggregation $\chi_{i}$ based on Eq. \eqref{eq14};\\
Update the global model based on Eq. \eqref{eq12};\\
\Return $\omega^{r+1}$
}
\textbf{Vehicle Update}($w,i$):\\
\textbf{Input:} $w^r$ \\
Calculate the local learning rate $\eta_{l}^{r}$ based on Eq. \eqref{eq11};\\
\For{each local epoch k from $1$ to $e$}
{
Randomly sample some data $n_{i,k}^r$ from the training set;\\
\For{each data $x \in n_{i,k}^r$ }
{
Calculate the loss function of data $x$ based on Eq. \eqref{eq6};\\
}
Calculate the local loss function for iteration $k$ based on Eq. \eqref{eq7};\\
Calculate the regularized local loss function $g\left(\omega_{i,k}^r\right)$ based on Eq. \eqref{eq8};\\
Aggregate local gradient $\nabla \zeta_{i,k}^{r}$ based on Eq. \eqref{eq9};\\
Update the local model $\omega^{r}_{i,k}$ based on Eq. \eqref{eq10};\\
}
Set $\omega^{r}_{i}=\omega^{r}_{i,e}$;\\
\Return$\omega^{r}_{i}$
\end{algorithm}
\begin{equation}
\omega^{r}=\omega^{r-1}+\frac{d_{i}^r}{d^r} \chi_{i} \omega^{r}_{i},
\label{eq12}
\end{equation}where $d_{i}^r$ is the size of local data in $V_i^r$, $d^r$ is the total local data size of the selected vehicles and $\chi_{i}$ is the weight of the asynchronous aggregation for $V_{i}^{r}$.
The weight of the asynchronous aggregation $\chi_{i}$ is calculated by considering the remaining distance of $V_{i}^{r}$ within the coverage area of the local RSU and the content transmission delay from the local RSU to $V_{i}^{r}$, in order to improve the accuracy of the global model and reduce the content transmission delay. Specifically, if the remaining distance of $V_{i}^{r}$, i.e., $L_{s}-P_{i}^{r}$, is large, it may have a long available time to participate in the training, thus its local model should be given a large weight in the aggregation to improve the accuracy of the global model. In addition, the content transmission delay from the local RSU to $V_{i}^{r}$ is important because $V_{i}^{r}$ finally downloads the content from the local RSU when the content is cached in either the local or the neighboring RSU. Thus, if the content transmission delay from the local RSU to $V_{i}^{r}$ is small, its local model should also be given a large weight in the aggregation to reduce the content transmission delay. The weight of the asynchronous aggregation $\chi_{i}$ is calculated as
\begin{equation}
\chi_{i}=\mu_{1} {(L_{s}-P_{i}^{r})}+\mu_{2} \frac{s}{R_{R, i}^{r}},
\label{eq13}
\end{equation}where $\mu_{1}$ and $\mu_{2}$ are coefficients of the position weight and transmission weight, respectively (i.e., $\mu_{1}+\mu_{2}=1$), $s$ is the size of each content. Thus, the content transmission delay from local RSU to $V_{i}^{r}$ is affected by the transmission rate between the local RSU and $V_{i}^{r}$, i.e., $R_{R, i}^{r}$. We can further calculate $\chi_{i}$ based on the normalized $L_{s}-P_{i}^{r}$ and $R_{R, i}^{r}$, i.e.,
\begin{equation}
\chi_{i}=\mu_{1} \frac{(L_{s}-P_{i}^{r})}{L_{s}}+\mu_{2} \frac{R_{R, i}^{r}}{\max _{k \in N^{r}}\left(R_{R, k}^{r}\right)}.
\label{eq14}
\end{equation}
Since the local RSU knows $dis(S,V_{i}^{r})$ and $P_{i}^{r}$ for each vehicle $i$ at the beginning of the asynchronous FL, the local RSU can calculate $R_{R, i}^{r}$ according to Eqs. \eqref{eq2} and \eqref{eq3}, and further calculate $\chi_{i}$ according to Eq. \eqref{eq14}.
Up to now, the asynchronous FL in round $r$ is finished and the updated global model $\omega^{r}$ is obtained. The process of the asynchronous FL algorithm is shown in Algorithm \ref{al1} for ease of understanding, where $R^{max}$ is the maximum number of rounds, $e$ is the maximum number of local epochs. Then, the local RSU sends the obtained model to each vehicle to predict popular contents.
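A minimal sketch of the aggregation step in Eqs. \eqref{eq12} and \eqref{eq14} is given below; variable names are illustrative and the model parameters are assumed to be flattened into a single NumPy vector.
\begin{verbatim}
import numpy as np

def aggregation_weight(L_s, P_i, R_i, R_all, mu1=0.5, mu2=0.5):
    # Eq. (14): weight from the normalized remaining distance and
    # the normalized transmission rate.
    return mu1 * (L_s - P_i) / L_s + mu2 * R_i / np.max(R_all)

def async_aggregate(w_global_prev, w_local, d_i, d_total, chi_i):
    # Eq. (12): weighted update of the global model with one received
    # local model.
    return w_global_prev + (d_i / d_total) * chi_i * w_local
\end{verbatim}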
\subsection{Popular Content Prediction}
\begin{figure*}
\center
\includegraphics[scale=0.6]{3-eps-converted-to.pdf}
\caption{Popular content prediction process}
\label{fig3}
\end{figure*}
In this subsection, we propose an algorithm to predict the popular contents. As shown in Fig. \ref{fig3}, the popular content prediction algorithm consists of the 4 steps as follows.
\subsubsection{Data Preprocessing}
\
\newline
\indent
The VU's rating for a content is $0$ when the VU is uninterested in the content or has not requested the content. Thus, it is difficult to determine whether a content is interesting to a VU when its rating is $0$, and marking all contents with rating $0$ as uninterested contents would bias the prediction. Therefore, in the first step we adopt the obtained model to reconstruct the rating for each content, which is described as follows.
Each vehicle abstracts a rating matrix from the data in the testing set, where the first dimension of the matrix is VUs' ID and the second dimension is VU's ratings for all contents. Denote the rating matrix of $V_{i}^r$ as $\boldsymbol{R}_{i}^r$. Then, the AE with the obtained model is adopted to reconstruct $\boldsymbol{R}_{i}^r$. The rating matrix $\boldsymbol{R}_{i}^r$ is used as the input data for the AE that outputs the reconstructed rating matrix $\hat{\boldsymbol{R}}_{i}^r$. Since $\hat{\boldsymbol{R}}_{i}^r$ is reconstructed based on the obtained model which reflects the hidden features of data, $\hat{\boldsymbol{R}}_{i}^r$ can be used to approximate the rating matrix $\boldsymbol{R}_{i}^r$.
Then, similar to the rating matrix, each vehicle also abstracts a personal information matrix from the data of the testing set, where the first dimension of the matrix is VUs' ID and the second dimension is VU's personal information.
\subsubsection{Cosine Similarity}
\
\newline
\indent
$V_{i}^r$ counts the number of nonzero ratings for each VU in $\boldsymbol{R}_{i}^r$ and marks the $1/m$ fraction of VUs with the largest numbers as active VUs. Then, each vehicle combines $\hat{\boldsymbol{R}}_{i}^r$ and the personal information matrix (the combined matrix is denoted as $\boldsymbol{H}_{i}^r$) to calculate the similarity between each active VU and the other VUs. The similarity between an active VU $a$ and a VU $b$ is calculated according to the cosine similarity \cite{yuet2018}
\begin{equation}
\begin{aligned}
\operatorname{sim}_{a,b}^{r,i}=\cos \left(\boldsymbol{H}_{i}^r(a,:), \boldsymbol{H}_{i}^r(b,:)\right)\\
=\frac{\boldsymbol{H}_{i}^r(a,:) \cdot \boldsymbol{H}_{i}^r(b,:)^T}{\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2} \times\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}}
\label{eq15}
\end{aligned}
\end{equation}where $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$ are the vectors corresponding to the active VU $a$ and $b$ in the combined matrixes, respectively, $\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2}$ and $\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}$ are the 2-norm of $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$, respectively. Then for each active VU $a$, $V_{i}^r$ selects the VUs with the $K$ largest similarities as the $K$ neighboring VUs of VU $a$. The ratings of the $K$ neighboring VUs also reflect the preferences of VU $a$ to a certain extent.
\subsubsection{Interested Contents}
\
\newline
\indent
After determining the neighboring VUs of the active VUs, the vectors of the neighboring VUs of each active VU are abstracted from $\boldsymbol{R}_{i}^r$ to construct a matrix $\boldsymbol{H}_K$, where the first dimension of $\boldsymbol{H}_K$ is the IDs of the neighboring VUs of the active VUs, while the second dimension of $\boldsymbol{H}_K$ is the ratings of the contents from the neighboring VUs. In $\boldsymbol{H}_K$, a content with a nonzero rating from a VU is regarded as an interested content of that VU. Then, for each content, the number of VUs interested in it is counted, and this count is referred to as the content popularity of the content. $V_{i}^r$ selects the contents with the $F_c$ largest content popularity as the predicted interested contents.
\subsubsection{Popular Contents}
\
\newline
\indent
After the vehicles in the coverage of the local RSU upload their predicted interested contents, the local RSU collects and compares the predicted interested contents uploaded from all vehicles to select the contents with the $F_{c}$ largest content popularity as the popular contents. The proposed popular content prediction algorithm is illustrated in Algorithm \ref{al2}, where $\mathbb{C}^{r}$ is the set of the popular contents and $\mathbb{C}_{i}^r$ is the set of interested contents of $V^{r}_i$.
\begin{algorithm}
\caption{The Popular Content Prediction Algorithm}
\label{al2}
\textbf{Input: $\omega^{r}$}\\
\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$}
{
Construct the rating matrix $\boldsymbol{R}_{i}^r$ and personal information matrix;\\
$\hat{\boldsymbol{ R}}_{i}^r \leftarrow AE(\omega^{r},\boldsymbol{R}_{i}^r)$;\\
Combine $\hat{\boldsymbol{ R}}_{i}^r$ and information matrix as $\boldsymbol{H}_{i}^r$;\\
$\mathbb{C}_{i}^r \leftarrow \textbf{Vehicle Predicts}(\boldsymbol{H}_{i}^r,i)$;\\
Upload $\mathbb{C}_{i}^r$ to the local RSU;\\
}
\textbf{Compare} received contents and select the $F_c$ most interested contents into $\mathbb{C}^{r}$.\\
\Return $\mathbb{C}^{r}$\\
\textbf{Vehicle Predicts}$(\boldsymbol{H}_{i}^r, i)$:\\
\textbf{Input: $\boldsymbol{H}_{i}^r, i\in \{1,2,\ldots,N^r\}$}\\
Calculate the similarity between $V_{i}^r$ and other vehicles based on Eq. \eqref{eq15};\\
Select the first $K$ vehicles with the largest similarity as neighboring vehicles of $V_{i}^r$;\\
Construct reconstructed rating matrixes of $K$ neighboring vehicles as $\boldsymbol{H}_K$;\\
Select the $F_c$ most interested contents as $\mathbb{C}_{i}^r$;\\
\Return $\mathbb{C}_{i}^r$
\end{algorithm}
The cache capacity of each RSU, denoted as $c$, i.e., the largest number of contents that each RSU can accommodate, is usually smaller than $F_{c}$.
Next, we will propose a cooperative caching scheme to determine where the predicted popular contents should be cached.
\subsection{Cooperative Caching Based on DRL}
We consider that each RSU has powerful computation capability so that the cooperative caching decision can be determined within a short time. The main goal is to find an optimal cooperative caching policy based on DRL to minimize the content transmission delay. Next, we will formulate the DRL framework and then introduce the DRL algorithm.
\subsubsection{DRL Framework}
\
\newline
\indent
The DRL framework includes state, action and reward. The training process is divided into slots. For the current slot $t$, the local RSU observes the current state $s(t)$ and decides the current action $a(t)$ based on $s(t)$ according to a policy $\pi$, which is used to generate the action based on the state at each slot. Then the local RSU can obtain the current reward $r(t)$ and observes the next state $s(t+1)$ that is transited from the current state $s(t)$. We will design $s(t)$, $a(t)$ and $r(t)$, respectively, for this DRL framework.
\paragraph{State}
\
\newline
\indent
We consider the contents cached by the local RSU as the current state $s(t)$. In order to focus on the contents with high popularity, the contents of the state space $s(t)$ are sorted in descending order based on the predicted content popularity of the $F_c$ popular contents, thus the current state can be expressed as $s(t)=\left(s_{1}, s_{2}, \ldots, s_{c}\right)$, where $s_{i}$ is the $i$th most popular content.
\paragraph{Action}
\
\newline
\indent
Action $a(t)$ represents whether the contents cached in the local RSU need to be relocated or not. Among the $F_c$ predicted popular contents, the contents that are not cached in the local RSU form a set $\mathbb{N}$. If $a(t)=1$, the local RSU randomly selects $n$ $(n<c)$ contents from $\mathbb{N}$ and exchanges them with the $n$ least popular contents cached in the local RSU, and then sorts the contents in descending order based on their content popularity to get $s(t+1)$. The neighboring RSU then randomly samples $c$ contents from the $F_c$ popular contents that do not belong to $s(t+1)$ as its cached contents within the next slot $t+1$. We denote the contents cached by the neighboring RSU as $s_n(t+1)$.
If $a(t)=0$, the contents cached in the local RSU will not be relocated and the neighboring RSU also determines its cached contents, similar to the case when $a(t)=1$.
\paragraph{Reward}
\
\newline
\indent
The reward function $r(t)$ is designed to minimize the total content transmission delay to fetch the contents requested by vehicles. Note that the local RSU has recorded all the contents requested by the vehicles. The content transmission delays to fetch a requested content $f$ are different when the content is cached in different places.
If content $f$ is cached in the local RSU, i.e., $f\in s(t)$, the local RSU transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is calculated as
\begin{equation}
d_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}},
\label{eq16}
\end{equation}where $R_{R, i}^{r}$ is the transmission rate between the local RSU and $V_{i}^{r}$, which has been calculated by Eq. \eqref{eq3}.
If content $f$ is cached in the neighboring RSU, i.e., $f\in s_n(t)$, the neighboring RSU sends the content to the local RSU that forwards the content to $V_{i}^{r}$, thus the transmission delay is calculated as
\begin{equation}
\bar{d}_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}}+\frac{s}{R_{R-R}},
\label{eq17}
\end{equation}where $R_{R-R}$ is the transmission rate between the local RSU and neighboring RSU, which is a constant transmission rate in the wired link.
If content $f$ is neither cached in the local RSU nor in the neighboring RSU, i.e., $f \notin s(t) \text{ and } f \notin s_n(t)$, the MBS transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is expressed as
\begin{equation}
d_{B, i,f}^{r}=\frac{s}{R_{B, i}^{r}},
\label{eq18}
\end{equation}where $R_{B, i}^{r}$ is the transmission rate between the MBS and $V_{i}^{r}$, which is calculated according to Eq. \eqref{eq4}.
In order to clearly distinguish the content transmission delays under different conditions, we set the reward that $V_{i}^r$ fetches content $f$ at slot $t$ as
\begin{equation}
r_{i,f}^r(t)=\begin{cases}
e^{-\lambda_{1} d_{R,i,f}^{r}}& f\in s(t)\\
e^{-\left(\lambda_{1} d_{R, i, f}^{r}+\lambda_{2} \bar d_{R, i, f}^{r}\right)}&f \in s_n(t) \\
e^{-\lambda_{3} d_{B, i, f}^{r}}&f \notin s(t) \text{ and } f \notin s_n(t)
\end{cases},
\label{eq19}
\end{equation}
where $\lambda_{1}+\lambda_{2}+\lambda_{3}=1$ and $\lambda_{1}<\lambda_{2}\ll \lambda_{3}$.
Thus the reward function $r(t)$ is calculated as
\begin{equation}
r(t)=\sum_{i=1}^{N^r}\sum_{f=1}^{F_{i}^r} r_{i,f}^r(t),
\label{eq20}
\end{equation}where $F_{i}^r$ is the number of requested contents from $V_{i}^r$.
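A sketch of the reward computation in Eqs. \eqref{eq16}--\eqref{eq20} is given below; the data layout of the per-vehicle requests is an assumption made purely for illustration.
\begin{verbatim}
import numpy as np

def content_reward(f, s_local, s_neighbor, s_size, R_R, R_B, R_RR,
                   lam1=0.0001, lam2=0.4, lam3=0.5999):
    # Per-content reward, Eq. (19), using the delays of Eqs. (16)-(18).
    if f in s_local:                        # fetched from the local RSU
        return np.exp(-lam1 * s_size / R_R)
    if f in s_neighbor:                     # fetched via the neighboring RSU
        d_local = s_size / R_R
        d_neigh = s_size / R_R + s_size / R_RR
        return np.exp(-(lam1 * d_local + lam2 * d_neigh))
    return np.exp(-lam3 * s_size / R_B)     # fetched from the MBS

def total_reward(requests, s_local, s_neighbor, s_size, R_RR):
    # Eq. (20): sum over all vehicles' requested contents; `requests` maps
    # each vehicle to a tuple (requested contents, R_R_i, R_B_i).
    return sum(content_reward(f, s_local, s_neighbor, s_size, rr, rb, R_RR)
               for (contents, rr, rb) in requests.values()
               for f in contents)
\end{verbatim}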
\subsubsection{DRL Algorithm}
\
\newline
\indent
As mentioned above, the next state changes only when the action is $1$. The dueling DQN algorithm is particularly suited for cases where some actions have no effect on subsequent states \cite{Wangarxiv2016}. Specifically, the dueling DQN decomposes the Q-value into two functions $V$ and $A$. Function $V$ is the state-value function that is unrelated to the action, while $A$ is the action advantage function that is related to the action. Therefore, we adopt the dueling DQN algorithm to solve this problem.
\begin{algorithm}
\caption{Cooperative Caching Based on Dueling DQN Algorithm}
\label{al3}
Initialize replay buffer $\mathcal{D}$, the parameters of the prediction network $\theta$, the parameters of the target network $\theta'$;\\
\textbf{Input:} requested contents from all vehicles in the local RSU for round $r$\\
\For{episode from $1$ to $T_s$}
{
Local RSU randomly caches $c$ contents from $F_c$ popular contents;\\
Neighboring RSU randomly caches $c$ contents from $F_c$ popular contents that are not cached in the local RSU;\\
\For{slot from $1$ to $N_s$}
{
Observe the state $s(t);$\\
Calculate the Q-value of prediction network $Q(s(t), a; \theta)$ based on Eq. \eqref{eq21};\\
Calculate the action $a(t)$ based on Eq. \eqref{eq22};\\
Obtain state $s(t+1)$ after executing action $a(t)$;\\
Obtain reward $r(t)$ based on Eqs. \eqref{eq16} - \eqref{eq20};\\
Store tuple $(s(t),a(t),r(t),s(t+1))$ in $\mathcal{D}$;\\
\If{number of tuples in $\mathcal{D}$ is larger than $I$}
{
Randomly sample a minibatch of $I$ tuples from $\mathcal{D}$;\\
\For{tuple $i$ from $1$ to $I$}
{
Calculate the Q-value function of the target network $Q'(s'^i, a; \theta')$ based on Eq. \eqref{eq23};\\
Calculate the target Q-value of the target network $y^i$ based on Eq. \eqref{eq24};\\
Calculate the loss function $L(\theta)$ based on Eq. \eqref{eq25};\\
}
Calculate the gradient of loss function $\nabla_{\theta} L(\theta)$ based on Eq. \eqref{eq26};\\
Update parameters of the prediction network $\theta$ based on Eq. \eqref{eq27};\\
}
\If{number of slots is $M$}
{$\theta'=\theta$.\\}
}
}
\end{algorithm}
The dueling DQN includes a prediction network, a target network and a replay buffer. The prediction network evaluates the current state-action value (Q-value) function, while the target network generates the optimal Q-value function. Each of them consists of three layers, i.e., the feature layer, the state-value layer and the advantage layer. The replay buffer $\mathcal{D}$ is adopted to store the transitions of each slot. The dueling DQN algorithm is illustrated in Algorithm \ref{al3} and is described in detail as follows.
\begin{figure*}
\center
\includegraphics[scale=0.27]{4-eps-converted-to.pdf}
\caption{The flow diagram of the dueling DQN}
\label{fig4}
\end{figure*}
Firstly, the parameters of the prediction network $\theta$ and the parameters of the target network $\theta'$ are initialized randomly. The requested contents from all vehicles in the local RSU in round $r$ are taken as input (lines 1-2).
Then the algorithm is executed for $T_s$ episodes. At the beginning of each episode, the local RSU randomly selects $c$ contents from the $F_c$ popular contents, and the neighboring RSU randomly selects $c$ contents from the $F_c$ popular contents that are not cached in the local RSU. Then the algorithm is executed iteratively from slot $1$ to slot $N_s$. In each slot $t$, the local RSU first observes state $s(t)$ and then inputs $s(t)$ into the prediction network, where it goes through the feature layer, the state-value layer and the advantage layer, respectively. In the end, the prediction network outputs the state value function $V(s(t) ; \theta)$ and the action advantage function under each action $a$, i.e., $A(s(t), a ; \theta)$, where ${a \in\{0,1\}}$. Furthermore, the Q-value function of the prediction network under each action $a$ is calculated as
\begin{equation}
Q(s(t), a; \theta)=V(s(t) ; \theta)+\left\{ A(s(t), a ; \theta)-\mathbb{E}[A(s(t), a ; \theta)] \right\},
\label{eq21}
\end{equation}
In Eq. \eqref{eq21}, the range of Q-values can be narrowed to remove redundant degrees of freedom by calculating the difference between the action advantage function $A(s(t), a ; \theta)$ and the average value of the action advantage functions under all actions, i.e., $\mathbb{E}[A(s(t), a ; \theta)]$. Thus, the stability of the algorithm can be improved.
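The dueling combination in Eq. \eqref{eq21} amounts to the short sketch below, where the state-value and advantage outputs of the network are taken as given numbers.
\begin{verbatim}
import numpy as np

def dueling_q(v_value, advantages):
    # Eq. (21): Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    advantages = np.asarray(advantages, dtype=float)
    return v_value + (advantages - advantages.mean())

# Example: Q-values for the two actions a in {0, 1} and the greedy choice.
q = dueling_q(1.2, [0.3, -0.1])
action = int(np.argmax(q))
\end{verbatim}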
Then action $a(t)$ is chosen by the $\varepsilon \text {-greedy}$ method, which is calculated as
\begin{equation}
a(t)=\underset{a \in\{0,1\}}{\operatorname{argmax}}(Q(s(t), a;\theta))
\label{eq22}.
\end{equation}
Particularly, action $a(1)$ is initialized as $1$ at slot $1$.
The local RSU calculates the reward $r(t)$ according to Eqs. \eqref{eq16} - \eqref{eq20} and state $s(t)$ transits to the next state $s(t+1)$, which the local RSU then observes. Next, the neighboring RSU randomly samples $c$ popular contents that are not cached in $s(t+1)$ as its cached contents, which are denoted as $s_n(t+1)$. The transition from $s(t)$ to $s(t+1)$ is denoted as the tuple $(s(t),a(t),r(t),s(t+1))$, which is then stored in the replay buffer $\mathcal{D}$. When the number of stored tuples in the replay buffer $\mathcal{D}$ is larger than $I$, the local RSU randomly samples $I$ tuples from $\mathcal{D}$ to form a minibatch. Let $(s^i,a^i,r^i,s'^i), (i=1,2,\ldots,I)$ be the $i$-th tuple in the minibatch. Then the local RSU inputs each tuple into the prediction network and the target network (lines 3-12).
Next, we will introduce how the parameters of the prediction network $\theta$ are updated. For tuple $i$, the local RSU inputs $s'^i$ into the target network, where it goes through the feature layer and outputs its feature. Then the feature is input to the state-value layer and the advantage layer, respectively, which output the state value function $V'(s'^i ; \theta')$ and the action advantage function $A'(s'^i, a; \theta')$ under each action $a \in \{0,1\}$, respectively. Thus, the Q-value function of the target network for tuple $i$ under each action $a$ is calculated as
\begin{equation}
Q'(s'^i, a; \theta')=V'(s'^i ; \theta')+\left\{ A'(s'^i, a ; \theta')-\mathbb{E}\left[A'\left(s'^i, a ; \theta'\right)\right] \right\},
\label{eq23}
\end{equation}
Then the target Q-value of the target network of tuple $i$ is calculated as
\begin{equation}
y^i=r^i+\gamma_{D} \max _{a\in\{0,1\} } Q'(s'^i, a; \theta'),
\label{eq24}
\end{equation}where $\gamma_{D}$ is the discount factor. The loss function is calculated as follows
\begin{equation}
L(\theta)=\frac{1}{I} \sum_{i=1}^{I}\left[(y^i-Q(s^i, a^i, \theta))^{2}\right].
\label{eq25}
\end{equation}
The gradient of loss function $\nabla_{\theta} L(\theta)$ for all sampled tuples is calculated as
\begin{equation}
\nabla_{\theta} L(\theta)=\frac{1}{I} \sum_{i=1}^{I} \left[\left(y^i-Q(s^i, a^i, \theta)\right) \nabla_{\theta} Q(s^i, a^i, \theta)\right].
\label{eq26}
\end{equation}
At the end of slot $t$, the parameters of the prediction network $\theta$ are updated as
\begin{equation}
\theta \leftarrow \theta-\eta_{\theta} \nabla_{\theta} L(\theta),
\label{eq27}
\end{equation}where $\eta_{\theta}$ is the learning rate of prediction network.
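The minibatch update in Eqs. \eqref{eq24} and \eqref{eq25} can be sketched at the level of Q-values as follows; \texttt{target\_q} is a hypothetical callable returning the target-network Q-values of a state, and the parameter update of Eq. \eqref{eq27} would in practice be delegated to the optimizer of the chosen deep-learning framework.
\begin{verbatim}
import numpy as np

def dqn_targets(rewards, next_states, target_q, gamma_d=0.99):
    # Eq. (24): y^i = r^i + gamma_D * max_a Q'(s'^i, a; theta').
    return np.array([r + gamma_d * np.max(target_q(s_next))
                     for r, s_next in zip(rewards, next_states)])

def dqn_loss(y, q_sa):
    # Eq. (25): mean squared error between the targets and the
    # predicted Q(s^i, a^i; theta) of the sampled actions.
    return np.mean((np.asarray(y) - np.asarray(q_sa)) ** 2)
\end{verbatim}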
Up to now, the iteration in slot $t$ is completed, and it is repeated in the following slots. During the iterations, the parameters of the target network $\theta'$ are replaced by the parameters of the prediction network $\theta$ every $M$ slots. When the number of slots reaches $N_s$, the episode is finished and the local RSU randomly caches $c$ contents from the $F_c$ popular contents to start the next episode. When the number of episodes reaches $T_s$, the algorithm terminates (lines 13-22). The flow diagram of the dueling DQN algorithm is shown in Fig. \ref{fig4}.
Finally, the local RSU and neighboring RSU cache popular contents according to the optimal cooperative caching, and then each vehicle fetches contents from the VEC. This round is finished after each vehicle has fetched contents and then the next round is started.
\section{Simulation and Analytical Results}
\label{sec6}
\begin{table}
\caption{Values of the parameters in the experiments.}
\label{tab2}
\footnotesize
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Parameters of System Model}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$B$ & $540$ kHz & $K$ &$10$\\
\hline
$m$ &$3$ & $p_B$ & $30$ dBm\\
\hline
$p_M$ & $43$ dBm & $R_{R-R}$ & $15$ Mbps \\
\hline
$s$ &$100$ bytes & $T_{training}$ & $2$s\\
\hline
$T_{inference}$ & $0.5$s & $U_{\max}$ &$60$ km/h\\
\hline
$U_{\min }$ &$50$ km/h & $\mu$ &$55$ km/h\\
\hline
$\sigma$ &$2.5$ km/h & $\sigma_{c}^{2}$ & $-114$ dBm\\
\hline
\multicolumn{4}{|c|}{Parameters of Asynchronous FL}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$L_s$ &$1000$m & $\beta$ & $0.001$\\
\hline
$\eta_{l}$ &$0.01$ & $\mu_{1}$ &$0.5$ \\
\hline
$\mu_{2}$ &$0.5$ & $\rho$ &$0.0001$\\
\hline
\multicolumn{4}{|c|}{Parameters of DRL}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$I$ &$32$ & $\gamma_{D}$ & $0.99$\\
\hline
$\eta_{\theta}$ &$0.01$ & $\lambda_{1}$ & $0.0001$\\
\hline
$\lambda_{2}$ & $0.4$ & $\lambda_{3}$ & $0.5999$\\
\hline
\end{tabular}
\end{table}
In this section, we evaluate the performance of the proposed CAFR scheme.
\subsection{Settings and Dataset}
We simulate a VEC environment on an urban road as shown in Fig. \ref{fig1}, and the simulation tool is Python $3.8$. The communications between a vehicle and the RSU/MBS employ the 3rd Generation Partnership Project (3GPP) cellular V2X (C-V2X) architecture, where the parameters are set according to the 3GPP standard \cite{3gpp}. The simulation parameters are listed in Table \ref{tab2}. A real-world dataset from the MovieLens website, i.e., MovieLens 1M, is used in the experiments. MovieLens 1M contains $1,000,209$ rating values for $3,883$ movies from $6,040$ anonymous VUs with movie ratings ranging from $0$ to $1$, where each VU rates at least $20$ movies \cite{Harper2016}. MovieLens 1M also provides personal information about the VUs, including ID number, gender, age and postcode. We randomly divide the MovieLens 1M dataset among the vehicles as their local data. Each vehicle randomly chooses $99.8\%$ of its local data as its training set and the remaining $0.2\%$ as its testing set. For each round, each vehicle randomly samples a part of the movies from its testing set as its requested contents.
\subsection{Performance Evaluation}
We use the cache hit ratio and the content transmission delay as performance metrics to evaluate the CAFR scheme. The cache hit ratio is defined as the probability of fetching requested contents from the local RSU \cite{Muller2017}. If a requested content is cached in the local RSU, it can be fetched directly from the local RSU, which is referred to as a cache hit; otherwise, it is referred to as a cache miss. Thus, the cache hit ratio is calculated as
\begin{equation}
\text {cache hit ratio}=\frac{\text {cache hits}}{\text {cache hits}+\text {cache misses}}\times 100\%.
\label{eq28}
\end{equation}
The content transmission delay indicates the average delay for all vehicles to fetch contents, which is calculated as
\begin{equation}
\text {content transmission delay}=\frac{D^{\text {total}}}{\text {the number of vehicles }},
\label{eq29}
\end{equation}
where $D^{\text {total}}$ is the delay for all vehicles to fetch contents, and it is calculated by aggregating the content transmission delay for every vehicle to fetch contents.
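The two metrics in Eqs. \eqref{eq28} and \eqref{eq29} reduce to the following one-liners (a trivial sketch, included only to make the definitions concrete).
\begin{verbatim}
def cache_hit_ratio(hits, misses):
    # Eq. (28), in percent.
    return hits / (hits + misses) * 100.0

def avg_content_transmission_delay(total_delay, num_vehicles):
    # Eq. (29): average delay over all vehicles fetching their contents.
    return total_delay / num_vehicles
\end{verbatim}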
We compare the CAFR scheme with the following baseline schemes:
\begin{itemize}
\item Random: Randomly selecting $c$ contents from all contents to cache in the local and neighboring RSUs.
\item c-$\epsilon$-greedy: Selecting the $c$ contents with the largest numbers of requests with probability $1-\epsilon$ and selecting $c$ contents randomly with probability $\epsilon$ to cache in the local RSU. In our simulation, $\epsilon= 0.1$.
\item Thompson sampling: For each round, the contents cached in the local RSU are updated based on the numbers of cache hits and cache misses in the previous round \cite{Cui2020}, and the $c$ contents with the highest values are selected to cache in the local RSU.
\item FedAVG: Federated averaging (FedAVG) is a typical synchronous FL scheme where the local RSU needs to wait for all the local model updates before updating its global model according to the weighted average method:
\begin{equation}
\omega^{r}=\sum_{i=1}^{N^r} \frac {d^r_i}{d^r} \omega^{r}_{i}.
\label{eq30}
\end{equation}
\item CAFR without DRL: Compared with the CAFR scheme, this scheme does not adopt the DRL algorithm to optimize the caching decision. Specifically, after predicting the popular contents, $c$ contents are randomly selected from the predicted popular contents to cache in the local RSU and the neighboring RSU, respectively.
\end{itemize}
\begin{figure}
\center
\includegraphics[scale=0.5]{method_ce_vs_cs-eps-converted-to.pdf}
\caption{Cache hit ratio under different cache capacities}
\label{fig5}
\end{figure}
Now, we will evaluate the performance of the CAFR scheme through simulation experiments. In the following performance evaluation, each result is the average value of five experiments.
Fig. \ref{fig5} shows the cache hit ratio of different schemes under different cache capacities of each RSU, where the result of CAFR is obtained when the vehicle density is $15$ vehicles/km (i.e., the number of vehicles is 15 per kilometer), and the results of the other schemes are independent of the vehicle density. It can be seen that the cache hit ratio of all schemes increases with a larger cache capacity. This is because the local RSU caches more contents with a larger capacity, thus the requested contents of vehicles are more likely to be fetched from the local RSU. Moreover, it is seen that the random scheme provides the worst cache hit ratio, because this scheme just selects contents randomly without considering the content popularity. In addition, CAFR and c-$\epsilon$-greedy outperform the random scheme and Thompson sampling. This is because the random and Thompson sampling schemes do not predict the caching contents through learning, whereas CAFR and c-$\epsilon$-greedy decide the caching contents by observing the historical requested contents. Furthermore, CAFR outperforms c-$\epsilon$-greedy. This is because CAFR captures useful hidden features from the data to predict the popular contents accurately.
\begin{figure}
\center
\includegraphics[scale=0.5]{method_rd_vs_cs-eps-converted-to.pdf}
\caption{Content transmission delay under different cache capacities}
\label{fig6}
\end{figure}
Fig. \ref{fig6} shows the content transmission delay of different schemes under different cache capacities of each RSU, where the vehicle density is $15$ vehicles/km. It is seen that the content transmission delays of all schemes decrease as the cache capacity increases. This is because each RSU caches more contents as the cache capacity increases, and each vehicle fetches contents from the local RSU and the neighboring RSU with a higher possibility, thus reducing the content transmission delay. Moreover, the content transmission delay of CAFR is smaller than those of the other schemes. This is because the cache hit ratio of CAFR is better than those of the other schemes, and more vehicles can fetch contents from the local RSU directly, thus reducing the content transmission delay.
\begin{figure}
\center
\includegraphics[scale=0.5]{vs_vd-eps-converted-to.pdf}
\caption{Cache hit ratio and content transmission delay under different vehicle densities}
\label{fig7}
\end{figure}
Fig. \ref{fig7} shows the cache hit ratio and the content transmission delay of the CAFR scheme under different vehicle densities when the cache capacity of each RSU is $100$. As shown in this figure, the cache hit ratio increases as the vehicle density increases. This is because when more vehicles enter the coverage area of the RSU, the global model of the local RSU is trained based on more data, and thus can predict the popular contents more accurately. In addition, the content transmission delay decreases as the vehicle density increases. This is because the cache hit ratio increases when the vehicle density increases, which enables more vehicles to fetch contents directly from the local RSU.
\begin{figure}
\center
\includegraphics[scale=0.5]{asy_syn_ce-eps-converted-to.pdf}
\caption{Cache hit ratio of CAFR and FedAVG}
\label{fig8}
\end{figure}
Fig. \ref{fig8} compares the cache hit ratio of the CAFR scheme and the FedAVG scheme under different rounds when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents. It can be seen that the cache hit ratio of CAFR fluctuates between $22.5\%$ and $24\%$ within $30$ rounds, while the cache hit ratio of FedAVG fluctuates between $22\%$ and $23.5\%$ within $30$ rounds. This indicates that the CAFR scheme is slightly better than the FedAVG scheme. This is because the CAFR scheme considers the vehicles' mobility characteristics, including their positions and velocities, to select vehicles and aggregate the local models, thus improving the accuracy of the global model.
\begin{figure}
\center
\includegraphics[scale=0.5]{asy_syn_tt-eps-converted-to.pdf}
\caption{Training time of CAFR and FedAVG}
\label{fig9}
\end{figure}
Fig. \ref{fig9} shows the training time of the CAFR and FedAVG schemes for each round when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents. It can be seen that the training time of the CAFR scheme for each round is between $1$s and $2$s, while the training time of the FedAVG scheme for each round is between $22$s and $24$s. This indicates that the CAFR scheme has a much smaller training time than the FedAVG scheme. This is because the FedAVG scheme needs to aggregate all vehicles' local models to update the global model in each round, while the CAFR scheme aggregates as soon as a vehicle's local model is received in each round.
\begin{figure}
\center
\includegraphics[scale=0.5]{ce_rd_episode-eps-converted-to.pdf}
\caption{Cache hit ratio and content transmission delay of each episode in the DRL}
\label{fig10}
\end{figure}
Fig. \ref{fig10} shows the cache hit ratio and the content transmission delay of each episode in the DRL of the CAFR scheme when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$. As the episode index increases, the cache hit ratio gradually increases and the content transmission delay gradually decreases in the first ten episodes. This is because the local RSU and the neighboring RSU gradually cache appropriate popular contents in the first ten episodes. In addition, it is seen that the cache hit ratio and the content transmission delay converge at around episode $10$. This is because the local RSU is able to learn the policy to perform optimal cooperative caching within around $10$ episodes.
\begin{figure}
\center
\includegraphics[scale=0.5]{rl_vs_ce_cs-eps-converted-to.pdf}
\caption{Cache hit ratio of CAFR and CAFR without DRL under different cache capacities}
\label{fig11}
\end{figure}
Fig. \ref{fig11} compares the cache hit ratio of the CAFR scheme with that of the CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km. As shown in Fig. \ref{fig11}, the cache hit ratio of CAFR outperforms that of CAFR without DRL. This is because DRL can determine the optimal cooperative caching according to the predicted popular contents, and thus more suitable popular contents can be cached in the local RSU.
\begin{figure}
\center
\includegraphics[scale=0.5]{rl_vs_rd_cs-eps-converted-to.pdf}
\caption{Content transmission delay of CAFR and CAFR without DRL under different cache capacities}
\label{fig12}
\end{figure}
Fig. \ref{fig12} compares the content transmission delay of the CAFR scheme with that of the CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km.
As shown in Fig. \ref{fig12}, the content transmission delay of CAFR is less than that of CAFR without DRL. This is because the cache hit ratio of CAFR outperforms that of CAFR without DRL, so more vehicles can fetch contents from the local RSU directly.
\section{Conclusions}
\label{sec7}
In this paper, we considered vehicle mobility and proposed a cooperative caching scheme, CAFR, to reduce the content transmission delay and improve the cache hit ratio. We first proposed an asynchronous FL algorithm to obtain an accurate global model, and then proposed an algorithm to predict the popular contents based on the global model. Afterwards, we proposed a cooperative caching scheme to minimize the content transmission delay based on the dueling DQN algorithm. Simulation results have demonstrated that the CAFR scheme outperforms other baseline caching schemes. According to the theoretical analysis and simulation results, the conclusions can be summarized as follows:
\begin{itemize}
\item The CAFR scheme can learn from the local data of vehicles to capture useful hidden features and accurately predict the popular contents.
\item CAFR greatly reduces the training time for each round by aggregating the local model of a single vehicle in each round. In addition, CAFR considers vehicles' mobility characteristics including the positions and velocities to select vehicles and aggregate the local model, which can improve the accuracy of the training model.
\item The DRL in the CAFR scheme determines the optimal cooperative caching policy according to the predicted popular contents, and thus more suitable popular contents are cached in the local RSU and neighboring RSU to reduce the content transmission delay.
\end{itemize}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
\section{Introduction}\label{sec:intro}
New knowledge in physics is driven by the observation of phenomena, the design of experiments to probe these phenomena, and the communication of and debate around the resulting measurements in public fora. Laboratory courses in physics are thus unique spaces where students can engage in these central aspects of studying physical systems. Greater emphasis on these aspects in laboratory spaces is needed to accurately represent the physics discipline and to engage students in the universal scientific endeavor that is driven by observation, measurement, and communication.
Recently, national calls have been made to design laboratory instruction such that it emphasizes students' engagement in experimental scientific practices rather than simply re-enforcing content learning \cite{kozminski2014aapt,holmes2017value}. Such experiences would be better aligned with discovery-based learning \cite{olson2012engage}, which is more representative of the enterprise of experimental physics. This focus on science practices is articulated in the American Association of Physics Teachers' {\it Recommendations for the Undergraduate Physics Laboratory Curriculum} \cite{kozminski2014aapt}. These recommendations call for all laboratories in undergraduate physics to better represent experimental physics by constructing laboratory curriculum around science practices such as designing experiments, analyzing and visualizing data, and communicating physics. Arguably, middle-division and advanced laboratory courses for physics and astronomy majors -- with their more complex experiments and equipment as well as their focus on the professional development of future physicists -- tend to engage students with these practices.
By contrast, introductory physics laboratory courses tend to have more prescriptive and direct approaches to instruction. In these courses, students often follow a well-documented procedure and do not typically have opportunities to explore the observed phenomenon and the associated experimental work. At larger universities in the United States, these introductory laboratory courses are taught to thousands of students per semester, which makes these more direct approaches to instruction attractive as they are quite efficient. At many US schools, engineering students, physical science majors, and biological science students must pass these laboratory courses to complete their degree program. The scale of these course offerings provides an additional challenge to incorporating science practices. There are unique examples in the literature where students of introductory physics are engaged with scientific practices, such as the Investigative Science Learning Environment (ISLE) \cite{etkina2001investigative} and Studio Physics \cite{wilson1994cuple}. However, these courses have the advantage of being taught to a smaller population of students than most introductory laboratory courses, in the case of ISLE, or having an integrated ``lecture'' and a modified instructional space, in the case of Studio Physics, and thus can make use of greater instructional resources.
In this paper, we describe a stand-alone, introductory physics laboratory course sequence for biological science majors at Michigan State University (MSU) that was designed specifically to engage students in scientific practices through the work of experimental physics. Students learn to design experiments, analyze and visualize their data, and communicate their results to their peers and instructors. Design, Analysis, Tools, and Apprenticeship (DATA) Lab is unique in that it was explicitly designed with the AAPT Lab Recommendations in mind. The sequence consists of a stand-alone mechanics laboratory (DL1) and a separate E\&M and optics laboratory (DL2), which together are taught to more than 2000 students per year. Furthermore, the process of developing and launching this pair of courses required that we confront and overcome several well-documented challenges such as
departmental norms for the course, expectations of content coverage, and the lack of instructor time \cite{dancy2008barriers}.
\begin{table*}[th]
\caption{Finalized learning goals for DATA Lab}\label{tab:lgs}
\begin{tabular}{l|p{4in}}
\hline
\hline
Learning Goal & Description \\
\hline
LG1 - Experimental Process& Planning and executing an experiment to effectively explore how different parameters of a physical system interact with each other. Generally taking the form of model evaluation or determination.\\
LG2 - Data Analysis & Knowing how to turn raw data into an interpretable result (through plots, calculations, error analysis, comparison to an expectation, etc.) that can be connected to the bigger physics concepts.\\
LG3 - Collaboration & Working effectively as a group. Communicating your ideas and understanding. Coming to a consensus and making decisions as a group.\\
LG4 - Communication & Communicating understanding -- of the physics, the experimental process, the results -- in a variety of authentic ways -- to your peers, in a lab notebook, in a presentation or proposal. \\
\hline
\end{tabular}
\end{table*}
We begin this paper by describing how the learning goals for the lab sequence were constructed through a consensus-driven process (Sec.~\ref{LGSection}). In Sec.~\ref{structures}, we provide an overview of the course structure -- diving deeper into the details of the course materials later (Sec.~\ref{OverviewSection}). We describe the assessments for this course in Sec.~\ref{assessments}, as they are somewhat non-traditional for a course of this level and scale. To make our discussion concrete, we highlight a particular example in Sec.~\ref{ExpOverSection}. Finally, we offer a measure of efficacy using student responses to the Colorado Learning Attitudes about Science Survey for Experimental Physics \cite{zwickl2014epistemology} (Sec.~\ref{efficacy}) and some concluding remarks (Sec.~\ref{conclusions}).
\section{Learning Goals}\label{LGSection}
As this laboratory course serves the largest population of students enrolled in introductory physics at MSU, it was critical to develop a transformed course that reflected faculty voice in the design. While physics faculty are not often steeped in formal aspects of curriculum development, sustained efforts to transform physics courses engage faculty in the process in order to develop a consensus design \cite{chasteen2011thoughtful,chasteen2012transforming,wieman2017improving}. In this process, interested faculty are invited to participate in discussions around curriculum design, but experts in curriculum and instruction synthesize those discussions to develop course structures, materials, and pedagogy. These efforts are then reflected back to faculty in order to iterate on the process. Our design process followed the approach developed by the University of Colorado's Science Education Initiative \cite{chasteen2011thoughtful,chasteen2012transforming,wieman2017improving}. In this process, faculty are engaged in broad discussions about learning goals, the evidence needed to show that the expected learning has occurred, and the teaching practices and course activities that provide evidence that students are meeting these goals. Below, we discuss the approach to developing learning goals for the course and present the finalized set of learning goals from which the course was designed. We refer readers to \citet{wieman2017improving} for a comprehensive discussion of how to set about transforming courses at this scale.
Prior to engaging in curriculum and pedagogical design, an interview protocol was developed to talk with faculty about what they wanted students to get out of this laboratory course once students had completed the two semester sequence. The interview focused discussion on what made an introductory laboratory course in physics important for these students and what role it should play as a distinct course since, at MSU, students do not need to enroll in the laboratory course at the same time as the associated lecture course. A wide variety of faculty members were interviewed including those who had previously taught the course, those who had taught other physics laboratory courses, and those who conduct experimental research. In total, 15 interviews were conducted with faculty. This number represents more than half of the total number of experimental faculty who teach at MSU.
The discussion of faculty learning goals was wide-ranging and covered a variety of important aspects of laboratory work including many of the aspects highlighted in the AAPT Laboratory Guidelines \cite{kozminski2014aapt}. Interviews were coded for general themes of faculty goals and the initial list included: developing skepticism in their own work, in science, and the media; understanding that measurements have uncertainty; developing agency over their own learning; communicating their results to a wider variety of audiences; learning how to use multiple sources of information to develop their understanding; demonstrating the ability to use and understand equipment; documenting their work effectively; and becoming reflective of their own experimental work.
With the intent of reconciling the faculty's expressed goals with the AAPT Lab Guidelines, the goals were synthesized under larger headings, which aimed to combine and connect seemingly disconnected goals. In addition, through a series of informational meetings that roughly 10-12 faculty attended regularly, the ways in which these goals were being combined and connected were reflected back to interested faculty. Additional critiques and refinements of these goals were collected through notes taken during these meetings. Through several revisions, a set of four broad goals was finalized that faculty agreed reflected their views on the purpose of this part of the laboratory sequence. These goals are also represented in the AAPT Lab Guidelines. The finalized goals are listed in Table \ref{tab:lgs} along with a short description of each; they are enumerated (LG{\it X}) in order to refer to them in later sections.
\begin{figure*}
\includegraphics[clip, trim=15 100 15 100, width=0.8\linewidth]{DATALabWeekly.png}
\caption{Week-by-week schedule of DATA Lab I \& II.\label{weekly}}
\end{figure*}
The learning goals formed the basis for the design of course structures including materials and pedagogy. To construct these course structures, constructive alignment \cite{biggs1996enhancing} was leveraged, which helped ensure that the designed materials and enacted pedagogy were aligned with the overall learning goals for the course. These structures are described in the next section where we have included a direct reference to each learning goal that a particular course structure is supporting.
\section{Course Structures}
\label{structures}
Each laboratory section consists of twenty students and two instructors -- one graduate teaching assistant (GTA) and one undergraduate learning assistant (ULA) \cite{otero2010physics}. The students are separated into five groups of four, which they remain in for 4 to 6 weeks -- 4 to 6 class meetings. This time frame works well because it gives students time to grow and improve as a group and as individuals within a consistent group. In addition, when the groups are switched, students must adapt to a new group of peers. The groups complete 6 (DL1) or 5 (DL2) experiments during the semester, most of them spanning two weeks -- two class meetings. Fig.~\ref{weekly} provides an overview of the two-semester sequence and will be unpacked further below. We indicate the laboratories that students complete with light green squares (introductory experiments) and dark green squares (two week labs). The students keep a written lab notebook, which they turn in to be graded at the end of each experiment.
\indent In this laboratory course, each group conducts a different experiment. This is possible because, in general, students tend to follow a similar path with respect to the learning goals and there is no set endpoint for any individual experiment. As long as students continue to work through the experimental process and complete analysis of their data, they are working towards the learning goals and can be evaluated using the aligned assessments (Sec.~\ref{assessments}). This approach also emphasizes that there is not one way to complete an experiment; this has added benefits for students' ownership and agency of the work as they must decide how to proceed through the experiment. In addition, having no set endpoint and two weeks to complete most experiments takes away the time pressure to reach a specific point in a given time. All of these aspects allow students to more fully engage with the work they are doing and, in turn, make progress toward the learning goals. Having each group conduct a different experiment addressed a significant point of discussion among the faculty; specifically, not covering the same breadth of content was a major concern. Although, through this design, students do not complete all of the experiments, they are introduced to all of the concepts through the peer evaluation of the communication projects (red squares in Fig.~\ref{weekly}, addressed in detail below).
\subsection{Laboratory Activities}
The laboratory activities were designed around the learning goals. As such, the experiments follow a similar path from the beginning of the experimental process through analysis, with communication and collaboration as central components throughout. The course structures in relation to each of the learning goals are highlighted below. The core components of the course sequence (i.e., the lab activities) are outlined in Fig.~\ref{snpsht}.\\
\textbf{LG1 - Experimental Process:} The students begin each experiment by broadly exploring the relevant parameters and their relationships. Typically, students investigate how changing one parameter affects another by making predictions and connecting their observations to physics ideas (qualitative exploration in Fig.~\ref{snpsht}). From these initial investigations, students work toward designing an experiment by determining what to measure, change, and keep the same. This often requires grounding decisions on some known model or an observed relationship (quantitative exploration, experimental design, and investigation in Fig.~\ref{snpsht}).\\
\textbf{LG2 - Data Analysis:} After additional formal investigations in which data has been collected, students summarize the raw data into an interpretable result. This typically includes some form of data analysis; for example, constructing a plot to evaluate a model or determining a quantitative relationship between the different variables in the data. In this work, the students are expected to make claims that are supported by their results. This often involves the students finding the slope and/or intercept in a plot and interpreting those results with respect to their expectations (discussion and analysis in Fig.~\ref{snpsht}).\\
\textbf{LG3 - Collaboration:} Throughout the experimental work and analysis, students discuss and make decisions with their peers in their lab group. Students are encouraged to develop a consensus approach to their work -- deciding collectively where to take their experiment and analysis. Furthermore, students are expected to make these decisions by grounding their discussions in their experiment, data, and analysis.\\
\textbf{LG4 - Communication:} Overall, the entire process requires that students communicate with their group and instructors. Additionally, students communicate their experimental approach and the results of their work including their analysis in their lab notebook. Later, students provide a more formal presentation of their work in the form of the communication projects.
\begin{figure}
\includegraphics[clip, trim=110 375 100 180, width=0.8\linewidth]{experimentoverview.png}
\caption{A snapshot of an experiment from pre-class homework through the communication project.\label{snpsht}}
\end{figure}
It should be emphasized that this process is not content dependent; each laboratory activity conducted by a student group follows this process. This generalization enables the core components of the course to be repeated (see Fig. \ref{weekly}), which helps address external constraints such as limited equipment and time to work on experiments.
\subsection{Communication Projects}
DATA Lab is also defined by its focus on authentic scientific communication through the communication projects (CPs). The CPs are a formal way for the students to present their work, and they are one of the course assessments that students complete individually. CPs replace the lab practical from the traditional version of the course, in which students would conduct a smaller portion of a laboratory by themselves. CPs occur in the middle and at the end of the semester (red squares in Fig.~\ref{weekly}). In DL1, the CP is a written proposal that summarizes the work the students conducted in one of their previous experiments and proposes an additional investigation. In DL2, the students create and present a research poster on one of (or a portion of one of) their experiments. In both courses, the projects are shared with and reviewed by their primary instructor and their peers in the class.
Through the CPs, students continue to engage with the faculty consensus learning goals (Sec.~\ref{LGSection}) as described below:\\
\textbf{LG1 - Experimental Process:} Students are expected to reflect on and summarize the process through which they went to complete the experiment. In so doing, they must communicate their rationale and reasoning for following that process.\\
\textbf{LG2 - Data Analysis:} The students must show that they can turn their raw data into an interpretable result. Again, this is often, and ideally, done in the form of a plot of their data that emphasizes a model, including a fit. Students also present and explain what the results mean in the context of the experiment and a physical model.\\
\textbf{LG3 - Collaboration:} While the experiment was completed with the student's group where they may have consulted with their group mates, the CPs themselves are not inherently collaborative. However, in DL1, the reviews that students perform on each other's projects are done collaboratively in their groups.\\
\textbf{LG4 - Communication:} The CPs are the formal communication of a student's experimental work. In both courses, a student's CP is reviewed by their peers and feedback is provided describing successes and shortcomings along with suggestions for improvements.
\subsection{Final Projects}
The course structure was designed with the intent to provide students with a variety of ways to engage in the experimental physics practices. The final projects are an additional form of communication including an analysis and interpretation of experimental results through critiquing other scientific results (DL1--Critique Project) and describing a new experimental design (DL2--Design Project).
\textit{Critique Project}: For the final project in DL1, students critique two sides of a popular science topic. In the prior week, students are arranged into new groups and before the class meeting, they must choose, as a group, from a list of possible topics such as climate change and alternative energy. In class, students collectively write up a summary and critique both sides of the scientific argument.
\textit{Design Project}: For the final project in DL2, students choose an experiment that was conducted previously and design a new experiment for a future semester of DATA Lab. Similarly to DL1, the students are sorted into new groups and they must decide, as a group, which experiment they will be working on before the class meeting. Due to the structure of the course, specifically everyone doing different experiments throughout the semester, this choice may be an experiment that individual members of the group did not complete; negotiating this decision is part of the process of the Design Project. In class, students construct two documents: (1) a document that explains the design of the new experiment and (2) a document that would aid a future DATA Lab instructor in teaching the experiment. Through this final project, DL2 students can design a project covering material that they may not have had the chance to explore during the course.
For both final projects, students turn in one assignment per group and they receive a single grade (as a group) for the assignment. Students also assess their own in-class participation, providing themselves a participation score (on a 4.0 scale) for the day. This score is submitted to their instructor along with their rationale for assigning themselves the grade.
These projects offer the final opportunity for DATA Lab students to engage with the faculty-consensus learning goals:\\
\textbf{LG1 - Experimental Process:} In DL1, students evaluate and summarize both sides of the chosen argument by reviewing the relevant data and experiments. Although students are not conducting an experiment, they are still asked to be critical of the experimental process in each side of the argument. In DL2, students must create a clear procedure for their proposed experiment. Here, they must consider the available equipment as well as how the data would be collected and why. \\
\textbf{LG2 - Data Analysis:} In DL1, the students must evaluate the evidence provided in each article. They must decide if there are obvious flaws in the way the analysis was conducted and if the analysis is compelling; that is, if the overall claims made in article align with the data and analysis. In DL2, students must consider the kind of analysis that would fit with their experiment and the data that they would collect. In addition, students are also expected to reflect on their analysis in light of the models that are available to explain the data they would collect. \\
\textbf{LG3 - Collaboration:} In both courses, students continue to work as a group and are graded accordingly. In addition, the students have been put into new groups, which they must adjust to.\\
\textbf{LG4 - Communication:} In both courses, students continue to communicate with their group as part of the collaboration. In DL1 specifically, the final project provides an opportunity for students to communicate their own evaluation and critique of a scientific argument. Students in DL2 are expected to communicate to different audiences, including future DATA Lab students and instructors, about their newly planned experiment.
\section{Overview of Key Supports}
\label{OverviewSection}
As the students' work in this course is quite open-ended, specific supports have been designed to ensure that students feel capable of conducting the lab activities. Since the CPs are the main assessments in the DATA Lab course sequence and make up a large portion of the overall course grade, the key supports are intended to give students the tools they need to succeed in the projects. Each of the supports designed for DATA Lab is discussed in detail below (Secs.~\ref{sec:labs} \& \ref{sec:sup}); assessments are discussed in Sec.~\ref{assessments}.
Broadly, the key supports for the students are outlined in Fig.~\ref{snpsht}. Before each class day, students complete a pre-class homework assignment (vertical green lines). Students also have three communication project homework (CPHW) assignments during the semester (vertical pink line) to help them complete their CPs. These supports, in addition to feedback on students' in-class participation and lab notebooks, apply for any of the regular two week experiments (green squares Fig.~\ref{weekly}). In the following section, these will be described in detail along with the additional supports that were designed for the courses.
\subsection{Typical Experiment}\label{sec:labs}
Each two-week experiment follows a similar path, highlighted in Fig.~\ref{snpsht} and described, in part, in Sec.~\ref{structures}. In this section, details of the general course components necessary to maintain the flexibility of the path students take through each experiment will be described.
\textbf{Pre-Class Homework:} At the beginning of an experiment, students are expected to complete the pre-class homework assignment which includes reading through the lab handout and investigating the suggested research. This assignment is usually 2-4 questions designed to have students prepare for the upcoming experiment. For example, before the first day of a new lab, students are asked what they learned during their pre-class research and if they have any questions or concerns about the lab handout. Between the first and second class meeting of the two-week experiment, students are expected to reflect on what they have already done and prepare for what they plan to do next. Typically, the 2-4 questions include reflections from the prior week, such as any issues their group ran into on the first day, and what they intend on doing during the second day of the experiment. Answers to the pre-class homework serve as additional information that the instructors can draw on during the class; knowing what questions and confusions that their students might have can help instructors be more responsive during class. Overall, the goal of the pre-class homework is for the students to come into class prepared to conduct their experiment and this assignment is used to hold them accountable for that preparation.
\textbf{In-class Participation}: With the overall intent of improving students' specific laboratory skills and practices that are outlined in the course learning goals (Sec.~\ref{LGSection}), students receive in-class participation grades and feedback after every lab section (green squares in Figs~\ref{weekly} \&~\ref{snpsht}) on their engagement with respect to these practices. As the lab handouts do not provide students with specific steps that they must take to complete the experiment, students are expected to make most of the decisions together as a group. Generally, students have control over how their investigation proceeds; however, this control varies between experiments (i.e. students choose how to set up the equipment, what to measure, how to take measurements, etc.). The in-class participation grades and feedback are where students are assessed most frequently and where they have the quickest turnaround to implement the changes. See Sec.~\ref{sec:assA} for the details of how in-class participation is assessed.
\textbf{Lab Notebooks}: For each experiment that the students engage in, they are expected to document their work in a lab notebook. In comparison to formal lab reports, lab notebooks are considered a more authentic approach to documenting experimental work. Furthermore, lab notebooks provide students with space to decide what is important and how to present it. The lab notebooks are the primary source that the students use to create their CPs. Like in-class participation, students receive lab notebook feedback much more regularly than CP feedback, so they have greater opportunity to reflect and make improvements. The specific details of the assessment of lab notebooks will be explained in Sec.~\ref{sec:assA}.
\textbf{CP Homeworks:} Three times during the semester the students complete CPHW assignments in addition to that week's pre-class homework. Each CPHW focuses on a relevant portion of the CPs (e.g., making a figure and a caption). Through the CPHWs, the aim is for students to develop experience with more of the CP components. In addition, students receive feedback on these different aspects (see Sec.~\ref{sec:assA}), which they can act upon before they have to complete their final CPs.
\textbf{Communication Projects:} Throughout each semester, the students complete two CPs, the first of which is a smaller portion of their overall course grade. This design feature is intended to create less pressure on students during their first CP by providing them with a second opportunity to complete a CP after receiving initial feedback. Students are expected to reflect on the process, their grade, and the feedback before they complete another CP. The CP assessment details will be discussed further in Sec.~\ref{sec:assB}.
\subsection{Additional Supports}\label{sec:sup}
Along with the support structures for the core components of the course sequence, additional supports have been designed to ease students into the more authentic features of DATA Lab such as designing experiments and documenting progress in lab notebooks. DL1 begins with three weeks of workshops (purple squares in Fig.~\ref{weekly}), followed by the introductory experiment (light green squares in Fig.~\ref{weekly}) that all of the students complete. DL2 begins with an introductory experiment as well, under the assumption that the students already went through DL1. The workshops and introductory experiments are designed to assist the students in navigating the different requirements and expectations of the overall course sequence, and of a typical experiment within each course. The additional support structures are described in detail below.
\textbf{DL1 Workshops:} The first workshop focuses on measurement and uncertainty with a push for the students to discuss and share their ideas (LG1,3). The students perform several different measurements -- length of a metal block, diameter of a bouncy ball, length of a string, mass of a weight, and the angle of a board. Each group discusses the potential uncertainty associated with one of the measurements. Then, students perform one additional measurement and assign uncertainty to it. The second workshop also focuses on uncertainty but in relation to data analysis and evaluating models (LG2,4) using the concept of a spring constant. Students collect the necessary measurements while addressing the associated uncertainty, and plot the measurements to analyze how the plot relates to the model of a spring. The final workshop focuses on proper documentation. The lab handouts do not contain their own procedure, so each student is expected to document the steps they take and their reasoning (LG4) in their lab notebook. In preparation for the third workshop, as a pre-class homework, students submit a procedure for making a peanut butter and jelly sandwich, which they discuss and evaluate in class. Students are then tasked with developing a procedure to determine the relationship between different parameters (the length of a spring and the mass added, the angle of a metal strip and the magnets placed on it, or the time for a ball to roll down a chute and the number of blocks under the chute). At the end of each workshop the students turn in their notebooks, just as they would at the end of any experiment.
\textbf{Introductory Experiments:} In DL1, the introductory experiment occurs after the three workshops. All students conduct a free-fall experiment where they must determine the acceleration due to gravity and the terminal velocity for a falling object. In DL2, the introductory experiment is the first activity in the course. This is because students will have already completed DL1 prior to taking DL2; rather than being slowly introduced to what DATA Lab focuses on, students can be reminded in a single experiment. The introductory experiment for DL2 involves Ohm's Law; students must determine the resistance of a given resistor.
As these are the first DATA Lab experiments for either course, the instructors take a more hands-on and guiding approach than they will later in the semester. In DL1, these instructional changes represent a dramatic shift from the guidance students had during the workshops where instructors are often quite involved. In DL2, the one week lab is intended to be simple enough that students can be reminded of the expectations with respect to the overall learning goals of the course.
\textbf{CP Prep Day:} As discussed in the prior section, the CPs comprise a large portion of the students' total grade in the course. In addition to the supports that were already mentioned -- in class grades, notebooks, CPHW, and a lower stakes CP1 -- in the spring semester, the MSU academic calendar offers time for a communication project prep day (pink squares in Fig.~\ref{weekly}). This gives the students an extra day where they have time to work on their CPs in class. They can take additional measurements, seek help from their group or instructor, or work on the project itself. This prep day allows for a gentler transition into the CPs with a bit more guidance. It also reduces the amount of work that the students have to do outside of class.
\section{In Course Assessments}
\label{assessments}
The DATA Lab activities described above were designed around the overall learning goals outlined in Sec.~\ref{LGSection}. As such, the course assessments were also aligned with these overall course goals. There are two types of assessments used in DATA Lab -- formative (to help the students improve upon their work) and summative (to evaluate the students' output); these are separated for clarity. In this section, the various assessment tools are discussed with respect to the overall learning goals of the course.
\subsection{Formative Assessments} \label{sec:assA}
In DATA Lab the formative assessments consist of students' work on their in-class activities, lab notebooks, and CPHWs. Other than the pre-class homework, which is graded on completion, there is a rubric for each activity for which students receive a score. Each is structured to ensure that any improvements students make carry over to their CPs.
\textbf{In-class Participation}: In-class participation feedback is broken into group feedback, which covers the general things everyone in the group or the group as a whole needs to work on, and individual feedback, which is specific to the student and not seen by other group members. The general structure of the feedback follows an evaluation rubric used in other introductory courses and focuses on something the students did well, something they need to work on, and advice on how to improve \cite{irving2017p3}. It is expected that students will work on the aspects mentioned in their prior week's feedback during the next week's class. Students are graded based on their response to that feedback. Any improvements they make with respect to the learning goals in class will also likely impact how well they complete their CPs.
Students' in-class participation is assessed with respect to two components, group function and experimental design. Specifically, group function covers their work in communication, collaboration, and discussion (LG3,4). For communication they are expected to contribute to and engage in group discussions. To do well in collaboration, students should come to class prepared and actively participate in the group’s activities. Discussion means working as a group to understand the results of their experiment. Experimental design evaluates the process that students take through the experiment and their engagement in experimental physics practices (LG1,2). They are expected to engage with and show competence in use of equipment, employ good experimental practices (i.e., work systematically, make predictions, record observations, and set goals) and take into account where uncertainty plays into the experimental process (i.e., reduce, record, and discuss it).
Specifically for the DL1 Workshops, instructors grade students differently than they would for a typical experiment. The emphasis for the workshops is on the group function aspect of the rubrics, communication and participation. This is because the students are being eased into the expectations that the instructors have around experimental work.
\textbf{Lab Notebooks}: Feedback and grades for lab notebooks are only provided after the experiment is completed (the two week block in Figs~\ref{weekly} \&~\ref{snpsht}). Students receive individual feedback on their notebook, although members of a group may receive feedback on some of the same things simply because they conducted the experiment together. Like for in-class participation, it is expected that the students will work on the aspects mentioned in their feedback for the next lab notebook and the instructor can remind them of these things in class during the experiment.
Lab notebooks are also graded on two components, experimental design and discussion. Experimental design focuses on the experimental process and how students communicate it (LG1,4). Here, instructors typically look for clearly recorded steps and results, and an intentional progression through the experiment. Discussion covers uncertainty in the measurements and the models, as well as the results, with respect to any plots and conclusions (LG2,4). These evaluation rubrics for the lab notebooks were designed to be aligned with those for the CPs, so that when students work toward improving their notebooks they are also making improvements that will benefit their CPs. For example, if a student is getting better at analyzing data and communicating their results within their notebooks, instructors should expect the same improvement to transfer to their CPs.\\
\indent For the DL1 Workshops, the lab notebooks are graded on the same components but the grades and feedback are specifically focused on the parts of the rubric that the students should have addressed in each of the previous workshops. For example, as documentation is emphasized in the last workshop, the students are not heavily penalized on poorly documented procedures in the first two workshops.\\
\textbf{CPHW}: The goal of the CPHWs is to have students think about creating a more complete CP that connects their in-class work to the bigger picture; for example, students are evaluated on the quality and relevance of their sources, including the background and real-life connections (LG2,4). Each CPHW has a different rubric because each one addresses a different aspect of the CPs.
\textit{Figure and caption}: The students create a figure with a robust caption based on the data from one of the labs they completed. Both the figure and the caption are evaluated on communication and uncertainty (LG2,4). For the plot, the students are expected to visualize the data clearly with error bars and it should provide insight into the various parameters within the experiment. For the caption, students need to discuss what is being plotted, make comparisons to the model including deviations, and draw conclusions that include uncertainty. \\
\textit{Abstract}: For a given experiment, students write a research abstract that covers the main sections of their project including introduction, methods, results, and conclusion. These are assessed on experimental process (motivation and clarity of the experiment) (LG1,4), and discussion (results and conclusions) (LG2,4).\\
\textit{Critique (DL1 only)}: Students are given an example proposal that they must read, critique, and grade. This assignment plays two roles. First, students must examine a proposal, which should help to produce their own. Second, students must critique the proposal, which should help them provide better critiques to their peers. Students' performance is evaluated based on their identification of the different components of a proposal, and the quality of the feedback they provide (LG4).\\
\textit{Background (DL2 only)}: Students are tasked with finding three out-of-class sources related to one of their optics experiments, which they must summarize and connect back to the experiment.
\subsection{Summative Assessment} \label{sec:assB}
The CPs form the sole summative assessment of student learning in DATA Lab. As described above, each of the formative assessments are designed to align with the goals of the CPs.
\textbf{CPs}: As mentioned above, although students conduct the experiments together, the CPs are completed individually. In DL1, students' CP is a proposal that emphasizes their prior work and discusses a proposed piece of future work. As a result, the CP rubric is divided into two sections, prior and future work. Within those sections, there is a focus on experimental design and discussion. This rubric was iterated on after piloting the course for two semesters: it was found that students would often neglect either their future work or their prior work when these were not directly addressed in the rubric, and the rubrics were reorganized to account for this. For students' discussion of prior work, experimental design covers the experimental methods and the uncertainty in measurements, models, and results (LG1,2,4). For future work, experimental design refers to the proposed experimental methodology and the reasoning behind their choices (LG1,4). For the students' discussion of prior work, the rubric emphasizes how they communicate their results (LG2,4). When students discuss their future work, the rubric emphasizes the novelty of the proposed experiment and the arguments made about the value of the project (LG1,4).
In DL2, students' CP is a poster that they present to their classmates for peer review. The rubric includes an additional component on the presentation itself, but the rubric still emphasizes the experimental design and discussion. Experimental design covers communication of the experimental process including students' reasoning and motivation. Discussion focuses on the discussion of uncertainty (i.e., in the measurements and models) and the discussion of results (i.e., in the plot and conclusions). The additional component focusing on presentation is divided into specifics about the poster (i.e., its structure, figures, layout) and the student's presentation of the project (i.e., clear flow of discussion, ability to answer questions).
\section{Example Experiment}
\label{ExpOverSection}
The course structures, supports, and assessments of DATA Lab have now been described in general terms. In this section, these features are grounded in a specific two-week experiment in order to better contextualize them. Additional experiments are listed in Tables \ref{DL1exp} and \ref{DL2exp} in the Appendix. The chosen experiment is from DL2 and is called ``Snell's Law: Rainbows''. In this experiment, students explore the index of refraction for different media and different wavelengths of light.
Before attending the first day of the laboratory activity, students are expected to conduct the pre-class homework assignment, including the recommended research in Fig.~\ref{snells1}. In addition, the homework questions for the first day of a new experiment address the pre-class research, as follows:
\begin{figure}
\fbox{
\parbox{0.85\columnwidth}{
{\bf Research Concepts}\\
\begin{flushleft}
To do this lab, it will help to do some research on the concepts underlying the bending of light at interfaces including:
\begin{itemize}[noitemsep,nolistsep]
\item Snell's Law (get more details than presented here)
\item Refraction and how it differs from reflection
\item Index of refraction of materials
\item Fiber optics
\item Using this simulation might be helpful: http://goo.gl/HEflDI
\item How to obtain estimates for fits in your data (e.g., the LINEST function in Excel - http://goo.gl/wiZH3p)
\end{itemize}
\end{flushleft}
}}
\caption{Pre-class research prompts for the Snell's Law lab.\label{snells1}}
\end{figure}
\begin{figure*}[t]
\fbox{
\parbox{1.8\columnwidth}{
{\bf Part 1 - Observing Light in Water}\\
\begin{flushleft}
At your table, you have a tank of water and a green laser. Turn on the green laser and point it at the water's surface.
\begin{itemize}[noitemsep,nolistsep]
\item What do you notice about the beam of light in the water?
\item What about the path the light takes from the source to the bottom of the tank?
\end{itemize}
Let's get a little quantitative with this set up. Can you measure the index of refraction of the water? You have a whiteboard marker, a ruler, and a protractor to help you. Don't worry about making many measurements, just see if you can get a rough estimate by taking a single measurement.
\begin{itemize}
\item What does your setup and procedure look like for this experiment?
\item What part(s) of your setup/procedure is(are) the main source of uncertainty for this
measurement?
\item Can you gain a sense of the uncertainty in this measurement?
\item How close is your predicted value to the ``true value" of the index of refraction of water?\\
\end{itemize}
\end{flushleft}
\vspace{11pt}
\noindent\rule{4in}{0.4pt}
\vspace{11pt}
\begin{flushleft}
On the optical rail you have a half circle shape of acrylic that is positioned on a rotating stage, with angular measurements. You also have a piece of paper with a grid attached to a black panel (i.e., a ``beam stop"). Using this setup, you will test Snell's Law for the green laser. Your group will need to decide how to set up your experiments and what measurements you will make. You should sketch the setup in your lab notebook and it would be good to be able to explain how your measurements relate to Snell's Law (i.e., how will the laser beam travel and be bent by the acrylic block?). In conducting this experiment, consider,
\begin{itemize}[noitemsep,nolistsep]
\item What measurements do you need to make?
\item What is the path of the laser beam and how does it correspond to measurements that you are making?
\item What is a good experimental procedure for testing Snell's Law?
\item What kind of plot is a useful one to convey how the model (Snell's Law) and your measurements match up?
\item Where is the greatest source of uncertainty in your experimental setup? What does that mean about the uncertainty in your measurements?
\end{itemize}
\end{flushleft}
}}
\caption{Snell's Law: Rainbows Lab Handout. \textit{Top}: Exploring refraction, first day of Snell's Law. \textit{Bottom}: Beginning model evaluation, main Snell's Law activity. \label{snells23}}
\end{figure*}
\begin{enumerate}
\item Describe something you found interesting in your pre-class research.
\item From reading your procedure, where do you think you may encounter challenges in this lab? What can you do to prepare for these?
\item Considering your assigned lab, is there anything specific about the lab handout that is unclear or confusing?
\end{enumerate}
The first day of the lab begins with exploring refraction in a water tank. Students are asked to qualitatively explore the index of refraction of the water using a simple setup (Fig.~\ref{snells23}). The exploration is fully student led; students investigate the laser and tank, discussing what they see with their group as they go and recording their observations in their notebooks. Students observe that the path of the light changes once the laser crosses the air-water boundary. Students are then led to a quantitative exploration by determining the index of refraction of the water; instructors expect the students to have an idea of how to do this after their pre-class research. If students are not sure how to start, they are encouraged to search for Snell's Law online, where they can quickly find a relevant example. The instructors check in with the students toward the end of this work. Typically, instructors will ask about the questions outlined in the lab handout.
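As a concrete illustration of the rough estimate being asked for here, consider hypothetical readings of $45^\circ$ for the angle of incidence in air and $32^\circ$ for the angle of refraction in the water (illustrative numbers, not student data). Applying Snell's law with $n_{\mathrm{air}}\approx 1$ gives
\[
n_{\mathrm{water}} = \frac{\sin\theta_i}{\sin\theta_r} \approx \frac{\sin 45^\circ}{\sin 32^\circ} \approx \frac{0.71}{0.53} \approx 1.3,
\]
which is close to the accepted value of roughly 1.33; the gap between the single-measurement estimate and the accepted value gives students a first, qualitative handle on their uncertainty.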
The next part of the experiment is where students work to gain precision in their measurements and evaluate the model of the system. This part is most similar to a traditional laboratory course. The difference is that the students are told the goal but not how to proceed (see Fig.~\ref{snells23}). There are a number of decisions they must make as a group as they progress. Students record and explain their decisions in their lab notebooks; they might also discuss them with their instructor.
Typically, by the end of the first day students know how to set up their experiment and have documented that in their lab notebooks. They are unlikely to have taken more than one measurement (the design and investigation phase in Fig.~\ref{snpsht}). They will return the following week to complete their experiment. The homework questions between the first week and the week that they return emphasize students' reflections on the previous week. Students are also asked to think about the experiment outside of class. The typical homework questions prior to Week 2 are the following:
\begin{enumerate}
\item Because you will be working on the same lab this week, it is useful to be reflective on your current progress and plans. Describe where your group ended up in your current lab, and what you plan to do next.
\item Now that you are halfway through your current lab and are more familiar with the experiment, what have you done to prepare for this upcoming class?
\item Describe something that you found interesting in your current lab and what you would do to investigate it further.
\end{enumerate}
\begin{figure*}[t]
\includegraphics[clip, trim=0 0 0 0, width=0.8\linewidth]{CPExample-New.png}
\caption{Sample of a student's Communication Project for DL2. \textit{Blue}: graph with sine of the angle of incidence plotted against sine of the angle of refraction for each wavelength of light. \textit{Green}: The slope for each wavelength, which is the index of refraction of the block. \textit{Red}: Results and conclusions where they discuss the differences in the indices of refraction and how that is related to rainbows.\label{poster}}
\end{figure*}
The second week starts with setting up the experiment again and beginning the process of taking multiple measurements. At this point, students often break up into different roles: someone manipulating the equipment, one or two people taking measurements, and someone recording the data and/or doing calculations. These roles are ones that students appear to fall into naturally; they are not assigned. However, if one student is always working in Excel or always taking the measurements, instructors will address this in their feedback and encourage the students to switch roles.
The next step depends on the amount of time that students have left in the class. If there is not much time, students focus on the data from one wavelength of light. If they have more time, they can make the same measurements with lasers of different wavelengths. In both cases, students can determine the index of refraction of the acrylic block. With multiple wavelengths, students are able to see that the index of refraction depends on wavelength. This leads to a conversation with the instructor about how this relates to rainbows and a critique of the model of refraction -- Snell's Law.
Most of the analysis that students conduct in this example experiment is the same regardless of how many lasers they collected data with (discussion and analysis in Fig.~\ref{snpsht}). While considering the different variables in their experiment, students are expected to make a plot in which the slope tells them something about the physical system. In this case, the design is intended for the students to plot the sine of the angle of incidence on the x-axis and the sine of the angle of refraction on the y-axis, which makes the slope the index of refraction of the acrylic block. The optics experiments occur in the second half of the semester, after the students have become familiar with constructing linear plots from nonlinear functions. For this lab, students usually do not have much difficulty determining what they should plot. After they obtain the slope and the error in the slope, students will typically compare it to the known index of refraction of the acrylic block. They must research this value online as it is not provided anywhere in the lab handout.
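To make this analysis step concrete, the short sketch below fits a straight line to made-up sine-of-angle data and reads off the slope and its uncertainty, mirroring what students do with their own measurements. The numbers, variable names, and error bars are hypothetical, chosen only so that the slope comes out near the accepted acrylic value of roughly 1.49; this is not student data or course-provided code.
\begin{verbatim}
# Illustrative straight-line fit for Snell's-law-style data (made-up numbers,
# not student data). The slope is compared to an accepted index of refraction.
import numpy as np
import matplotlib.pyplot as plt

sin_incidence = np.array([0.10, 0.20, 0.30, 0.40, 0.50])   # hypothetical values
sin_refraction = np.array([0.15, 0.29, 0.46, 0.59, 0.75])  # hypothetical values

# Linear fit with a covariance matrix to estimate the uncertainty in the slope.
(slope, intercept), cov = np.polyfit(sin_incidence, sin_refraction, 1, cov=True)
slope_err = np.sqrt(cov[0, 0])
print(f"slope = {slope:.2f} +/- {slope_err:.2f}")  # compare to n_acrylic ~ 1.49

# Plot the data with (hypothetical) error bars and the fitted line.
plt.errorbar(sin_incidence, sin_refraction, yerr=0.02, fmt="o", label="data")
plt.plot(sin_incidence, slope * sin_incidence + intercept, label="fit")
plt.xlabel("sin(angle of incidence)")
plt.ylabel("sin(angle of refraction)")
plt.legend()
plt.show()
\end{verbatim}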
The second day of the experiment ends with a discussion of their plot. Students construct a conclusion in their notebooks that summarizes the results, what they found, what they expected, reasons for any differences, and an explanation of what it all means in the larger physics context.
After the experiment, the students may have their third and final CPHW, background/literature review. In the case of Snell's Law, students would be asked to find three additional sources where these concepts are used in some other form of research, often in the field of medicine but also in physics or other sciences. Students then summarize what they did in class and connect their experimental work to the sources that they found.
The student can choose to do their second CP on this experiment. An example of a poster can be seen in Fig.~\ref{poster}. In the figure, three key features are highlighted. First, in the blue box, is the graph where the student plotted all three wavelengths of light. In the green box is the slope for each color, which is the index of refraction of the acrylic for each laser. Finally, in the red boxes are the results and conclusion. In the top box, the student explained why the indices are different, that is, why the assumption that Snell's Law is wavelength independent does not hold. In the bottom box, they make the connection to rainbows. The student would present this poster during the in-class poster session to their peers and their instructor.
\section{Redesign Efficacy}
\label{efficacy}
To measure the efficacy of the DATA Lab course transformation, the Colorado Learning Attitudes about Science Survey for Experimental Physics (E-CLASS) \cite{zwickl2014epistemology} was implemented in the traditional laboratory course as well as the transformed courses. The E-CLASS is a research-based assessment tool used to measure students' epistemology and expectations about experimental physics \cite{wilcox2016students,hu2017qualitative,wilcox2018summary}. The well-validated survey consists of 30 items (5-point Likert scale) where students are asked to rate their level of agreement with each statement. The scoring method of this assessment was adapted from previous studies \cite{adams2006new}. First, the 5-point Likert scale is compressed into a 3-point scale; ``(dis)agree" and ``strongly (dis)agree" are combined into one category. Then, student responses are compared to the expert-like response; a response that is aligned with the expert-like view is assigned a $+1$ and a response that is opposite to the expert-like view is assigned a $-1$. All neutral responses are assigned a 0. For our comparison between the traditional and transformed courses, we will report the percentage of students with expert-like responses.
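For concreteness, the collapsing-and-scoring step described above can be written in a few lines of code. The sketch below is a minimal illustration, assuming responses coded 1 (strongly disagree) through 5 (strongly agree) and a hypothetical expert-like direction for each item; it is not the instrument's official scoring code.
\begin{verbatim}
# Minimal sketch of the Likert-collapsing and +1/0/-1 scoring described above.
# Illustrative only; the response coding and item directions are assumptions.

def score_item(response, expert_direction):
    """Collapse a 1-5 response to -1, 0, or +1 relative to the expert-like view."""
    if response == 3:                       # neutral responses score 0
        return 0
    agrees = response >= 4                  # "agree" and "strongly agree" combined
    return 1 if agrees == (expert_direction == "agree") else -1

def fraction_expert_like(responses, directions):
    """Fraction of items on which a student gives the expert-like response."""
    scores = [score_item(r, d) for r, d in zip(responses, directions)]
    return sum(s == 1 for s in scores) / len(scores)

# Example with five hypothetical items:
print(fraction_expert_like([5, 4, 2, 3, 1],
                           ["agree", "agree", "agree", "disagree", "disagree"]))
\end{verbatim}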
In DL1 and DL2, the E-CLASS was administered as an online extra credit assignment both pre- and post-instruction. Over the course of the transformation, a total of 1,377 students in DL1 and 925 students in DL2 provided matched (both pretest and post-test) E-CLASS responses. Figure \ref{eclass} shows the fraction of students with expert-like responses in the traditional course and the transformed course for (a) DL1 and (b) DL2. Students in the traditional courses had decreases of 3\% and 1\%, respectively, in their expert-like attitudes and beliefs toward experimental physics from pre- to post-instruction. However, in the transformed DATA Lab courses, the students' expert-like views of experimental physics increased by 4\% in DL1 and by 6\% in DL2.
To explore the impact of the course transformation after controlling for students' incoming epistemology and expectations about experimental physics, an ANCOVA was used to compare students' post-instruction attitudes and beliefs between the traditional courses and the transformed courses. For both DL1 and DL2, results showed a significant difference in E-CLASS post-test percentages between the traditional courses and the transformed courses (both $p<0.001$). Specifically, for DL1, results demonstrated a significant 7\% post-test difference in expert-like responses between the traditional course and the transformed course after controlling for E-CLASS pretest scores. For DL2, there was a significant 9\% difference in post-test responses between the traditional and transformed courses after controlling for students' incoming E-CLASS responses. Overall, the transformation in both DL1 and DL2 had a positive impact on students' epistemological views of and expectations about experimental physics.
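For readers interested in running this style of analysis on their own data, a minimal ANCOVA of this form can be set up with the statsmodels formula interface, as sketched below. The data frame, column names, and course labels are hypothetical stand-ins, not the study's data or analysis code.
\begin{verbatim}
# Minimal ANCOVA sketch (hypothetical data and column names, not the study's).
# Models the post-test expert-like percentage as a function of course version,
# controlling for the pre-test percentage as a covariate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pre":     [55, 60, 48, 62, 51, 58],   # pre-test % expert-like (made up)
    "post":    [54, 66, 47, 70, 50, 65],   # post-test % expert-like (made up)
    "version": ["traditional", "transformed", "traditional",
                "transformed", "traditional", "transformed"],
})

model = smf.ols("post ~ pre + C(version)", data=df).fit()
print(model.summary())  # the C(version) coefficient estimates the adjusted
                        # post-test difference between course versions
\end{verbatim}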
\begin{figure}
\includegraphics[clip, trim=0 0 0 0, width=\linewidth]{ECLASS-3-26-2019.png}
\caption{Fraction of students with expert-like responses for (a) DL1 and (b) DL2.\label{eclass}}
\end{figure}
\section{Conclusion}
\label{conclusions}
In this paper, the large scale transformation of the MSU algebra-based physics labs for life science students was described. The design was divorced from the specific physics content because the learning goals developed from a faculty consensus design did not include specific content. This design means that the individual lab activities do not matter {\it per se}, but instead the structure of the course and how students work through the lab are what is important. Theoretically, one could adapt this design to a chemistry or biology lab by making adjustments to the kinds of lab activities, and relevant changes to the learning goals. That being said, there are still key structures to ensure the functioning of the course which will be covered in detail in a subsequent paper (e.g. a leadership team of four instructors, two GTAs and two ULAs, tasked with maintaining consistent grading and instruction across the sections).
The transformation was centered on emphasizing experimental physics practices. The overall efforts were focused on this two-course series because the majority of the students taking courses in the physics department at MSU are enrolled in the introductory algebra-based series, specifically 2,000 students per year. In addition, the majority of the student instructors in the MSU physics and astronomy department, nearly 80 graduate teaching assistants and undergraduate learning assistants, teach in these labs. Because of its scale, special attention was given to the voice of the physics faculty in the development of the learning goals for DATA Lab \cite{wieman2017improving}. The entire course was designed around the faculty-consensus learning goals, which are all based around physics laboratory practices (Sec.~\ref{LGSection}). From course structures to assessments, everything was intentionally aligned with the overall learning goals. Each component of the course builds upon another through the two-semester sequence, and each individual lab activity builds skills that will be valuable for each subsequent activity, from lab handouts to pre-class homework assignments. Such an effort was put into designing this course sequence in large part because of the number of MSU undergraduate students it serves. The value in physics labs for these non-majors lies in the scientific practices on which the redesign was centered; those skills and practices are what students will take with them into their future careers.
\begin{acknowledgments}
This work was generously supported by the Howard Hughes Medical Institute, Michigan State University's College of Natural Science, as well as the Department of Physics and Astronomy. The authors would like to thank the faculty who participated in the discussion of learning goals. Additionally, we would like to thank S. Beceiro-Novo, A. Nair, M. Olsen, K. Tollefson, S. Tessmer, J. Micallef, V. Sawtelle, P. Irving, K. Mahn, J. Huston who have supported the development and operation of DATA Lab. We also thank the members of the Physics Education Research Lab who have given feedback on this work and this manuscript.
\end{acknowledgments}
\section{Introduction}
\LaTeX\ is typesetting software that is widely used by mathematicians
and physicists because it is so good at typesetting equations. It is
also completely programmable, so it can be configured to produce
documents with almost any desired formatting, and to automatically
number equations, figures, endnotes, and so on.
To prepare manuscripts for the American Journal of Physics (AJP),
you should use the REV\TeX\ 4.1 format for Physical Review B
preprints, as indicated in the \texttt{documentclass} line at the top
of this article's source file. (If you're already familiar with
\LaTeX\ and have used other \LaTeX\ formats, please resist the
temptation to use them, or to otherwise override REV\TeX's formatting
conventions, in manuscripts that you prepare for AJP.)
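For orientation, a Physical Review B preprint \texttt{documentclass} line typically looks something like the following (shown here only as an illustration; check this article's source file for the exact options used):
\begin{verbatim}
\documentclass[aps,prb,preprint]{revtex4-1}
\end{verbatim}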
This sample article is intended as a tutorial, template, and reference for
AJP authors, illustrating most of the \LaTeX\ and REV\TeX\ features that
authors will need. For a more comprehensive introduction to \LaTeX,
numerous books and online references are available.\cite{latexsite,
wikibook, latexbook} Documentation for the REV\TeX\ package
can be found on the APS web site.\cite{revtex}
\LaTeX\ is free software, available for Unix/Linux, Mac OS X, and Windows
operating systems. For downloading and installation instructions, follow
the links from the \LaTeX\ web site.\cite{latexsite} It is most
convenient\cite{cloudLaTeX} to install a ``complete \TeX\ distribution,''
which will include \LaTeX, the underlying \TeX\ engine, macro packages
such as REV\TeX, a large collection of fonts, and GUI tools for editing
and viewing your documents. To test your installation, try to process
this sample article.
\section{Ordinary text and paragraphs}
To typeset a paragraph of ordinary text, just type the text in your source
file like this. Put line breaks
wherever
you
want, and don't worry about extra spaces between words, which \LaTeX\ will ignore. You can almost always trust \LaTeX\ to make your paragraphs look good, with neatly justified margins.
To start a new paragraph, just leave a blank line in your source file.
A few punctuation characters require special treatment in \LaTeX. There
are no ``smart quotes,'' so you need to use the left-quote key (at the
top-left corner of the keyboard) for a left quote, and the ordinary apostrophe
key (next to the semi-colon) for a right quote. Hit either key twice for double
quotes, which are standard in American English. Don't use shift-apostrophe
to make double quotes. Use single quotes when they're nested inside a
double-quoted quotation. When a period or comma belongs at the end of
a quotation, put it inside the quotes---even if it's not part of what you're
quoting.\cite{nevermindlogic}
Your fingers also need to distinguish between a hyphen (used for
multi-word adjectives and for hyphenated names like Lennard-Jones), an
en-dash (formed by typing two consecutive hyphens, and used for ranges
of numbers like 1--100), and an em-dash (formed out of three consecutive
hyphens and used as an attention-getting punctuation symbol---preferably
not too often).
Some non-alphanumeric symbols like \$, \&, and \% have special meanings
in a \LaTeX\ source file, so if you want these symbols to appear in the output,
you need to precede them with a backslash.
There are also special codes for generating the various accents
that can appear in foreign-language words and names, such as Amp\`ere
and Schr\"odinger.\cite{FontEncodingComment}
You can switch to \textit{italic}, \textbf{bold}, and \texttt{typewriter} fonts
when necessary. Use curly braces to enclose the text that is to appear in
the special font. In general, \LaTeX\ uses curly braces to group characters
together for some common transformation.
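For example, the font switches shown above are produced by source text of the form:
\begin{verbatim}
\textit{italic}, \textbf{bold}, and \texttt{typewriter}
\end{verbatim}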
Notice that any word or symbol preceded by the backslash character is
a special instruction to \LaTeX, typically used to produce a special
symbol or to modify the typeset output in some way. These instructions
are also called \textit{control sequences} or \textit{macros}.
After you've used \LaTeX\ for a while, the little finger of your right
hand will be really good at finding the backslash and curly-brace keys.
\section{Math symbols}
To type mathematical symbols and expressions within a paragraph, put
them between \$ signs, which indicate \textit{math mode}: $ab + 2c/d = e-3f$.
\LaTeX\ ignores spaces in math mode, using its own algorithms to determine
the right amount of space between symbols. Notice that an ordinary letter
like~$x$, when used in math mode, is automatically typeset in italics.
This is why you need to use math mode for all mathematical
expressions (except plain numerals), even when they don't contain any
special symbols. But don't use math mode to italicize ordinary \textit{words}.
Besides ordinary letters and numerals and common arithmetic symbols, math
mode provides a host of other characters that you can access via control
sequences.\cite{wikimathpage} These include Greek letters like $\pi$ and
$\Delta$ (note capitalization), symbols for operations and relations such
as $\cdot$, $\times$, $\pm$, $\gg$, $\leq$, $\sim$, $\approx$, $\propto$,
and $\rightarrow$, and special symbols like $\nabla$, $\partial$, $\infty$,
and~$\hbar$. You can decorate symbols with dots ($\dot x$ or $\ddot x$),
arrows ($\vec\mu$), bars ($\bar x$ or $\overline m$), hats ($\hat x$),
tildes ($\tilde f$ or $\widetilde w$), and radicals ($\sqrt\pi$, $\sqrt{2/3}$).
Parentheses and square brackets require no special keystrokes, but you
can also make curly braces and angle brackets: $\{\langle\ \cdots\ \rangle\}$.
To make subscripts and superscripts, use the underscore and caret
(circumflex) symbols on your keyboard: $x^\mu$, $g_{\mu\nu}$, $\delta^i_j$,
$\epsilon^{ijk}$. Notice that you need to put the subscript or superscript
in curly braces if it's longer than one character (or one control sequence).
You can even make nested subscripts and superscripts, as in $e^{-x^2}$.
If a subscript consists of an entire word or word-like abbreviation,
we usually put it in plain Roman type: $x_\textrm{max}$. If you need to
put a subscript or superscript \textit{before} a symbol, use an empty
set of curly braces: ${}^{235}_{\ 92}\textrm{U}$. (Notice the trick of using
backslash-space to put a space before the 92.)
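For instance, the examples in the preceding paragraph are typed as:
\begin{verbatim}
$x^\mu$, $g_{\mu\nu}$, $e^{-x^2}$, $x_\textrm{max}$, ${}^{235}_{\ 92}\textrm{U}$
\end{verbatim}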
\newcommand{\bE}{\mathbf{E}}
To make boldface letters you use the \verb/\mathbf/ control sequence, as in
$\nabla\times\mathbf{E} = -\partial\mathbf{B}/\partial t$. For bold Greek
letters like $\boldsymbol{\omega}$, you need to use \verb/\boldsymbol/
instead. You can also use calligraphic ($\mathcal{E}$), Fraktur
($\mathfrak{D}$), and blackboard bold ($\mathbb{R}$) fonts, if you need them.
If you'll be using a symbol in a special font repeatedly, you can save
some keystrokes by defining an abbreviation for it; for example, the
definition \verb/\newcommand{\bE}{\mathbf{E}}/ allows you to type simply
\verb/\bE/ to get $\bE$.
Unit abbreviations, as in $1~\mathrm{eV} = 1.6\times10^{-19}~\mathrm{J}$,
should be in the plain Roman font, not italics. You can access this font
from math mode using \verb/\mathrm/. For function names like $\sin\theta$,
$\exp x$, and $\ln N!$, \LaTeX\ provides special control sequences,
which you should use instead of \verb/\mathrm/ whenever possible because
they work better with \LaTeX's automatic spacing algorithms.
But \LaTeX\ doesn't always get the spacing right in mathematical formulas.
In the previous paragraph we had to use the \verb/~/ symbol to
manually insert a space between each number and its units. The \verb/~/
symbol actually represents an unbreakable space, where \LaTeX\ will never
insert a line break. For occasional minor adjustments to the spacing
in a \LaTeX\ expression, you can insert or remove a little
space with \verb/\,/ and \verb/\!/. Use these macros sparingly,
because \LaTeX's default spacing rules will provide more consistency
within and among AJP articles. The most common use of \verb/\,/
is in expressions like $T\,dS - P\,dV$.
\section{Displayed equations}
\label{DispEqSection}
When an equation is important and/or tall and/or complicated, you should
display it on a line by itself, with a number. To do this, you put
\verb/\begin{equation}/ before the equation and \verb/\end{equation}/
after it, as in
\begin{equation}
\int_0^\infty \! \frac{x^3}{e^x - 1} \, dx = 6\sum_{k=1}^\infty \frac1{k^4} =
6\left(\frac{\pi^4}{90}\right) = \frac{\pi^4}{15}.
\end{equation}
This example also shows how to make the sum and integral symbols, big parentheses,
and built-up fractions. (Don't put built-up fractions in a
non-displayed equation, because there won't be enough vertical space in
AJP's final, single-spaced paragraphs. Use the slashed form, $x^3/(e^x-1)$,
instead.)
If you want to refer to an equation elsewhere in your manuscript, you can
give it a label. For example, in the equation
\begin{equation}
\label{deriv}
\frac{\Delta x}{\Delta t} \mathop{\longrightarrow}_{\Delta t\rightarrow0} \frac{dx}{dt}
= \lim_{\Delta t\rightarrow0} \frac{\Delta x}{\Delta t}
\end{equation}
we've inserted \verb/\label{deriv}/ to label this equation
\texttt{deriv}.\cite{labelnames} To refer to
Eq.~(\ref{deriv}), we then type \verb/\ref{deriv}/.\cite{footnotes} Notice
that AJP's style conventions also require you to put the equation number in
parentheses when you refer to it, and to abbreviate ``Eq.''\ unless it's at
the beginning of a sentence.
Some equations require more complicated layouts. In the equation
\begin{equation}
E_n = (n + \tfrac12)\hbar\omega, \quad \textrm{where}\ n = 0, 1, 2, \ldots,
\end{equation}
we've used \verb/\quad/ to leave a wide space and \verb/\textrm/ to put ``where''
in plain Roman type. To create a matrix or column vector, as in
\begin{equation}
\begin{bmatrix}
t' \\
x' \\
\end{bmatrix}
=
\begin{pmatrix}
\gamma & -\beta\gamma \\
-\beta\gamma & \gamma \\
\end{pmatrix}
\begin{bmatrix}
t \\
x \\
\end{bmatrix},
\end{equation}
you can use the \texttt{pmatrix} and/or \texttt{bmatrix} environment,
for matrices delimited by parentheses and/or brackets. There's also
a plain \texttt{matrix} environment that omits the delimiters.
In this and other examples of \LaTeX\ tables and arrays, the \verb/&/
character serves as a ``tab'' to separate columns, while the \verb/\\/
control sequence marks the end of a row.
For a list of related equations, with nicely lined-up equals signs,
use the \texttt{eqnarray} environment:
\begin{eqnarray}
\oint \vec E \cdot d\vec\ell & = & -\frac{d\Phi_B}{dt} ; \\
\oint \vec B \cdot d\vec\ell & = & \mu_0\epsilon_0\frac{d\Phi_E}{dt} + \mu_0 I.
\end{eqnarray}
You can also use \texttt{eqnarray} to make a multi-line equation, for example,
\begin{eqnarray}
\mathcal{Z}
& = & 1 + e^{-(\epsilon-\mu)/kT} + e^{-2(\epsilon-\mu)/kT} + \cdots \nonumber \\
& = & 1 + e^{-(\epsilon-\mu)/kT} + (e^{-(\epsilon-\mu)/kT})^2 + \cdots \nonumber \\
& = & \frac{1}{1 - e^{-(\epsilon-\mu)/kT}}.
\end{eqnarray}
Here the first column of the second and third lines is empty. Note that you
can use \verb/\nonumber/ within any line to suppress the generation of
an equation number; just be sure that each multi-line equation has at least
one number.
Another commonly used structure is the \texttt{cases} environment, as in
\begin{equation}
m(T) =
\begin{cases}
0 & T > T_c \, , \\
\bigl(1 - [\sinh 2 \beta J]^{-4} \bigr)^{1/8} & T < T_c \, .
\end{cases}
\end{equation}
At AJP we require that you put correct punctuation before and after every
displayed equation, treating each equation as part of a correctly punctuated
English sentence.\cite{mermin} The preceding examples illustrate good
equation punctuation.
\section{Figures}
\LaTeX\ can import figures via the \verb/\includegraphics/ macro.
For AJP, you should embed this in the \texttt{figure} environment, which
can place the figure in various locations. This environment also lets
you add a caption (which AJP requires) and an optional label for referring
to the figure from elsewhere. See Fig.~\ref{gasbulbdata} for an example.
\begin{figure}[h!]
\centering
\includegraphics{GasBulbData.eps}
\caption{Pressure as a function of temperature for a fixed volume of air.
The three data sets are for three different amounts of air in the container.
For an ideal gas, the pressure would go to zero at $-273^\circ$C. (Notice
that this is a vector graphic, so it can be viewed at any scale without
seeing pixels.)}
\label{gasbulbdata}
\end{figure}
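In source form, a figure of this kind has the following skeleton (the file name, caption, and label here are placeholders for your own):
\begin{verbatim}
\begin{figure}[h!]
\centering
\includegraphics{MyFigure.eps}
\caption{A caption describing the figure.}
\label{myfigure}
\end{figure}
\end{verbatim}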
Most \LaTeX\ implementations can import a variety of graphics formats.
For graphs and line drawings you should use vector (i.e., resolution-independent)
graphics saved in encapsulated PostScript (.eps) or portable document
format (.pdf). Most good graphics software systems can save to one
or both of these formats. Please don't use a rasterized graphics format
(such as .jpg or .png or .tiff) for graphs or line drawings.
\begin{figure}[h!]
\centering
\includegraphics[width=5in]{ThreeSunsets.jpg}
\caption{Three overlaid sequences of photos of the setting sun, taken
near the December solstice (left), September equinox (center), and
June solstice (right), all from the same location at 41$^\circ$ north
latitude. The time interval between images in each sequence is approximately
four minutes.}
\label{sunsets}
\end{figure}
For photographs and other images that are \textit{inherently} made
of pixels (that is, rasters or bitmaps), \LaTeX\ can
(usually) handle the .jpg and .png formats as well as .eps and .pdf.
Figure~\ref{sunsets} is a .jpg example. For final production, however,
AJP prefers that raster images be in .tiff format. Most \LaTeX\ systems
can't import .tiff images, so we recommend using .png or .jpg with \LaTeX\
for your initial submission, while saving a higher-quality .tiff version
to submit as a separate file after your manuscript is conditionally accepted
for publication.
Please refer to the AJP editor's web site\cite{editorsite} for more details
on AJP's requirements for figure preparation.
\section{Tables}
Tables are somewhat similar to figures: You use the \texttt{table} environment
to let them ``float'' to an appropriate location, and to automatically number
them and format their captions. But whereas the content of a figure comes
from an external file, the content of a table is typeset directly in \LaTeX.
For that you use the \texttt{tabular} environment, which uses \verb/&/ and
\verb/\\/ for tabbing and ending rows, just like the \texttt{matrix} and
\texttt{eqnarray} environments discussed in Section~\ref{DispEqSection}.
Table~\ref{bosons} shows a fairly simple example. Notice that the caption comes
before the table itself, so it will appear above the table instead of below.
The \texttt{ruledtabular} environment, which surrounds \texttt{tabular},
provides the double horizontal lines at the top and bottom, and stretches
the table horizontally out to the margins. (This will look funny for tables
intended to fill only one column of a final journal page, but there's no
need to worry about such cosmetic details.)
\begin{table}[h!]
\centering
\caption{Elementary bosons}
\begin{ruledtabular}
\begin{tabular}{l c c c c p{5cm}}
Name & Symbol & Mass (GeV/$c^2$) & Spin & Discovered & Interacts with \\
\hline
Photon & $\gamma$ & \ \ 0 & 1 & 1905 & Electrically charged particles \\
Gluons & $g$ & \ \ 0 & 1 & 1978 & Strongly interacting particles (quarks and gluons) \\
Weak charged bosons & $W^\pm$ & \ 82 & 1 & 1983 & Quarks, leptons, $W^\pm$, $Z^0$, $\gamma$ \\
Weak neutral boson & $Z^0$ & \ 91 & 1 & 1983 & Quarks, leptons, $W^\pm$, $Z^0$ \\
Higgs boson & $H$ & 126 & 0 & 2012 & Massive particles (according to theory) \\
\end{tabular}
\end{ruledtabular}
\label{bosons}
\end{table}
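In source form, a minimal table of this kind has the following skeleton (the column specification, caption, and entries here are placeholders):
\begin{verbatim}
\begin{table}[h!]
\centering
\caption{A caption describing the table.}
\begin{ruledtabular}
\begin{tabular}{l c}
Name & Value \\
\hline
First entry & 1 \\
Second entry & 2 \\
\end{tabular}
\end{ruledtabular}
\label{mytable}
\end{table}
\end{verbatim}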
Every table is a little bit different, and many tables will require
further tricks; see Refs.\ \onlinecite{wikibook} and~\onlinecite{latexbook}
for examples. Note that the AJP style does not ordinarily use lines
to separate rows and columns in the body of a table.
\section{Special formats}
\subsection{Block quotes}
If a quoted passage is long or important, you can use the \texttt{quote}
environment to typeset it as a block quote, as in this passage from The
Feynman Lectures:\cite{feynman}
\begin{quote}
A poet once said, ``The whole universe is in a glass of wine.'' We will
probably never know in what sense he meant that, for poets do not write
to be understood. But it is true that if we look at a glass of wine closely
enough we see the entire universe.
\end{quote}
\subsection{Numbered lists}
To create a numbered list, use the \texttt{enumerate} environment and start
each entry with the \verb/\item/ macro:
\begin{enumerate}
\item You can't win.
\item You can't even break even.
\item You can't get out of the game.
\end{enumerate}
\subsection{Unnumbered lists}
For a bulleted list, just use \texttt{itemize} instead of \texttt{enumerate}:
\begin{itemize}
\item Across a resistor, $\Delta V = \pm IR$.
\item Across a capacitor, $\Delta V = \pm Q/C$.
\item Across an inductor, $\Delta V = \pm L(dI/dt)$.
\end{itemize}
\subsection{Literal text}
For typesetting computer code, the \texttt{verbatim} environment reproduces
every character verbatim, in a typewriter font:
\begin{verbatim}
u[t_] := NIntegrate[
x^2 * Sqrt[x^2+t^-2] / (Exp[Sqrt[x^2+t^-2]] + 1), {x,0,Infinity}]
f[t_] := NIntegrate[
  x^2 * Log[1+ Exp[-Sqrt[x^2+t^-2]]], {x,0,Infinity}]
Plot[((11Pi^4/90) / (u[t]+f[t]+(2Pi^4/45)))^(1/3), {t,0,3}]
\end{verbatim}
There's also a \verb/\verb/ macro for typesetting short snippets of verbatim
text within a paragraph. To use this macro, pick any character that doesn't
appear within the verbatim text to use as a delimiter. Most of the examples
in this article use \texttt{/} as a delimiter, but in \verb|{a/b}| we've used
\verb/|/ instead.
\section{Endnotes and references}
This article has already cited quite a few endnotes, using the \verb/\cite/
macro. See the end of this article (and source file) for the endnotes
themselves, which are in an environment called \texttt{thebibliography}
and are created with the \verb/\bibitem/ macro. These macros require
you to give each endnote a name. The notes will be numbered in the
order in which the \verb/\bibitem/ entries appear, and AJP requires that
this order coincide with the order in which the notes are first cited in
the article. You can cite multiple endnotes in a single \verb/\cite/,
separating their names by commas. And you can cite each note as many
times as you like.
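As an illustration (with a placeholder entry, not one of this article's actual references), the structure at the end of the source file looks like this:
\begin{verbatim}
\begin{thebibliography}{99}
\bibitem{somelabel} A. Author, ``Title of the article,''
Journal Name \textbf{12}, 345--350 (2020).
\end{thebibliography}
\end{verbatim}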
Notice that in the AJP (and Physical Review B) style, the citation numbers
appear as superscripts. Think carefully about the optimal placement of
each citation, and try not to attach citations to math symbols where the
numbers might be misinterpreted as exponents. Often there will be a
punctuation symbol after the word where you attach the citation; you
should then put the citation \textit{after} the punctuation, not
before.\cite{nevermindlogic}
If you want to refer directly to Ref.~\onlinecite{mermin} (or any other)
in a sentence, you can do so with the \verb/\onlinecite/ macro.
Most endnotes consist of bibliographic citations.\cite{noBIBTeX} Be sure
to learn and use the AJP styles for citing books,\cite{latexbook}
articles,\cite{dyson} edited volumes,\cite{examplevolume} and
URLs.\cite{latexsite} For example, article titles are in double quotes,
while book titles are in italics. Pay careful attention to all punctuation
symbols in citations. Note that AJP requires that all article citations
include titles as well as beginning and ending page numbers.
Please use standard abbreviations, as listed in the AIP Style
Manual,\cite{AIPstylemanual} for journal titles.
\section{Conclusion}
We hope this article will help you prepare beautifully typeset
manuscripts for the American Journal of Physics. Good typesetting requires
considerable attention to detail, but this effort will pay off by making your
manuscript easier and more enjoyable to read. Your colleagues, reviewers,
and editors will be grateful for your effort.
Of course, we encourage you to put as much care into the \textit{content}
of your manuscript as you put into its form. The AIP Style
Manual\cite{AIPstylemanual} is an indispensable reference on good physics
writing, covering everything from planning and organization to standard
spellings and abbreviations.
Most important of all, please familiarize yourself with the AJP Statement
of Editorial Policy,\cite{editorsite} which describes the types of manuscripts
that AJP publishes and the audience for which AJP authors are expected to write.
You wouldn't want to put all that care into preparing a manuscript for AJP,
only to find that AJP is the wrong journal for your manuscript.
We look forward to receiving your submission to AJP.
\section{Introduction}
Among the various approaches used in meson physics, the formalism of
Bethe-Salpeter and Dyson-Schwinger equations (DSEs) plays a traditional and
indispensable role. The Bethe-Salpeter equation (BSE) provides a
field-theoretical starting point to describe hadrons as relativistic bound
states of quarks and/or antiquarks. For instance, the DSE and BSE framework has
been widely used in order to obtain nonperturbative information about the
spectra and decays of the whole lightest pseudoscalar nonet, with an
emphasis on the QCD pseudo-Goldstone boson --- the pion \cite{PSEUDO}.
Moreover, the formalism satisfactorily provides a window to the `next-scale'
meson sector, too, including vector, scalar \cite{SCALARY} and excited mesons.
Finally, electromagnetic form factors of mesons have been calculated with this
approach for space-like momenta \cite{FORMFAKTORY}.
When dealing with bound states composed of light quarks, it is unavoidable
to use the full covariant BSE framework. Nonperturbative knowledge of the
Green's functions that enter the BSE kernel is required. Very often,
the problem is solved in Euclidean space, where it is more tractable, as
there are no Green's function singularities there. The physical amplitudes
can then be obtained by continuation to Minkowski space. Note that the
extraction of mass spectra is already a complicated task \cite{BHKRW2007}, not
to speak of an analytic continuation of Euclidean-space form factors.
When dealing with heavy quarkonia or mixed heavy mesons like $B_c$
(found at Fermilab by the CDF Collaboration \cite{BCmesons}),
some simplifying approximations are possible. Different
approaches have been developed to reduce the computational complexity of the
full four-dimensional (4D) BSE. The so-called instantaneous \cite{INSTA} and
quasi-potential approximations \cite{QUASI}
can reduce the 4D BSE to a 3D equation in a Lorentz-covariant manner. In
practice, such 3D equations are much more tractable, since their resolution is
less involved, especially if one exploits the considerable freedom in
performing the 3D reduction. Also note that, contrary to the BSE in the ladder
approximation, these equations reduce to the Schr\"{o}dinger equation of
nonrelativistic Heavy-Meson Effective Theory and nonrelativistic QCD
\cite{HEAVY}. However, the interaction kernels of the reduced equations
often correspond to input based on economical phenomenological models, and the
connection to the underlying theory (QCD) is less clear (if not abandoned from
the onset).
In the present paper, we extend the method of solving the full 4D BSE,
originally developed for pure scalar theories \cite{NAKAN,KUSIWI,SAUADA2003},
to theories with nontrivial spin degrees of freedom. Under a certain
assumption on the functional form of Green's functions, we develop a method
of solving the BSE directly in Minkowski space, in its original manifestly
Lorentz-covariant 4D form. In order to make our paper as self-contained as
possible, we shall next supply some basic facts about the BSE approach to
relativistic mesonic bound states.
The crucial step to derive the homogeneous BSE for bound states is the
assumption that the bound state reflects itself in a pole of the four-point
Green's function for on-shell total momentum $P$, with $P^2=M_j^2$, viz.\
\begin{equation}
G^{(4)}(p,p',P)=\sum_j\frac{-i}{(2\pi)^4}\frac{\psi_j(p,P_{os})
\bar{\psi_j}(p',P_{os})}{2E_{p_j}(P^0-E_{p_j}+i\epsilon)}+\mbox{regular terms}\;,
\end{equation}
where $E_{p_j}=\sqrt{\vec{p}\,{}^2+M_j^2}$ and $M_j$ is the (positive) mass of
the bound state characterized by the BS wave function $\psi_j$ carrying the
set of quantum numbers $j$.
Then the BSE can conventionally be written in momentum space as
\begin{eqnarray}
S_1^{-1}(p_+,P)\psi(p,P)S_2^{-1}(p_-,P)&&=-i\int\frac{d^4k}{(2\pi)^4}
V(p,k,P)\psi(k,P)\, ,
\\
p_+&&=p+\alpha P \, ,
\nonumber \\
p_-&&=p-(1-\alpha)P \, ,
\nonumber
\end{eqnarray}
or, equivalently, in terms of BS vertex function $\Gamma$ as
\begin{eqnarray} \label{wakantanka}
\Gamma(p,P)&=&-i\int\frac{d^4k}{(2\pi)^4}V(p,k,P)S_1(k_+,P)
\Gamma(k,P)S_2(k_-,P) \, ,
\end{eqnarray}
where we suppress all Dirac, flavor and Lorentz indices, and $\alpha\in(0,1)$.
The function $V $ represents the two-body-irreducible interaction kernel, and
$S_i$ ($i=1,2$) are the dressed propagators of the constituents. The free
propagators read
\begin{equation}
S_i^0(p)=\frac{\not p+m_i}{p^2-m^2_i+i\epsilon}.
\end{equation}
Concerning solutions to the BSE (\ref{wakantanka}) for pseudoscalar mesons,
they have the generic form \cite{LEW}
\begin{equation} \label{gen.form}
\Gamma(q,P)=\gamma_5[\Gamma_A+\Gamma_Bq.P\not\!q+\Gamma_C\not\!P+
\Gamma_D\not\!q\not\!P+ \Gamma_E\not\!P\not\!q] ,
\end{equation}
where
the $\Gamma_i$, with $i=A,B,C,D,E$, are scalar functions of their arguments
$ P,q$. If the bound state has a well-defined charge parity, say
${\cal{C}}=1$, then these functions are even in $q.P$, and furthermore
$\Gamma_D=-\Gamma_E$.
As was already discussed in Ref.~\cite{MUNCZEK}, the dominant contribution to
the BSE vertex function for pseudoscalar mesons comes from the first term in
Eq.~(\ref{gen.form}). This is already true, at a 15\% accuracy level, for the
light pseudoscalars $\pi,K,\eta$, while in the case of ground-state heavy
pseudoscalars, like the $\eta_c$ and $\eta_b$, the contributions from the other
tensor components in Eq.~(\ref{gen.form}) are even more negligible.
Hence, at this stage of our Minkowski calculation, we also approximate our
solution by taking $\Gamma=\gamma_5\Gamma_A$.
The interaction kernel is approximated by the dressed gluon propagator,
with the interaction gluon-quark-antiquark vertices taken in their bare forms.
Thus, we may write
\begin{equation} \label{landau}
V(p,q,P)=g^2(\kappa) D_{\mu\nu}(p-q,\kappa)\gamma^{\nu}\otimes\gamma^{\mu} \, ,
\end{equation}
where the full gluon propagator is renormalized at a scale $\kappa$. The
effective running strong coupling $\alpha_s$ is then related to $g$ through the
equations
\begin{eqnarray} \label{gluon}
g^2(\kappa)D_{\mu\nu}(l,\kappa)&&=
\alpha_s(l,\kappa)\frac{ P^T_{\mu\nu}(l)}{l^2+i\epsilon}-\xi g^2(\kappa)
\frac{l_{\mu}l_{\nu}}{l^4+i\epsilon}\, ,
\\
\alpha_s(q,\kappa)&&=\frac{g^2(\kappa)}{1-\Pi(q^2,\kappa)}\, ,
\nonumber \\
P^T_{\mu\nu}(l)&&=-g_{\mu\nu}+\frac{l_{\mu}l_{\nu}}{l^2}\, .
\nonumber
\end{eqnarray}
From the class of $\xi$-linear covariant gauges, the Landau gauge $\xi=0$ will
be employed throughout the present paper.
In the next section, we shall derive the solution for the dressed-ladder
approximation to the BSE, i.e., all propagators are considered dressed ones,
and no crossed diagrams are taken into account. The BSE for quark-antiquark
states has many times been treated in Euclidean space, even beyond the ladder
approximation. Most notably, the importance of dressing the proper vertices in
the light-quark sector was already stressed in Ref.~\cite{ACHJO}, so our
approximations are certainly expected to have a limited validity.
Going beyond the rainbow ($\gamma_{\mu}$) approximation is straightforward
but rather involved. (For comparison, see the Minkowski studies of
Schwinger-Dyson equations published in Refs.~\cite{SAULIJHEP,SAULI2},
the latter paper including the minimal-gauge covariant vertex instead of the
bare one.) In the present paper, we prefer to describe the computational
method rather than carrying out a BSE study with the most sophisticated
kernel known in the literature.
The set-up of this paper is as follows. In Sec.~2 we describe the method of
solving the BSE. As a demonstration, numerical results are presented in
Sec.~3. Conclusions are drawn in Sec.~4. The detailed derivation of the integral
equation that we actually solve numerically is presented in the Appendices.
\section{Integral representation and solution of the BSE}
In this section we describe our method of solving the BSE in Minkowski space.
It basically assumes that the various Green's functions appearing in the
interaction kernel can be written as weighted integrals over the various
spectral functions (i.e., real distributions) $\rho$.
More explicitly stated, the full quark and gluon propagators, the latter ones
in the Landau gauge, are assumed to satisfy the standard Lehmann
representation, which reads
\begin{equation} \label{srforquark}
S(l)=\int_{0}^{\infty}d\omega\frac{\rho_v(\omega)\not l+
\rho_s(\omega)}{l^2-\omega+i\epsilon}\,,
\end{equation}
\begin{equation} \label{srforgluon}
G_{\mu\nu}(l)=\int_{0}^{\infty}d\omega\frac{\rho_g(\omega)}{l^2-\omega+i\epsilon}
P^T_{\mu\nu}(l) \, ,
\end{equation}
where each $\rho$ is a real distribution.
Until now, with certain limitations, the integral representations
(\ref{srforquark}) and (\ref{srforgluon}) have been used for the nonperturbative evaluation
of Green's functions in various models \cite{SAFBP}. However, we should note
here that the true analytic structure of QCD Green's functions is not reliably
known (see also Refs.~\cite{ALKSME,FISHER1,SABIA}), and those studies suggest that the structure given by (\ref{srforquark}) and (\ref{srforgluon}) may be insufficient, if not excluded. In that case, the Lehmann representation, or rather the use of a real $\rho$ in the integral representations (\ref{srforquark}) and (\ref{srforgluon}), can be regarded as an analytic approximation of the true
quark propagator. Allowing a complex $\rho$ along a complex integration path is one straightforward, though questionable, generalization \cite{ARRBRO}. The general question of the existence of a Lehmann representation in QCD is beyond the scope of the present paper, and we do not discuss it further.
Furthermore, we generalize here the idea of the Perturbation Theory Integral
Representation (PTIR) \cite{NAKAN}, specifically for our case. The PTIR
represents a unique integral representation (IR) for an $n$-point Green's function
defined by an $n$-leg Feynman integral.
The generalized PTIR formula for the $n$-point function in a theory involving
fields with arbitrary spin is exactly the same as in the original scalar theory
considered in Ref.~\cite{NAKAN}, but the spectral function now acquires a
nontrivial tensor structure. Let us denote such a generalized weight function by
$\rho(\alpha,x_i)$. Then, it can be clearly decomposed into the sum
\begin{equation}
\rho(\alpha,x_i)_{\mbox{\scriptsize scalar theory}}\rightarrow \sum_j
\rho_j(\alpha,x_i){\cal{P}}_j ,
\end{equation}
where $\alpha,x_i$ represent the set of spectral variables, and $j$ runs over
all possible independent combinations of Lorentz tensors and Dirac matrices
${\cal{P}}_j$. The function $\rho_j(\alpha,x_i)$ just represents the PTIR weight
function of the $j$-th form factor (a scalar function by definition), since
it can obviously be written as a suitable scalar Feynman integral. Leaving
aside the question of (renormalization) scheme dependence, we refer the reader
to the textbook by Nakanishi \cite{NAKAN} for a detailed derivation of the
PTIR. The simplest examples of such ``generalized'' integral representations are the Lehmann
representations (\ref{srforquark}) and (\ref{srforgluon}) for the spin-one-half and spin-one propagators, respectively.
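As a simple consistency check (a worked example added here purely for illustration), choosing delta-function weight functions in Eq.~(\ref{srforquark}) reproduces the free constituent propagator quoted above,
\[
\rho_v(\omega)=\delta(\omega-m_i^2)\,,\qquad
\rho_s(\omega)=m_i\,\delta(\omega-m_i^2)
\quad\Longrightarrow\quad
S(l)=\frac{\not l+m_i}{l^2-m_i^2+i\epsilon}\,,
\]
and, analogously, $\rho_g(\omega)=\delta(\omega-m_g^2)$ in Eq.~(\ref{srforgluon}) yields the propagator of an effectively massive gluon, the form employed in the numerical study below.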
Let us now apply our idea to the pseudoscalar bound-state vertex function.
Keeping in mind that the singularity structure (given by the denominators)
of the r.h.s.\ of the BSE is the same as in the scalar models studied in
Refs.~\cite{KUSIWI,SAUADA2003}, the appropriate IR for the pseudoscalar
bound-state vertex function $\Gamma_A(q,P)$ should read
\begin{equation} \label{repr}
\Gamma_A(q,P)=\int_{0}^{\infty} d\omega \int_{-1}^{1}dz
\frac{\rho_A^{[N]}(\omega,z)}
{\left[F(\omega,z;P,q)\right]^N}\, ,
\end{equation}
where we have introduced a useful abbreviation for the denominator of the
IR~(\ref{repr}), viz.\
\begin{equation} \label{efko}
F(\omega,z;P,q)=\omega-(q^2+q.Pz+P^2/4)-i\epsilon \, ,
\end{equation}
with $N$ a free integer parameter.
Substituting the IRs~(\ref{repr}), (\ref{srforgluon}), (\ref{srforquark}) into
the r.h.s.\ of the BSE~(\ref{wakantanka}), one can analytically integrate over
the loop momenta. Assuming the uniqueness theorem \cite{NAKAN}, we should
arrive at the same IR~(\ref{repr}), now for the vertex function on the l.h.s.\ of the
BSE~(\ref{wakantanka}). The derivation is given in Appendix A for the cases
$N=1,2$.
In other words, we have converted the momentum BSE (with a singular kernel)
into a homogeneous two-dimensional integral equation for the real weight
function $\rho_A^{[N]}(\omega,z)$, i.e.,
\begin{equation} \label{madrid}
\rho^{[N]}_A(\tilde{\omega},\tilde{z})=
\int_{0}^{\infty} d\omega \int_{-1}^{1}dz
V^{[N]}(\tilde{\omega}, \tilde{z};\omega,z)\rho^{[N]}_A(\omega,z) ,
\end{equation}
where the kernel $V^{[N]}(\tilde{\omega}, \tilde{z};\omega,z)$ is a regular
multivariable function.
The kernel $V^{[N]}$ also automatically determines the domain $\Omega$ on which
the function $\rho^{[N]}_A(\omega,z)$ is nontrivial. This domain is always
smaller than the infinite strip $[0,\infty)\times[-1,1]$ explicitly
assumed by the boundaries of the integrals over $\omega$ and $z$.
For instance, with the simplest kernel parametrized by a free gluon
propagator and constituent quarks of mass $m$, we get for the flavor-singlet
meson $\rho^{[N]}_A(\omega,z)\neq 0$ only if $\omega>m^2$.
In our approach, solving the momentum-space BSE in Minkowski space is equivalent
to finding a real solution of the real integral equation~(\ref{madrid}). No
special choice of frame is required. If the resulting vertex function is
needed, it can be obtained by numerical integration over $\rho^{[N]}_A$ in an
arbitrary reference frame.
\section{ Numerical Results}
In this section we discuss the numerical solution of the BSE with various
interaction kernels. For that purpose, we shall vary the coupling strength
as well as the effective gluon mass $m_g$. We are mainly concerned with the
range of binding energies that coincide with those of heavy quarkonia, which
systems we shall study in future work. Moreover, we take a discrete set of
values for the mass $m_g$, such that it runs from zero to the value of the
constituent quark mass. These values are expected to be relevant for the case
of a true gluon propagator (when $m_g$ is replaced by the continuous spectral
variable $\omega$ (\ref{srforgluon})). Thus, in each case, the corresponding
gluon density is $\rho_g(c)=N_g\delta(c-m^2_g)$, which specifies the kernel
of the BSE to be (in the Landau gauge)
\begin{equation} \label{gluon2}
V(q-p)=g^2
\frac{-g_{\mu\nu}+\frac{(q-p)_{\mu}(q-p)_{\nu}}{(q-p)^2}}
{(q-p)^2-m_g^2+i\epsilon}
\gamma^{\nu}\otimes\gamma^{\mu}
\end{equation}
where the prefactor (including the trace of the color matrices) is simply
absorbed in the coupling constant. For our actual calculation, we use the bare
constituent propagator $S_i(p_i)$ with heavy quark mass $M\equiv m$ (see Appendix A
for this approximation).
Firstly, we follow the standard procedure: after fixing the bound-state mass
($\sqrt{P^2}$), we look for a solution by iterating the BSE for a spectral
function with fixed coupling constant $\alpha=g^2/(4\pi)$.
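Schematically (this is only a sketch of one convenient way to organize the search, under the approximations adopted here, and not necessarily the exact procedure implemented), the kernel of Eq.~(\ref{madrid}) is proportional to $\alpha$ for the interaction (\ref{gluon2}), so one may write $V^{[N]}=\alpha\,K^{[N]}$ and iterate
\[
\rho^{(n+1)}(\tilde{\omega},\tilde{z})\;\propto\;
\int_{0}^{\infty}\! d\omega \int_{-1}^{1}\! dz\;
K^{[N]}(\tilde{\omega},\tilde{z};\omega,z)\,\rho^{(n)}(\omega,z)\,,
\]
normalizing $\rho^{(n+1)}$ at each step; the converged ratio of successive unnormalized iterates then gives $1/\alpha$, i.e., it fixes the coupling for which the prescribed bound-state mass $\sqrt{P^2}$ admits a nontrivial solution.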
Very similarly to the scalar case \cite{SAUADA2003}, the choice $N=2$ for the
power of $F$ in the IR of the bound-state vertex function is the preferred one.
This choice is a reasonable compromise between on the one hand limiting
numerical errors and on the other hand avoiding the computational obstacles
for high $N$. Here we note that using $N=1$ is rather unsatisfactory
(comparing with the massive Wick-Cutkosky model), since then we do not find any
stable solution for a wide class of input parameters $g$, $m_g$. In contrast,
using the value $N=2$ we obtain stable results for all possible interaction
kernels considered here. This includes the cases with vanishing $m_g$, which
means that the numerical problems originally present in the scalar models
\cite{SAUADA2003} are fully overcome here. The details of our numerical
treatment are given in Appendix B.
As is more usual in the nonrelativistic case, we fix the coupling constant
$\alpha=g^2/(4\pi)$ and then look for the bound-state mass spectrum. We find
the same results in either case, whether $P$ or $\alpha$ is fixed first,
noting however that in the latter case the whole integration in the kernel
$K$ needs to be carried out in each iteration step, which makes the problem
computationally more expensive.
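As a non-authoritative illustration of the two strategies, note that in a ladder-type truncation the kernel is assumed to be linear in the coupling, so at fixed bound-state mass the coupling with unit dominant eigenvalue follows from a single diagonalization, whereas at fixed $\alpha$ a one-dimensional root search in the binding fraction is needed. A sketch reusing the routine of the previous listing, with a hypothetical \texttt{kernel\_factory}, reads:
\begin{verbatim}
from scipy.optimize import brentq

def coupling_for_mass(kernel_at_unit_alpha, **grid_kwargs):
    # Fixed sqrt(P^2): if the kernel scales linearly with alpha, the coupling
    # giving a dominant eigenvalue of 1 is simply 1 / lambda(alpha = 1).
    lam, _, _, _ = solve_weight_function(kernel_at_unit_alpha, **grid_kwargs)
    return 1.0 / lam

def binding_for_coupling(kernel_factory, alpha, eta_lo=0.6, eta_hi=0.99,
                         **grid_kwargs):
    # Fixed alpha: search the binding fraction eta = sqrt(P^2)/(2M) at which
    # the dominant eigenvalue crosses 1 (the bracket must enclose the root);
    # the kernel has to be rebuilt at every trial eta, hence the higher cost.
    def excess(eta):
        lam, _, _, _ = solve_weight_function(kernel_factory(alpha, eta),
                                             **grid_kwargs)
        return lam - 1.0
    return brentq(excess, eta_lo, eta_hi)
\end{verbatim}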
\begin{figure}
\centerline{ \mbox{\psfig{figure=soub3.ps,height=14.0truecm,
width=14.0truecm,angle=0}} }
\caption[99]{\label{figBSE} The rescaled weight function
$\tau=\frac{\rho^{[2]}(\omega,z)}{\omega^2}$ for the following model parameters:
$\eta=0.95$, $m_g=0.001M$, $\alpha_s=0.666$; the small mass $m_g$
approximates the one-gluon-exchange interaction kernel. }
\end{figure}
The obtained solutions for varying $\alpha$ and mass $m_g$, with
a fixed fractional binding $\eta=\sqrt{P^2}/(2M)=0.95$, are given in Table~1.
If we fix the gluon mass at $m_g=0.5$ and vary the fractional binding $\eta$,
we obtain the spectrum of Table~2.
\begin{center}
\small{\begin{tabular}{|c|c|c|c|c|}
\hline \hline
$m_g/m_q $ & $10^{-3}$ & 0.01 & 0.1 & 0.5 \\
\hline
$\alpha$ & 0.666 & 0.669 & 0.745 & 1.029 \\
\hline \hline
\end{tabular}}
\end{center}
\begin{center}
TABLE 1. Coupling constant $\alpha_s=g^2/(4\pi )$ for
several choices of $ m_g/M$,
with given binding fraction $\eta=\sqrt{P^2}/(2M)=0.95$.
\end{center}
\begin{center}
\small{\begin{tabular}{|c|c|c|c|c|}
\hline \hline
$\eta: $ &0.8 & 0.9 & 0.95 & 0.99 \\
\hline
$\alpha$ &1.20 & 1.12 & 1.03 & 0.816 \\
\hline \hline
\end{tabular}}
\end{center}
\begin{center}
TABLE 2. Coupling $\alpha_s=g^2/(4\pi )$ as a
function of binding fraction $\eta=\sqrt{P^2}/(2M)$, for
exchanged massive gluon with $m_g=0.5M$.
\end{center}
For illustration, the weight function $\tilde{\rho}^{[2]}$ is
displayed in Fig.~\ref{figBSE}.
\section{ Summary and Conclusions}
The main result of the present paper is the development of a technical
framework to solve the bound-state BSE in Minkowski
space. In order to obtain the spectrum, no preferred reference frame is
needed, and the wave function can be obtained in an arbitrary frame
--- without numerical boosting --- by a simple integration of the weight
function.
The treatment is based on the usage of an IR for the
Green's functions of a given theory, including the bound-state vertices
themselves. The method has been explained and checked numerically on the
samples of pseudoscalar fermion-antifermion bound states. It was shown
that the momentum-space BSE can be converted into a real equation for a
real weight function $\rho$, which is easily solved numerically.
The main motivation of the author was to develop a practical tool respecting
the self-consistency of DSEs and BSEs. Generalizing this study to other mesons,
such as vectors and scalars, and considering more general flavor or isospin
structures, with the simultaneous improvement of the approximations
(correctly dressed gluon propagator, dressed vertices, etc.), will be an
essential step towards a fully Lorentz-covariant description of a plethora of
transitions and form factors in the time-like four-momentum region.
\
\
{\Large{ \bf Acknowledgments}}
\
I would like to thank George Rupp for his careful reading of the manuscript.
\section{Introduction}
Wireless sensor networks (WSNs) have attracted attention from wireless network research communities for more than a decade \cite{Akyildiz:2002:WSN:Survey}. Their abilities to collect environmental data and, to effectively and wirelessly send those data back to central, processing nodes, have been identified by research work such as \cite{senvm}. Recently, with the emerging Internet of Things (IoT) \cite{stankovic_iot}, researchers foresee its potentials to bring a new generation of the Internet where things are connected to engender unprecedented intelligence that facilitates people's daily lives, enterprise's tasks, and city-wide missions. WSNs are an integral part of the IoT because they are the sources of sensed data to be processed and analyzed by the computing clouds. Therefore, WSNs in the IoT are expected to handle operations with more density, diversity, and tighter integration.
WSNs often demand that sensed data be incorporated with timestamps so that the systems can fuse, distinguish and sequence the data consistently. Therefore, several protocols proposing to synchronize a global time in WSNs include \cite{Maroti:2004:FTS:1031495.1031501,5211944,Apicharttrisorn:2010:EGT:1900724.1901046}. However, some systems do not require a global notion of time; they demand sensor nodes to sample data at the same time to take sequential snapshots of an environment. Such systems include \cite{Werner-Allen:2005:FSN:1098918.1098934}, which is the first protocol to achieve this task of ``synchronicity''. Desynchronization is an \emph{inverse} of synchronicity because it requires nodes \emph{not} to work at same time; hence, desynchronization can provide nodes with collision-free and even equitable access to a shared resource. A concrete example is a system using a Time Division Multiple Access or TDMA protocol, in which nodes utilize a shared medium at different time slots to avoid collision. In addition, desynchronization can schedule duty cycles of sensor nodes; in other words, nodes covering the same sensing area take turns waking up to sense the environment while others are scheduled to sleep to save the limited energy. Other potential applications of desynchronization include techniques to increase a sampling rate in multiple analog-to-digital converters, to schedule resources in multi-core processors, and to control traffic at intersections \cite{4274893}.
In this paper, we propose a stable desynchronization algorithm for multi-hop wireless sensor networks called Multi-hop Desynchronization With an ARtificial Force field or M-DWARF. We use TDMA to validate our algorithm and evaluate its performance. M-DWARF uses the basic concepts of artificial force fields in DWARF \cite{Choochaisri:2012:DAF:2185376.2185378}. However, to support multi-hop networks and to avoid their hidden terminal problems, M-DWARF adds two mechanisms called \textit{Relative Time Relaying} and \textit{Force Absorption}. With these features added, M-DWARF is able to desynchronize multi-hop networks without collision from hidden terminals while maintaining maximum channel utilization. We evaluate M-DWARF's functionality on TelosB motes \cite{telosb} and its performance on TOSSIM \cite{Levis:2003:TAS:958491.958506}, a TinyOS simulator. We compare M-DWARF with Extended Desynchronization for Multi-hop Networks or EXTENDED-DESYNC (also referred to as EXT-DESYNC) \cite{MK09DESYNC} and Lightweight coloring and desynchronization for networks or LIGHTWEIGHT \cite{5062165} on several topologies of multi-hop wireless networks. According to the simulation results, M-DWARF is the only desynchronization algorithm that has all three properties: fast convergence, high stability, and maintained fairness. Moreover, in \cite{Choochaisri:2012:DAF:2185376.2185378}, we prove that desynchronization using artificial force fields is a convex function; that is, it converges to a global minimum without local ones. In addition, our stability analysis (in a supplementary document) proves that once DWARF or M-DWARF systems reach a state of desynchrony, they become stable. In other words, DWARF or M-DWARF provides a static equilibrium of desynchronized force fields at steady states. Once nodes in the systems deviate from a state of desynchrony, they will attempt to return to the balance of the forces immediately. Our stability analysis not only suggests the stability of our desynchronization algorithms but also proves that the systems will eventually converge to an equilibrium.
In the next section, we briefly explain how our proposed desynchronization, M-DWARF, works and describe its contributions. In Section \ref{sec:related_work}, we survey the related literature on desynchronization in temporal and spatial domains, as well as TDMA scheduling algorithms. Section \ref{sec:desync_algo} recalls the basic concepts of DWARF and then explains the M-DWARF algorithm in detail. Finally, Appendix \ref{sec:psuedocode_m_dwarf} shows the pseudocode of M-DWARF.
\section{Contributions}
\label{sec:contribution}
In order to understand the contributions of the M-DWARF algorithm, it is important to know how it basically works.
In addition to the mechanisms of DWARF, M-DWARF has two mechanisms, named relative time relaying and force absorption, to support multi-hop topologies. The algorithmic steps of M-DWARF can be enumerated as follows.
\begin{enumerate}
\item Nodes, which are not initially desynchronized, set a timer to fire in $T$ time units and listen to all one-hop neighbors.
\item Upon receiving a firing message from a one-hop neighbor, the node marks the current time to be the relative phase reference. Then, it reads the relative phases of its two-hop neighbors, which are included within the firing message. After that, it calculates its two-hop neighbors' phases by using the relative phase reference as the offset. The details are explained in Section \ref{sec:relative}.
\item When the timer, $T$, expires, a node broadcasts a firing message containing relative phases of its one-hop neighbors. Then, it calculates a new time phase to move on the phase circle, which is based on the summation of artificial forces from all phase neighbors within two hops where some forces are absorbed as explained in Section \ref{sec:absorption}. Then, the node sets a new timer according to the new calculated phase.
\item Similar to DWARF, a node adjusts its phase by using Eq. \ref{eq:newphase} with $K$ calculated from Eq. \ref{eq:arbitrary_T}.
\end{enumerate}
All nodes in the artificial force field (in the period circle) iteratively run the same algorithm until the force is balanced.
The pseudo-code of this algorithm is shown in Appendix \ref{sec:psuedocode_m_dwarf}. The contributions of our proposed desynchronization algorithms are listed as follows.
\paragraph{Autonomous operations} M-DWARF works without any master or root nodes, so it does not need to elect or select such nodes and also does not have a single point of failure in this sense. Moreover, nodes do not need knowledge about network topology; they only have to know their one-hop and two-hop neighbors in order for the desynchronization to function correctly. In addition, M-DWARF is easy to deploy because it works independently without any deployment setup. Finally, M-DWARF adapts itself very well to dynamics such as nodes leaving or joining.
\paragraph{Determinism} M-DWARF does not use any random operation, so its protocol behavior is deterministic and predictable.
\paragraph{Throughput} Thanks to its high stability, at a state of desynchrony, a node's time slot firmly stays at the same position in the next iteration, causing no or minimal interference with adjacent time slots. As a result, M-DWARF requires less \emph{guard time} between slots, allowing nodes to fully utilize the resource or medium because of its stable desynchronization. Without stability, the beginning and the end of the frames are likely to collide or interfere with the adjacent ones because time slots of the next iteration do not strictly stay at the same position. Moreover, M-DWARF requires low overhead. In each time period \textit{T}, each node only broadcasts a desynchronization message containing one-hop and two-hop neighbor information; in other words, it does not need any two-way or three-way handshakes like many other protocols.
\paragraph{Delay} A node that starts to join the desynchronization network needs to broadcast a desynchronization message to declare itself to the network and occupy its own time slot. Therefore, it has to wait one time period to send the first data frame. By determining the slot size and transmission bitrate, the node can predict how many iterations it needs to finish transmitting one packet. In consequence, all the nodes in the network route can share information regarding the end-to-end delay of a packet traversal. Therefore, upper and lower limits of such a delay can be determined.
\paragraph{Interoperability} Because M-DWARF does not assume any radio hardware specifications or MAC-layer techniques, it is highly interoperable with open standards. It assumes only that all nodes can transmit and receive data at the same spectrum. Although WSNs are a target platform of this paper, we argue that our desynchronization and TDMA algorithms can be applied to generic wireless multi-hop networks \cite{Sgora:2015:TDMA} or wireless mesh networks \cite{Vijayalayan:2013:Scheduling}.
\paragraph{Complexity} M-DWARF has low complexity for the following three reasons. First, it does not require time synchronization between nodes. Second, there are no handshakes; only one-way broadcasting is necessary. Third, its computational complexity depends only on the number of one-hop and two-hop neighbors, instead of the entire network size.
\paragraph{Energy Efficiency} Because of M-DWARF's high stability, data frames of adjacent time slots are less likely to collide or interfere with each other. Therefore, it has lower probability for packet retransmission, which is a factor for energy inefficiency in networks.
\paragraph{Channel Utilization Fairness} M-DWARF can provide equal access for all the neighbor nodes. The artificial forces are balanced between neighbor nodes, so all the nodes converge to occupy equal slot sizes.
\section{Related Work}
\label{sec:related_work}
Our related work can be divided into three main categories: 1) desynchronization on a temporal domain in wireless networks, 2) desynchronization on a spatial domain in robotics, and 3) TDMA scheduling algorithms. We summarize the properties of the related work in the first category in Table \ref{tab:compared}.
\subsection{Desynchronization on a Temporal Domain in Wireless Networks}
\label{sec:timedesync}
To the best of our knowledge, Self-organizing desynchronization or DESYNC \cite{4379660} is the first work to introduce the desynchronization problem. In DESYNC, a node simply attempts to stay in the middle phase position between its previous and next phase neighbors. By repeating this simple algorithm, all nodes will eventually and evenly be spread out on a temporal ring. However, the error from one phase neighbor is also propagated to the other phase neighbors and is indefinitely circulated inside the network. As a consequence, DESYNC's error is quite high even after convergence. C. M. Lien et al. propose Anchored desynchronization or ANCHORED \cite{anchored}, which uses the same method as DESYNC but requires one anchored node to fix the phase of its oscillator. However, because ANCHORED uses only the phase information of the phase neighbors, it suffers from a desynchronization error similar to that of DESYNC. In contrast, our work relies on all received neighbors' information and is therefore more robust to the error from one phase neighbor. In \cite{4663417}, the authors describe how DESYNC works on multi-hop networks and explain an extension for DESYNC by exchanging two-hop neighbor information.
Inversed Mirollo-Strogatz or INVERSE-MS \cite{4274893}, designed to converge faster than DESYNC, is an inverse algorithm of the synchronicity work by \cite{MS1990}.
At a steady state, INVERSE-MS maintains a dynamic equilibrium (\textit{i.e.}, nodes keep changing time positions while maintaining desynchronization). However, in INVERSE-MS, the time period is distorted whereas our algorithm does not distort the time period.
In Extended Desynchronization or EXT-DESYNC \cite{MK09DESYNC}, the authors propose a desynchronization algorithm that is similar to the extension proposed in \cite{4663417}. Each node sends its one-hop neighbors' relative time information to all of its one-hop neighbors.
Then, the one-hop neighbors relay such information to two-hop neighbors so that each node knows two-hop relative time information.
Consequently, each node can presume that there are two-hop neighbors appearing on the time circle.
Therefore, each node uses time information of both one-hop and two-hop neighbors to desynchronize with the same algorithm as in DESYNC. One mechanism in our multi-hop algorithm proposed in this paper is partly based on this notion.
M-DESYNC \cite{5062256} is a localized multi-hop desynchronization algorithm that works on a granularity of time slots. This protocol uses a graph coloring model for solving desynchronization. It starts by estimating the required number of time slots with a two-hop maximum degree or the maximum number of colors. This estimation allows nodes in M-DESYNC to immediately choose each predefined slot or color and helps M-DESYNC converge very fast. However, M-DESYNC requires that all nodes have a global notion of time in order to share the common perception of time slots. Furthermore, M-DESYNC claims that it works only on acyclic networks. On the contrary, our algorithm does not require a global notion of time and can work on both acyclic and cyclic networks.
A. Motskin et al. \cite{5062165} propose a simple, lightweight desynchronization algorithm, namely LIGHTWEIGHT, that is also based on a graph coloring model. Unlike M-DESYNC, the algorithm works on general graph networks and does not need the global time. To ensure that the selected time slot does not overlap with others', a node needs to listen to the shared medium for a full time period before claiming the slot. The listening mechanism can only avoid collision with one-hop neighbors but cannot avoid collision with two-hop neighbors (\textit{i.e.}, the hidden terminal problem). On the contrary, our algorithm works well on multi-hop networks; each node can effectively avoid collision with two-hop neighbors.
Furthermore, without a common notion of time, the starting time of each slot is quite random; as a result, several time gaps are too small to be used as time slots. This external fragmentation problem severely reduces the resource utilization of the system. Finally, to converge faster, their algorithm overestimates the number of required time slots. Hence, several large time gaps are also left unused and the resource utilization is significantly lowered. In our work, nodes gradually adapt their phases to be separated from each other as far as possible. Therefore, the external fragmentation problem is reduced and the resource utilization is maximized.
T. Pongpakdi et al. propose Orthodontics-inspired Desynchronization or DESYNC-ORT \cite{desyncort}. In their work, they use information from all one-hop neighbors and attempt to find nodes that are already in correct time positions and tie them up together. This method is similar to the Orthodontics method that attempts to tie teeth which are already in correct positions together.
The desynchronization errors of DESYNC-ORT are lower than those of DESYNC.
However, to calculate the correct positions, each node is required to know the total number of nodes in the system in advance. Additionally, the algorithm does not propose to solve the problem in multi-hop networks because nodes in two-hop neighbors cannot be tied together with one-hop neighbors. In contrast, our algorithm does not require nodes to have knowledge about the total number of nodes in advance but gradually and automatically adapts itself based on the current number of neighbors. Finally, our algorithm works on multi-hop networks.
Vehicular-network Desynchronization, abbreviated as V-DESYNC \cite{v-desync}, is proposed to desynchronize nodes in vehicular ad-hoc networks. Their work has a different objective; that is, the algorithm does not focus on fairness (\textit{i.e.}, nodes need not be equitably separated) because vehicular networks are highly dynamic. In our work, we focus on wireless sensor networks with static sensor nodes and we attempt to provide resource utilization fairness among sensor nodes.
Table \ref{tab:compared} summarizes the features of works in this category. Note that the overhead of the proposed algorithm depends on whether the algorithm works on the single-hop or multi-hop mode.
\begin{table*}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Comparison of Desynchronization Protocols}
{
\begin{tabular}{|m{2cm}|m{1.5cm}|m{1.5cm}|m{1.2cm}|m{1.2cm}|m{1.5cm}|m{1.2cm}|m{1.5cm}|m{1cm}|}
\hline
\multirow{2}{*}{Algorithms} & \multicolumn{8}{c|}{Properties}\\
\cline{2-9}
& Period & Time sync & Fair Space & Multi- hop & Conver- gence & Error & Scalable & Over- head\\
\hline
\hline
DESYNC & Fixed & No & Yes & No & Moderate & High & Poor & Zero\\
\hline
ANCHORED & Fixed & No & Yes & No & Moderate & High & Poor & Zero\\
\hline
INVERSE-MS & Distorted & No & Yes & No & Fast & Low & Good & Zero\\
\hline
EXT-DESYNC & Fixed & No & Yes & Yes & Moderate & High & Poor & High\\
\hline
M-DESYNC & Fixed & Required & No & Yes & Fast & High & Good & Low\\
\hline
LIGHT- WEIGHT & Fixed & No & No & Yes & Fast & High & Good & Zero\\
\hline
DESYNC-ORT & Fixed & No & Yes & No & Moderate & Low & Good & Zero\\
\hline
V-DESYNC & Fixed & No & No & No & No & High & Good & Zero\\
\hline
DWARF & Fixed & No & Yes & No & Moderate & Low & Good & Very Low\\
\hline
M-DWARF (Proposed) & Fixed & No & Yes & Yes & Moderate & Low & Good & Low\\
\hline
\end{tabular}
}
\label{tab:compared}
\end{table*}
\subsection{Desynchronization on a Spatial Domain in Robotics}
\label{sec:spacedesync}
\begin{figure}
\centering
\subfloat[Robotic Close Ring]{
\label{fig:subfig:robotic_closed_ring}
\includegraphics[width=1.2in]{figure/robotic-closed-ring}
}
\hspace{1.5cm}
\subfloat[Robotic Perfect Close Ring]{
\label{fig:subfig:robotic_closed_ring_perfect}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-perfect}
}
\caption{Robotic pattern formation on a closed ring. (\ref{fig:subfig:robotic_closed_ring}) Robots are randomly placed on a closed ring. (\ref{fig:subfig:robotic_closed_ring_perfect}) In the perfect configuration, robots are equitably separated from each other.}
\label{fig:robotic_ring}
\end{figure}
In robotic pattern formation, multiple robots distributedly group and align themselves in geometric patterns such as circles, rectangles, and triangles. Although not usually stated explicitly, robotic pattern formation can be abstracted as desynchronization on a spatial domain. Robots attempt to separate away from each other as far as possible to form such patterns; in other words, robots desynchronize themselves spatially to avoid collision with each other in the spatial domain.
These two pieces of work \cite{suzuki-96,suzuki-99} investigate several robotic pattern formations. However, the pattern formation that is similar to desynchronization on the temporal domain is the formation on a closed ring.
Figure \ref{fig:robotic_ring} illustrates a robotic formation on a closed ring. In Figure \ref{fig:subfig:robotic_closed_ring}, robots are initially placed at random positions on the closed ring. The perfect configuration of the formation is illustrated in Figure \ref{fig:subfig:robotic_closed_ring_perfect}; that is, robots are equally separated on the ring.
Other papers such as \cite{defago04,cohen-08,flocchini-08} propose similar algorithms for robotic formation on a closed ring, assuming that robots have a limited visibility range. Each robot attempts to adjust its position to the midpoint between the two nearest robots on its left and right sides (Figure \ref{fig:closedring-desync}). In their papers, they prove that this simple algorithm eventually drives the robotic formation to the perfect configuration (Figure \ref{fig:closedring-desync-perfect}).
\begin{figure}
\centering
\subfloat[A Robotic Move]{
\label{fig:closedring-desync}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-desync}
}
\hspace{1.5cm}
\subfloat[Convergence to Perfect Configuration]{
\label{fig:closedring-desync-perfect}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-desync-perfect}
}
\caption{Moving to the midpoint algorithm. (a) Each robot moves to the midpoint between the two nearest visible neighbors. (b) The algorithm converges to the perfect configuration.}
\label{fig:robotic-closed-ring-desync}
\end{figure}
In \cite{4141997}, heterogeneous robots are grouped in a distributed manner into teams that are equally spread out to cover the monitored area. Each robot has no global knowledge of others' absolute positions but can detect relative positions of the others with respect to itself as well as the type of the others.
To form a circle, an artificial force is used as an abstraction for velocity adaptation of a robot.
Robots of different types have attractive forces to each other, while robots of the same type have repulsive forces. As a result, a circle of heterogeneous robots is formed and robots are spaced on the circle (see Figure \ref{fig:circular_formation}). This work inspires our desynchronization algorithm.
\begin{figure}
\centering
\includegraphics[width=3.0in]{figure/robo_new}
\caption{Results of Robotic Circular Formation. Robots with two different types form the circle.}
\label{fig:circular_formation}
\end{figure}
\subsection{TDMA Scheduling Algorithms}
\label{sec:tdma}
Other works that are related to desynchronization protocols are distributed Time Division Multiple Access (TDMA) protocols. Distributed TDMA protocols are similar to M-DESYNC \cite{5062256}; their protocols work on a granularity of time slots. Similar to M-DESYNC, many of the distributed TDMA protocols such as TRAMA \cite{trama}, Parthasarathy \cite{Parthasarathy}, ED-TDMA \cite{edtdma}, and Herman \cite{herman} assume time is already slotted or all nodes are synchronized to achieve the same global clock.
In our work, we do not require time synchronization and do not assume already slotted time.
S. C. Ergen et al. \cite{Ergen:2010:TDMAAlgo} propose node-based and level-based TDMA scheduling algorithms for WSNs. Their techniques mainly derive from graph coloring algorithms in which the \textit{colors} must have been predefined. In contrast, our desynchronization algorithms never predefine slots but rather allow nodes to adjust their slots with those in the neighborhood on-the-fly.
K. S. Vijayalayan et al. \cite{Vijayalayan:2013:Scheduling} survey distributed scheduling techniques for wireless mesh networks and A. Sgora et al. \cite{Sgora:2015:TDMA} provide an extensive survey of recent advances in TDMA algorithms in wireless multi-hop networks.
Both surveys give a comprehensive overview of the TDMA scheduling algorithms and techniques that are still being investigated and further developed by wireless network researchers worldwide.
\section{DWARF and M-DWARF Desynchronization Algorithms}
\label{sec:desync_algo}
In this section, we briefly introduce the concept of an \textit{artificial force field} and our previous work on desynchronization for single-hop networks \cite{Choochaisri:2012:DAF:2185376.2185378}, because these basic concepts are necessary to understand the proposed algorithm for multi-hop networks.
\subsection{Desynchronization Framework, Artificial Force Field and DWARF Algorithms}
\label{AFF_DWARF}
\subsubsection{Desynchronization Framework}
The desynchronization framework is depicted as a time circle in Figure \ref{fig:time_circle}.
The perimeter of a time circle represents a configurable time period $T$ of nodes' oscillators.
The time position or \textit{phase} of each node represents its turn to perform a task (\textit{e.g.}, accessing a shared resource, sampling data, or firing a message).
The system is desynchronized when all nodes are separated in the time circle. We define the terms used in the desynchronization context as follows.
\begin{figure}
\centering
\includegraphics[width=3.2in]{figure/time_circle_new}
\caption{Desynchronization framework: $\phi_1$ and $\phi_2$ are phases of node 1 and 2 respectively. While all the four nodes are phase neighbors to each other, node 2 and 4 are the previous and next phase neighbor of node 1 respectively. The left figure shows a desynchrony state that will converge to the perfect desynchrony state as in the right figure.}
\label{fig:time_circle}
\end{figure}
\begin{definition}[Phase]
A phase $\phi_i$ of node $i$ is the time position on the circle of a time period $T$, where $0 \leq \phi_i < T$ and $T \in \mathbb{R}^+$.
\end{definition}
\begin{definition}[Phase Neighbor]
Node $j$ is a phase neighbor of node $i$ if node $i$ perceives the existence of node $j$ through reception of $j$'s messages at the phase $\phi_i + \phi_{i,j}$, where $\phi_{i,j}$ is the phase difference between node $j$ and node $i$,
\begin{equation}
\phi_{i,j} = \left\{
\begin{array}{l l}
\phi_j - \phi_i & \quad \mbox{if $\phi_j \geq \phi_i$,}\\
T - (\phi_i - \phi_j) & \quad \mbox{if $\phi_j < \phi_i$.}\\ \end{array} \right.
\end{equation}
\end{definition}
\begin{definition}[Next Phase Neighbor]
Node $j$ is the next phase neighbor of node $i$ if $\phi_{i,j} = \underset{k \in S}{\min}\{\phi_{i,k}\}$, where $S$ is a set of node $i$'s neighbors.
\end{definition}
\begin{definition}[Previous Phase Neighbor]
Node $j$ is the previous phase neighbor of node $i$ if \\$\phi_{i,j} = \underset{k \in S}{\max}\{\phi_{i,k}\}$, where $S$ is a set of node $i$'s neighbors.
\end{definition}
\begin{definition}[Desynchrony State]
The system is in a desynchrony state if $\phi_i \neq \phi_j$ for all $i, j \in V$ and $i \neq j$, where $V$ is a set of nodes in a network that cannot share the same phase.
\end{definition}
\begin{definition}[Perfect Desynchrony State]
The system is in the perfect desynchrony state if it is in the desynchrony state and $\phi_{i,j} = T/N$ for all $i \in V$, $j$ is $i$'s previous phase neighbor, and $N$ is the number of nodes in a network that cannot share the same phase.
\end{definition}
We note that two nodes can share the same phase if they are not within the two-hop communication range of each other.
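A small, self-contained illustration of Definitions 2--4 (plain Python, arbitrary example phases) is the following:
\begin{verbatim}
T = 1000.0                                          # time period
phases = {1: 100.0, 2: 350.0, 3: 600.0, 4: 850.0}   # example phases

def phase_diff(phi_i, phi_j, T):
    """Phase difference phi_{i,j} of Definition 2."""
    return phi_j - phi_i if phi_j >= phi_i else T - (phi_i - phi_j)

def next_and_previous_neighbor(i, phases, T):
    diffs = {j: phase_diff(phases[i], phases[j], T)
             for j in phases if j != i}
    nxt = min(diffs, key=diffs.get)   # Definition 3: smallest phi_{i,j}
    prv = max(diffs, key=diffs.get)   # Definition 4: largest phi_{i,j}
    return nxt, prv

# For node 1, the next phase neighbor is node 2 and the previous one is node 4.
print(next_and_previous_neighbor(1, phases, T))
\end{verbatim}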
\subsubsection{Artificial Force Field}
\label{AFF}
An artificial force field is an analogy to the circle of a time period where nodes have repulsive forces to each other.
Nodes are in the same force field if they can communicate with each other or share the same medium.
If node $i$ and node $j$ are on the same force field, they have repulsive forces to push one another away.
A closer pair of nodes has a higher magnitude of force than a farther pair does.
The time interval between two nodes is derived from the phase difference between them.
If two nodes have a small phase difference, they have a high magnitude of force and vice versa.
In other words, a repulsive force is an inverse of a phase difference between two nodes:
\begin{equation}
f_{i,j} = - \frac{1}{\Delta \phi_{i,j} / T}, \quad \Delta \phi_{i,j} \in (-\frac{T}{2}, \frac{T}{2}),
\label{eq:force}
\end{equation}
where $f_{i,j}$ is the repulsive force from node $j$ to node $i$ on a time period $T$ and $\Delta \phi_{i,j}$ is the phase difference between node $i$ and $j$.
We note that $\Delta \phi_{i,j}$ is not equal to 0 because if two nodes fire at the same time, their firings collide and the two nodes do not record each other's firing. Additionally, at $T/2$ or $-T/2$, a node does not repel an opposite node because the forces are balanced.
A repulsive force can be positive (clockwise repulsion) or negative (counterclockwise repulsion).
A positive force is created by a node on the left half of the circle relative to the node being considered whereas a negative force is created by a node on the right half.
Figure \ref{fig:force_field} represents a field of repulsive forces on node 1.
\begin{figure}
\centering
\includegraphics[width=1.5in]{figure/forcefield}
\caption{Artificial Force Field. Arrow lines represent repulsive forces from node 2, 3, and 4 to node 1. A shorter and thicker line is a stronger force. A force from node 4 is a positive force and two forces from node 2 and 3 are negative forces.}
\label{fig:force_field}
\end{figure}
Each node in the force field moves to a new time position or phase proportional to the total received force.
Given $n$ nodes in a force field, the total force on a node $i$ is the following:
\begin{equation}
\mathcal{F}_i = \sum_{\substack{j=1\\ j \neq i}}^{n}{f_{i,j}}.
\label{eq:fsum}
\end{equation}
Eventually, nodes reach an equilibrium where the total force of the system is close to zero and each pair of phase neighbor nodes has the same time interval.
This equilibrium state also indicates the perfect desynchrony state because all nodes are equally spaced on the time circle.
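For concreteness, Eqs.~(\ref{eq:force}) and (\ref{eq:fsum}) can be evaluated as in the short sketch below (plain Python, arbitrary example phases); forces from nodes exactly opposite on the circle are skipped, in line with the remark above.
\begin{verbatim}
# Sketch of Eqs. (force) and (fsum): repulsive forces on a single node.
T = 1000.0
phases = {1: 0.0, 2: 150.0, 3: 380.0, 4: 700.0}   # node id -> example phase

def wrapped_diff(phi_i, phi_j, T):
    """Signed phase difference Delta phi_{i,j} mapped into (-T/2, T/2)."""
    d = (phi_j - phi_i) % T
    return d - T if d > T / 2 else d

def total_force(i, phases, T):
    F = 0.0
    for j, phi_j in phases.items():
        if j == i:
            continue
        d = wrapped_diff(phases[i], phi_j, T)
        if d == 0 or abs(d) == T / 2:     # collisions / balanced opposites
            continue
        F += -1.0 / (d / T)               # Eq. (force)
    return F                              # Eq. (fsum)

print(total_force(1, phases, T))
\end{verbatim}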
\subsubsection{DWARF, the Single-Hop Desynchronization Algorithm}
\label{DWARF}
We assume that, initially, nodes are not desynchronized and each node sets a timer to fire in $T$ time units.
After setting the timer, each node listens to all its neighbors until the timer expires.
When receiving a firing message from its neighbor, the (positive or negative) repulsive force from that neighbor is calculated based on the phase difference.
When the timer expires, a node broadcasts a firing message to neighbors.
Then, the node calculates a new time phase to move on the circle based on the summation of forces from all neighbors and sets a new timer according to the new time phase.
It is reasonable now to question how far a node should move or adjust its phase.
In our work, given the total received force $\mathcal{F}_i$, the node $i$ adjusts to a new time phase $\phi_i^{'}$,
\begin{equation}
\phi_i^{'} = (\phi_i + K\mathcal{F}_i) \mod T,
\label{eq:newphase}
\end{equation}
where $\phi_i$ is the current phase of the node $i$.
Undoubtedly, the proper value of the coefficient $K$ leads to the proper new phase.
The value of $K$ is similar to a step size which is used in artificial intelligence techniques.
Therefore, if the value of $K$ is too small, the system takes much time to converge.
On the other hand, if the value of $K$ is too large, the system may overshoot the optimal value and fail to converge.
We observe that, given the same time period, fewer nodes in the system result in a bigger phase difference between two phase neighbors. To be desynchronized, nodes in sparse networks must make a larger adjustment to their time phases than nodes in dense networks.
Therefore, the same total received force should have a greater impact on a node in sparse networks than on a node in dense networks.
To reflect this observation, the coefficient $K$ is inversely proportional to a power of the number of nodes $n$,
\begin{equation}
K = c_1 \times n^{-c_2}, \text{ where } c_1, c_2 \geq 0.
\end{equation}
Therefore, we have conducted an experiment to find the proper values of $c_1$ and $c_2$.
We set a time period $T$ to 1000 and vary the number of nodes.
For each specific number of nodes, we first run simulations to see the trend of the $K$ values that lead to small errors.
Then, we select a range of good $K$ values and simulate 100 times to obtain the average desynchronization error for each $K$ value.
In each simulation, we randomly set an initial phase of each node between 0 and $T$ (period value).
Finally, we select the $K$ value that results in the lowest error.
After getting the proper $K$ value for each number of nodes, we plot the relation between $K$ and the number of nodes (Figure \ref{fig:relation_k_n}) and use a mathematical tool to calculate the power regression. The obtained relation function between $K$ and $n$ (the trendline in Figure \ref{fig:relation_k_n}) consists of $c_1$ and $c_2$ values as follows:
\begin{equation}
K = 38.597 \times n^{-1.874}. \nonumber
\end{equation}
\begin{figure}
\centering
\includegraphics[width=2.2in]{figure/k-trendline}
\caption{Relation of the coefficient $K$ with a number of nodes $n$}
\label{fig:relation_k_n}
\end{figure}
However, this $K$ value is derived by setting $T$ equal to 1000.
Therefore, for an arbitrary value of $T$,
\begin{equation}
\label{eq:arbitrary_T}
K = 38.597 \times n^{-1.874} \times \frac{T}{1000}.
\end{equation}
The proof of Eq. \ref{eq:arbitrary_T} can be found in \cite{Choochaisri:2012:DAF:2185376.2185378}. Moreover, in \cite{Choochaisri:2012:DAF:2185376.2185378}, we also prove that the force function of DWARF has the convexity property; that is, it has one global minima and no local minima. Additionally, in this paper, we provide stability analysis of DWARF and M-DWARF in the Supplementary Material.
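Before turning to the multi-hop case, the single-hop update can be summarized in the following self-contained simulation sketch (Python, synchronous update rounds, no radio effects); it is only an illustration of Eqs.~(\ref{eq:force}), (\ref{eq:newphase}), and (\ref{eq:arbitrary_T}), not the deployed protocol, which updates nodes asynchronously when their timers fire.
\begin{verbatim}
import random

def dwarf_simulation(n=8, T=1000.0, rounds=200, seed=0):
    """Single-hop DWARF: all nodes see all others; synchronous rounds."""
    random.seed(seed)
    phases = [random.uniform(0, T) for _ in range(n)]
    K = 38.597 * n ** (-1.874) * (T / 1000.0)            # Eq. (arbitrary_T)
    for _ in range(rounds):
        new_phases = []
        for i in range(n):
            F = 0.0
            for j in range(n):
                if j == i:
                    continue
                d = (phases[j] - phases[i]) % T
                d = d - T if d > T / 2 else d             # Delta phi in (-T/2, T/2)
                if d != 0 and abs(d) != T / 2:
                    F += -1.0 / (d / T)                   # Eq. (force)
            new_phases.append((phases[i] + K * F) % T)    # Eq. (newphase)
        phases = new_phases
    gaps = sorted((p - phases[0]) % T for p in phases)
    gaps.append(gaps[0] + T)                              # close the circle
    return [b - a for a, b in zip(gaps, gaps[1:])]        # ideally ~ T/n each

print(dwarf_simulation())
\end{verbatim}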
\subsection{M-DWARF, the Multi-hop Desynchronization Algorithm (Proposed)}
\label{sec:m_dwarf}
In this section, we extend the artificial force field concept to desynchronization in multi-hop networks. We begin with applying DWARF directly to a simple multi-hop topology and find out how the algorithm fails on such a topology in Section \ref{sec:hidden-terminal}. Then, we propose two simple yet effective resolutions, relative time relaying and force absorption, to extend DWARF for multi-hop networks in Section \ref{sec:relative} and \ref{sec:absorption} respectively. Additionally, we provide pseudo-code of M-DWARF in Appendix \ref{sec:psuedocode_m_dwarf}.
\subsubsection{The Hidden Terminal Problem}
\label{sec:hidden-terminal}
To see how DWARF works on a multi-hop network, we set a simple 3-node chain topology as illustrated in Figure \ref{fig:3nodes-chain-hidden}. In this topology, node 1 can receive firing messages from node 2 but cannot receive from node 3; on the other hand, node 3 can receive firing messages from node 2 but cannot receive from node 1. However, node 2 can receive firing messages from both node 1 and 3 which are not aware of each other's transmission. This simple scenario causes messages to collide at node 2.
\begin{figure}
\centering
\includegraphics[width=2.5in]{figure/3nodes-chain-hidden}
\caption{The hidden terminal problem.}
\label{fig:3nodes-chain-hidden}
\end{figure}
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=2in]{figure/3nodes-chain-dwarf}
\label{fig:3nodes-chain-dwarf}
}
\hspace{1.0cm}
\subfloat[]{
\includegraphics[width=2in]{figure/3nodes-chain-expected}
\label{fig:3nodes-chain-expected}
}
\caption{(a) shows noisy phases due to message collision at node 2. (b) shows the expected result of the perfect desynchrony state.}
\end{figure}
We simulate DWARF by setting the time period to 1000 milliseconds with nodes starting up randomly.
The simulation result is shown in Figure \ref{fig:3nodes-chain-dwarf}. Node 2's and node 3's phases are plotted relative to node 1's phase, which is fixed at 0. The noisy vertical line is the wrapping-around phase of node 3. The figure shows that node 1 and node 3 fire messages at approximately the same phase, causing message collision at node 2.
However, the expected result (\textit{i.e.}, the perfect desynchrony state) should be that the three nodes are separated equally because all nodes will interfere with each other if they fire messages at the same phase. The expected result is shown in Figure \ref{fig:3nodes-chain-expected}, where the nodes are separated from each other by approximately 1000/3 milliseconds.
The problematic result is caused by the hidden terminal problem as demonstrated in Figure \ref{fig:3nodes-chain-hidden}; node 1 and node 3 are hidden from each other in this multi-hop topology. While node 3 is firing a message, node 1 senses the wireless channel and does not detect any signal from node 3 because node 1 is outside node 3's signal range, and vice versa. Therefore, each of node 1 and node 3 perceives only two nodes in its network: itself and node 2. Consequently, node 1 and node 3 simultaneously attempt to adjust their phases to the opposite side of node 2 on their time circles, which is the same phase. As a result, messages of node 1 and 3 repeatedly collide at node 2.
The hidden terminal problem affects not only the performance of DWARF but also that of DESYNC.
This is because, in DESYNC, a node adjusts its phase based on firing messages from its perceived phase neighbors; therefore, if it fails to learn the phase or presence of its two-hop neighbors, none of their phases is perceived. In \cite{MK09DESYNC}, EXT-DESYNC, an extension of DESYNC, is proposed to solve the hidden terminal problem based on a relative time relaying mechanism. Based on a similar idea, we extend DWARF to support multi-hop topologies.
However, the relative time relaying mechanism alone does not lead DWARF to an optimal solution in some cases. Therefore, this paper also proposes a \textit{force absorption} mechanism for extending DWARF to support multi-hop networks more efficiently.
\subsubsection{Relative Time Relaying}
\label{sec:relative}
The first idea to solve the hidden terminal problem is straightforward. If a node does not know the firing times of its second-hop neighbors, its one-hop neighbors relay such information. Therefore, instead of firing only to notify others of its phase, each node includes its one-hop neighbors' firing times in its firing message.
However, due to our assumption that nodes' clocks are not synchronized, relying on second-hop neighbors' firing timestamps relayed by one-hop neighbors could lead to wrong phase adjustments. This problematic scenario is demonstrated in Figure \ref{fig:broadcast-problem}. Figure \ref{fig:broadcast-problem-topo} illustrates the firing message of node 2 that contains timestamps of its one-hop neighbors, and Figure \ref{fig:broadcast-problem-ring} displays the problem. The inner circle represents the local time of node 1 and the outer circle represents the local time of node 2. The figure indicates that the local reference times (at 0 millisecond) of node 1 and node 2 are different. Therefore, if node 1 uses node 3's firing time relayed by node 2, which is 125 milliseconds, node 1 will misunderstand the exact time phase of node 3. The misunderstood phase of node 3 is depicted as a dashed circle.
\begin{figure}
\centering
\subfloat[]{
\label{fig:broadcast-problem-topo}
\includegraphics[width=1.8in]{figure/broadcast-problem-topo}
}
\hspace{2cm}
\subfloat[]{
\label{fig:broadcast-problem-ring}
\includegraphics[width=1.8in]{figure/broadcast-problem-ring}
}
\caption{A problem of relying on second-hop neighbors' firing timestamps from its one-hop neighbors. (a) shows node 2 firing a message. (b) displays node 1's misperception of phases.}
\label{fig:broadcast-problem}
\end{figure}
This problem can be simply solved by using relative phases instead of actual local timestamps. Each node fires a message that includes relative phases of its one-hop neighbors. A receiving node marks the firing phase of the firing node as a reference phase. Then, the receiving node perceives its second-hop neighbors' phases as relative phase offsets to the reference phase. Figure \ref{fig:broadcast-relative} shows how M-DWARF desynchronizes a three-node multi-hop chain network.
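In code form, the bookkeeping at the receiving node amounts to the following sketch (hypothetical message fields; only the phase arithmetic is shown):
\begin{verbatim}
# Sketch of relative time relaying at a receiving node.
def on_firing_message(my_phase_now, relayed_relative_phases, T):
    """my_phase_now: receiver's local phase at the moment the firing arrives.
    relayed_relative_phases: the sender's one-hop neighbors' phases, expressed
    relative to the sender's own firing phase (as carried in the message)."""
    reference = my_phase_now              # the sender fires "now": reference phase
    perceived = {}
    for node_id, rel_phase in relayed_relative_phases.items():
        perceived[node_id] = (reference + rel_phase) % T   # two-hop phases
    perceived["sender"] = reference
    return perceived
\end{verbatim}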
\begin{figure*}
\centering
\subfloat[]{
\label{fig:broadcast-relative-topo}
\includegraphics[width=1.8in]{figure/broadcast-relative-topo}
}
\hfill
\subfloat[]{
\label{fig:broadcast-relative-ring}
\includegraphics[width=1.8in]{figure/broadcast-relative-ring_new}
}
\hfill
\subfloat[]{
\label{fig:broadcast-relative-perfect}
\includegraphics[width=1.8in]{figure/broadcast-relative-perfect_new}
}
\caption{M-DWARF solves the problem by using one-hop neighbors' relative phases. (a) shows node 2's firing message. (b) shows how node 1 marks the node 2's phase as a reference phase and uses it as an offset for calculating the node 3's phase. (c) Eventually, nodes are in the perfect desynchrony state.}
\label{fig:broadcast-relative}
\end{figure*}
\subsubsection{Force Absorption}
\label{sec:absorption}
As we mentioned earlier, DWARF with the relative time relaying mechanism does not handle some cases correctly.
These are the cases in which at least two second-hop neighbors can share the same phase without interference. For example, in the 4-node chain network illustrated in Figure \ref{fig:4nodes-chain-topo}, node 2 and node 3 are physically more than two hops apart. Therefore, they can fire messages at the same time phase without causing message collisions, as shown in Figure \ref{fig:4nodes-chain-expected}.
However, with relative time relaying alone, node 0 perceives that node 2 and node 3 are at the same phase. Therefore, there are two forces from node 2 and node 3 that repel node 0 clockwise but only one force from node 1 that repels node 0 counter-clockwise. Consequently, node 0 cannot stay in the middle between node 1 and the group of node 2 and 3 (see Figure \ref{fig:4nodes-chain-dwarf}).
\begin{figure*}
\centerline{
\subfloat[]{\includegraphics[scale=0.3]{figure/4nodes-chain-topo
\label{fig:4nodes-chain-topo}}
\hfill
\subfloat[]{\includegraphics[scale=0.20]{figure/4nodes-chain-expected
\label{fig:4nodes-chain-expected}}
\hfill
\subfloat[]{\includegraphics[scale=0.20]{figure/4nodes-chain-dwarf
\label{fig:4nodes-chain-dwarf}}
}
\caption{The problem of the DWARF algorithm. (a) displays 4-node chain topology. (b) shows node 0's local view. (c) displays an imperfect desynchrony state.}
\label{fig:4nodes-chain}
\end{figure*}
Therefore, we propose a novel force absorption mechanism for multi-hop desynchronization based on the artificial force field. The objective of this mechanism is to absorb the overwhelming force from at least two nodes that can fire at the same phase without interference.
The mechanism works as follows. A node receives a full repulsive force from the next/previous phase neighbor as in DWARF. However, a force from the second-next / second-previous phase neighbor is partially absorbed by the next / previous phase neighbor. The magnitude of the absorbed force depends on the phase interval between the next / previous and the second-next / second-previous phase neighbors. The closer the second-next / second-previous phase neighbor moves to the next / previous phase neighbor, the lower the magnitude of the absorbed force becomes. Eventually, when the second-next / second-previous phase neighbor moves to the same phase as the next / previous phase neighbor, the additional force from the second-next / second-previous phase neighbor is fully absorbed. Consequently, the magnitude of two forces repelling the considered node is approximately equal to only the magnitude of one force. This principle is applied recursively; that is, the force from the third-next / third-previous phase neighbors is absorbed by the second-next / second-previous phase neighbor, and the force from the fourth-next / fourth-previous phase neighbor is absorbed by the third-next/third-previous phase neighbor, and so forth.
Figure \ref{fig:4nodes-chain-dwarf-absorb} illustrates this mechanism. In Figure \ref{fig:4nodes-chain-dwarf-absorb-split}, the force from node 2 to node 0 is absorbed by node 3 (the absorbed force is displayed as a blurred line). Thus, only a small magnitude of force from node 2 is left acting on node 0. Eventually, in Figure \ref{fig:4nodes-chain-dwarf-absorb-perfect}, node 2 moves to the same phase as node 3 because they do not interfere with each other and the force from node 2 is fully absorbed. Consequently, the network can reach the perfect desynchrony state.
\begin{figure*}
\centerline{
\subfloat[]{\includegraphics[scale=0.2]{figure/4nodes-chain-dwarf-absorb-split
\label{fig:4nodes-chain-dwarf-absorb-split}}
\hfil
\subfloat[]{\includegraphics[scale=0.2]{figure/4nodes-chain-dwarf-absorb-perfect
\label{fig:4nodes-chain-dwarf-absorb-perfect}}
}
\caption{M-DWARF solves the problem with force absorption; the blurred line represents an absorbed force. (a) shows node 2's force being absorbed by node 3. (b) displays the perfect desynchrony state.}
\label{fig:4nodes-chain-dwarf-absorb}
\end{figure*}
Let $f_{i,j}$ be the full repulsive force from node $j$ to node $i$, $f_{i,j}^{'}$ the absorbed force from node $j$ to node $i$, $T$ the time period, and $\Delta \phi_{i,j}$ the phase difference between node $i$ and $j$.
The force function for multi-hop networks is the following:
\begin{alignat}{2}
f_{i,j} &= \frac{1}{\Delta \phi_{i,j} / T}, \text{where }\Delta \phi_{i,j} \in (-\frac{T}{2}, \frac{T}{2}) \nonumber \\
f_{i,i + 1}^{'} &= f_{i,i + 1} \nonumber \\
f_{i, i - 1}^{'} &= f_{i,i - 1} \nonumber \\
f_{i,j}^{'} &= f_{i,x} - f_{i,j},
\label{eq:force-absorb}
\end{alignat}
where $j \notin \left\{i -1, i + 1\right\}$ and $x = (j - \frac{\Delta \phi_{i,j}}{|\Delta \phi_{i,j}|}) \mod n$.
For $f_{i,x}$, if node $j$ repels node $i$ forward, $x$ is $j + 1$. In contrast, if node $j$ repels node $i$ backward, $x$ is $j - 1$. At $T/2$ or $-T/2$, a node does not repel an opposite node because they are balanced.
For example, in Figure \ref{fig:4nodes-chain-dwarf-absorb}, node 0 calculates the force from node 2 as the following:
\begin{alignat}{2}
f_{0,2}^{'} &= f_{0,3} - f_{0,2} \nonumber \\
&= \frac{1}{\Delta \phi_{0,3} / T} - \frac{1}{\Delta \phi_{0,2} / T}. \nonumber
\end{alignat}
Noticeably, if node 2 moves close to node 3, the value of $\Delta \phi_{0,2}$ is close to the value of $\Delta \phi_{0,3}$. Then, the magnitude of force $f_{0,2}$ is reduced.
Finally, when $\Delta \phi_{0,2}$ is equal to $\Delta \phi_{0,3}$ as in Figure \ref{fig:4nodes-chain-dwarf-absorb-perfect}, the magnitude of force $f_{0,2}$ becomes 0; in other words, the force is fully absorbed.
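A compact rendering of Eq.~(\ref{eq:force-absorb}) is sketched below (plain Python); each half of the circle is handled separately, the nearest perceived neighbor on a side contributes its full force, and every farther neighbor is absorbed by the one immediately closer in phase. The sign convention follows Eq.~(\ref{eq:force}), so the returned total can be plugged directly into the phase update of Eq.~(\ref{eq:newphase}).
\begin{verbatim}
def absorbed_total_force(phi_i, neighbor_phases, T):
    """Total force on node i with force absorption (Eq. force-absorb).
    neighbor_phases: perceived phases of one-hop and two-hop neighbors."""
    def wrapped(phi_j):                     # Delta phi_{i,j} in (-T/2, T/2)
        d = (phi_j - phi_i) % T
        return d - T if d > T / 2 else d

    diffs = [wrapped(p) for p in neighbor_phases]
    total = 0.0
    for side_sign in (+1, -1):              # nodes ahead / behind on the circle
        side = sorted(abs(d) for d in diffs
                      if d * side_sign > 0 and abs(d) < T / 2)
        forces = [1.0 / (d / T) for d in side]        # full magnitudes
        for k, f in enumerate(forces):
            mag = f if k == 0 else forces[k - 1] - f  # absorption for k >= 1
            total += -side_sign * mag       # repel away from that side
    return total
\end{verbatim}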
\newpage
\section{Model}
We consider a collection of $N$ three-level V-type atoms located at the same position. We label the ground state as $\ket{1}$ and the two excited states as $\ket{2}$ and $\ket{3}$, and the transition frequency from level $j$ to $i$ as $\omega_{ij}$. A weak drive field, resonantly tuned to $\omega_{21}$, prepares the atomic system in a timed-Dicke state. As the drive field is turned off, we detect the photons emitted from the cloud in the forward direction. In the experiment, the atomic cloud has a finite size, but for theoretical simplicity we can assume it to be a point-like ensemble whose atoms interact with each other through the vacuum field modes. This is because we are measuring the forward scattering, where any phases of the emitted photons due to the atomic position distribution are exactly compensated by the phases initially imprinted on the atoms by the drive field \cite{Scully_2006}. Additionally, the transitions $\ket{1}\leftrightarrow\ket{2}$ and $\ket{1}\leftrightarrow\ket{3}$ interact with the field effectively with the same phase, considering that the atomic cloud size is much smaller than $2\pi c/\omega_{23}$. We note that while the forward-scattered field is collectively enhanced, the decay rate of the atoms arising from interaction with the rest of the modes is not cooperative \cite{Bienaime_2011}.
The atomic Hamiltonian $H_A$ and the vacuum field Hamiltonian $H_F$ are
\eqn{\begin{split}
H_A &= \sum_{m=1}^{N}\sum_{j=2,3} \hbar \omega_{j1} \hat{\sigma}_{m,j}^+ \hat{\sigma}_{m,j}^-,\\
H_F &= \sum_{k} \hbar \omega_{k} \hat{a}_{k}^{\dagger} \hat{a}_{k},
\label{eq:H-0}
\end{split}}
where $\hat{\sigma}_{m,j}^{\pm}$ is the raising/lowering operator acting on $m^\mr{th}$ atom and $j^\mr{th}$ level, $\hat{a}_k^{\dagger}$ and $\hat{a}_k$ are the field creation/annihilation operators of the corresponding frequency mode $\omega_{k}$, and $N$ refers to the effective number of atoms acting cooperatively in the forward direction.
First, we prepare the atomic system by a weak drive field. The atom-drive field interaction Hamiltonian is
\eqn{H_{\text{AD}}=-\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \hat{\sigma}_{m,j}^+ e^{-i \omega_D t} + \hat{\sigma}_{m,j}^- e^{i \omega_D t} }.\label{eq:H-AD}}
Here, $\omega_D$ is the drive frequency and $\Omega_j^m \equiv \vec{d}_{j1}^m\cdot\vec{\epsilon_{D}}\,E_D$ is the Rabi frequency of $j^\mr{th}$ level, where $\vec{d}_{j1}^{m}$ is the dipole moment of $\ket{j}\leftrightarrow\ket{1}$ transition of $m^\mr{th}$ atom, $\vec{\epsilon}_D$ is the polarization unit vector of the drive field, and $E_D$ is the electric field of the drive field. Given that the atomic ensemble is driven with the common field in our experiment, we will assume that the atomic dipoles are aligned with the drive and each other. We can thus omit the atomic labels to write $\Omega_j$.
The interaction Hamiltonian describing the atom-vacuum field interaction, under the rotating wave approximation, is given as
\eqn{
H_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{m,j}(\omega_k) \bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k} + \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}}.
}
Here, the atom-field coupling strength $g_{m,j}(\omega_k) \equiv \vec{d}_{j1}^{m} \cdot \vec{\epsilon}_k\sqrt{\frac{\omega_k}{2\hbar\varepsilon_0 V}}$,
where $\vec{\epsilon}_k$ is the polarization unit vector of the field mode, $\varepsilon_0$ is the vacuum permittivity, and $V$ is the field mode volume. As justified previously, the atomic dipoles are aligned to each other and we write $g_{j}(\omega_k)$.
Also, note that the sum over $k$ only refers to the forward-scattered modes. The spontaneous emission arising from the rest of the modes is to be considered separately later.
\section{Driven dynamics}
\begin{figure*}[t]
\centering
\includegraphics[width = 3.5 in]{fig_laser_extinction.eps}
\caption{\textbf{(a)} The drive field intensity (red circles) at the turn-off edge, characterized as the truncated $\cos^4\bkt{\frac{\pi}{2}\frac{t-t_0}{\tau}}$ function (red solid line) bridging the on and off states of the intensity. Here, $t_0 = -4$ ns and the fall-time $\tau=3.5$ ns are assumed. While the intensity of the drive field turns off mostly within $\approx$ 3.5 ns, an additional 0.5-ns waiting time is provided before the data analysis of the collective emission begins at $t=0$, as shown in Fig.\,\ref{fig_decay}\,(b), to further remove the residual drive intensity and transient effects from our measurement.}
\label{fig_laser_extinction}
\end{figure*}
We consider here the driven dynamics of the atoms. Moving to the rotating frame with respect to the drive frequency, and tracing out the vacuum field modes, we can write the following Born-Markov master equation for the atomic density matrix:
\eqn{
\der{\hat{\rho}_A}{t} = -\frac{i}{\hbar} \sbkt{\widehat{H}_A + \widehat H_{AD}, \hat {\rho}_A} - \sum_{m,n = 1}^N\sum_{i,j = 2,3} \frac{\Gamma_{ij,mn}^{(D)}}{2} \sbkt{ \hat \rho_A \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- + \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- \hat \rho_A - 2\widehat{\sigma}_{n,j} ^- \hat \rho_A \widehat{\sigma}_{m,i} ^+ },
}
where $\widehat H_A = - \sum_{m=1}^{N}\sum_{j=2,3} \hbar \Delta_{j} \widehat{\sigma}_{m,j}^+ \widehat{\sigma}_{m,j}^-$ is the free atomic Hamiltonian and $\widehat H_{AD} = -\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \widehat{\sigma}_{m,j}^+ + \widehat{\sigma}_{m,j}^- }$ is the atom-drive interaction Hamiltonian in the rotating frame, with $ \Delta_j \equiv \omega_{j1} - \omega_D$. The driven damping rates are defined as $ \Gamma_{ij,mn}^{(D) } \equiv \frac{\vec{d}^m_{i1} \cdot \vec{d}^n_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$, with the indices $i,j$ referring to the atomic levels, and $m,n$ to different atoms.
Using the above master equation, one can obtain the following optical Bloch equations for the case of a single atom:
\begin{subequations}
\eqn{\label{eq:optical-Bloch-eqa}
\partial_t \rho_{33} &= i\Omega_3(\rho_{13}-\rho_{31}) - \Gamma_{33}^{(D)}\rho_{33} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32} \\
\partial_t \rho_{22} &= i\Omega_2(\rho_{12}-\rho_{21}) - \Gamma_{22}^{(D)}\rho_{22}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32}\\
\partial_t \rho_{11} &= -i\Omega_3(\rho_{13}-\rho_{31}) -i\Omega_2(\rho_{12}-\rho_{21}) + \Gamma^{(D)}_{33}\rho_{33}+ \Gamma^{(D)}_{22}\rho_{22} + \Gamma^{(D)}_{23}\bkt{ \rho_{23} + \rho_{32}} \\
\partial_t \rho_{31} &= -i \Omega_2 \rho_{32} - i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}-i\Delta_3}\rho_{31}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{21}\\
\partial_t \rho_{13} &= i \Omega_2 \rho_{23} + i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}+i\Delta_3}\rho_{13}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{12}\\
\partial_t \rho_{21} &= -i \Omega_3 \rho_{23} - i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}-i\Delta_2}\rho_{21} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{31}\\
\partial_t \rho_{12} &= i \Omega_3 \rho_{32} + i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}+i\Delta_2}\rho_{12}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{13}\\
\partial_t \rho_{32} &= -i \Omega_2 \rho_{31} + i\Omega_3\rho_{12}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2} -i\omega_{23}}\rho_{32} - \frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}}\\
\partial_t \rho_{23} &= i \Omega_2 \rho_{13} - i\Omega_3\rho_{21}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2}+i\omega_{23}}\rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}},
\label{eq:optical-Bloch-eqi}}
\end{subequations}
where we have defined the single atom driven damping rate as $\Gamma_{ij}^{(D) }\equiv \frac{\vec{d}_{i1} \cdot \vec{d}_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$.
Numerically solving Eqs.\,\eqref{eq:optical-Bloch-eqa}--\eqref{eq:optical-Bloch-eqi} along with the normalization condition $\rho_{33}+\rho_{22}+\rho_{11}=1$ gives us the steady-state density matrix $\rho_S$ for the atom. Substituting our experimental parameters, we get the populations $\rho_{S,33}\approx 0$, $\rho_{S,22}\approx 10^{-10}$, and $\rho_{S,11}\approx 1$. The absolute values of the coherences are $|\rho_{S,23}|\approx 0$, $|\rho_{S,21}|\approx 10^{-5}$, and $|\rho_{S,31}|\approx 0$. These estimates are made for $N \approx 1 - 10$, assuming the collective driven damping rate to be $ \Gamma_{ij}^{(D)} (N) \approx (1 + Nf)\, \Gamma_{ij}^{(D) }$ with the phenomenological value $f=1$, and the collective Rabi frequency to be enhanced as $ \Omega_{j} \to \sqrt{N}\, \Omega_{j}$. Thus we can conclude that the atomic ensemble is well within the single-excitation regime in $\ket{2}$.
The 3.5-ns laser-extinction window has a broad spectral content and may excite extra population to $\ket{2}$ and $\ket{3}$. We numerically simulate the optical Bloch equations over this time window to find the density matrix after the laser turn-off. We model the laser turn-off shape as $\cos^4$ (see Fig.\,\ref{fig_laser_extinction}) and vary the Rabi frequency accordingly. Note that this calculation is for estimation purposes and may not capture the full dynamics during the laser-extinction period. Within the numerical precision limit, set by the evolution time step ($10^{-5}$ ns) multiplied by $\Gamma_{ij} \approx 0.01$ GHz, we obtain the following density matrix values after the turn-off: $\rho_{33}\approx 0$, $\rho_{22}\approx 0$, $\rho_{11}\approx 1$, $\rho_{23}\approx 0$, $\rho_{12}\approx 10^{-5}$, and $\rho_{13}\approx 10^{-7}-10^{-6}$. Thus the laser turn-off edge does not produce any significant excitation in $\ket{3}$.
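For illustration, the minimal Python sketch below integrates the single-atom optical Bloch equations, Eqs.\,\eqref{eq:optical-Bloch-eqa}--\eqref{eq:optical-Bloch-eqi}, through a $\cos^4$ intensity turn-off. All parameter values, as well as the assumption that the Rabi frequencies follow the square root of the intensity envelope, are illustrative placeholders rather than our experimental settings.
\begin{verbatim}
# Minimal sketch: integrate the single-atom optical Bloch equations
# through a cos^4 intensity turn-off. Parameter values are illustrative
# placeholders, not the experimental values used in the paper.
import numpy as np
from scipy.integrate import solve_ivp

G22, G33 = 0.01, 0.012          # driven damping rates Gamma_22^(D), Gamma_33^(D) [1/ns]
G23 = np.sqrt(G22 * G33)        # Gamma_23^(D) for parallel dipoles
D2, D3 = 0.0, -2.0 * np.pi      # detunings Delta_2, Delta_3 [rad/ns]
w23 = 2.0 * np.pi * 1.0         # level splitting omega_23 [rad/ns]
Om2_0, Om3_0 = 0.05, 0.05       # Rabi frequencies before the turn-off [rad/ns]
t0, tau = -4.0, 3.5             # turn-off start and fall time [ns]

def envelope(t):
    """cos^4 intensity envelope; the Rabi frequency is assumed to
    follow its square root (i.e. a cos^2 field envelope)."""
    if t < t0:
        return 1.0
    if t > t0 + tau:
        return 0.0
    return np.cos(0.5 * np.pi * (t - t0) / tau) ** 4

def rhs(t, y):
    r33, r22, r11, r31, r13, r21, r12, r32, r23 = y
    O2 = Om2_0 * np.sqrt(envelope(t))
    O3 = Om3_0 * np.sqrt(envelope(t))
    d = np.empty(9, dtype=complex)
    d[0] = 1j*O3*(r13 - r31) - G33*r33 - 0.5*G23*(r23 + r32)
    d[1] = 1j*O2*(r12 - r21) - G22*r22 - 0.5*G23*(r23 + r32)
    d[2] = (-1j*O3*(r13 - r31) - 1j*O2*(r12 - r21)
            + G33*r33 + G22*r22 + G23*(r23 + r32))
    d[3] = -1j*O2*r32 - 1j*O3*(r33 - r11) - (0.5*G33 - 1j*D3)*r31 - 0.5*G23*r21
    d[4] = 1j*O2*r23 + 1j*O3*(r33 - r11) - (0.5*G33 + 1j*D3)*r13 - 0.5*G23*r12
    d[5] = -1j*O3*r23 - 1j*O2*(r22 - r11) - (0.5*G22 - 1j*D2)*r21 - 0.5*G23*r31
    d[6] = 1j*O3*r32 + 1j*O2*(r22 - r11) - (0.5*G22 + 1j*D2)*r12 - 0.5*G23*r13
    d[7] = (-1j*O2*r31 + 1j*O3*r12
            - (0.5*(G22 + G33) - 1j*w23)*r32 - 0.5*G23*(r22 + r33))
    d[8] = (1j*O2*r13 - 1j*O3*r21
            - (0.5*(G22 + G33) + 1j*w23)*r23 - 0.5*G23*(r22 + r33))
    return d

# reach the driven steady state with the drive fully on, then turn it off
y0 = np.zeros(9, dtype=complex); y0[2] = 1.0           # all population in |1>
steady = solve_ivp(rhs, (-500.0, t0), y0, rtol=1e-8, atol=1e-12)
after = solve_ivp(rhs, (t0, t0 + tau + 0.5), steady.y[:, -1],
                  rtol=1e-8, atol=1e-12, max_step=1e-2)
r33, r22, r11 = (after.y[i, -1].real for i in range(3))
print(f"after turn-off: rho33={r33:.2e}, rho22={r22:.2e}, rho11={r11:.4f}")
\end{verbatim}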
\section{Quantum beat dynamics}
As the drive field is turned off, the system evolves under the atom-vacuum field interaction Hamiltonian. Moving to the interaction picture with respect to $H_A+H_F$, we obtain the interaction Hamiltonian:
\eqn{
\tilde{H}_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{j}(\omega_k)
\bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k}e^{i(\omega_{j1}-\omega_{k})t }
+ \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}e^{-i(\omega_{j1}-\omega_{k})t}}.
\label{eq:H-AV}
}
Initially the system shares one excitation in $\ket{2}$ symmetrically, and the EM field is in the vacuum state, such that
\eqn{
\ket{\Psi(0)} = \frac{1}{\sqrt{N}}\sum_{m=1}^{N}\hat{\sigma}_{m,2}^{+}\ket{11\cdots 1}\ket{\{0\}}.
\label{eq:psi-initial}
}
As the system evolves due to the atom-vacuum field interaction, it remains in the single-excitation manifold of total atom + field Hilbert space, as one can see from the interaction Hamiltonian (Eq.\,\eqref{eq:H-AV}):
\eqn{
\ket{\Psi(t)}=\bkt{\sum_{m=1}^{N}\sum_{j=2,3}c_{m,j}(t)\hat{\sigma}_{m,j}^{+}+\sum_{k}c_{k}(t)\hat{a}_k^{\dagger}}\ket{11\cdots 1}\ket{\{0\}}.
\label{eq:psi-evolved}
}
Now we solve the Schr\"odinger equation to find the time evolution of the atom + field system under the atom-field interaction using Eqs.\eqref{eq:psi-evolved} and \eqref{eq:H-AV} to obtain
\begin{subequations}
\begin{align}
& \partial_t c_{m,j}(t) = i\sum_{k} g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t} c_{k}(t), \\
& \partial_t c_{k}(t) = i\sum_{m=1}^{N}\sum_{j=2,3}g_j(\omega_k)e^{-i(\omega_{j1}-\omega_{k})t}c_{m,j}(t).
\end{align}
\label{eq:de-1}
\end{subequations}
Formally integrating Eq.\,\eqref{eq:de-1}(b) and plugging it in Eq.\,\eqref{eq:de-1}(a), we have
\eqn{
\partial_t c_{m,j}(t) = -\sum_{k}g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t}\int_0^t \mathrm{d}{\tau}\sum_{n=1}^{N}\sum_{l=2,3}g_l(\omega_k) e^{-i(\omega_{l1}-\omega_k)\tau} c_{n,l}(\tau).
}
We observe that the coefficients $c_{m,2}(t)$ ($c_{m,3}(t)$) have the same initial conditions and obey the same evolution equation for all $m$; thus we can define $c_2(t) \equiv c_{m,2}(t)$ ($c_3(t)\equiv c_{m,3}(t)$).
Assuming a flat spectral density of the field and making the Born-Markov approximation we get
\begin{subequations}
\begin{align}
\partial_t c_2(t) &= -\frac{\Gamma_{22}^{(N)}}{2} c_2(t)-\frac{\Gamma_{23}^{(N)}}{2} e^{i\omega_{23}t}c_3(t),\\
\partial_t c_3(t) &= -\frac{\Gamma_{33}^{(N)}}{2} c_3(t)-\frac{\Gamma_{32}^{(N)}}{2} e^{-i\omega_{23}t}c_2(t),
\end{align}
\label{eq:de-2}
\end{subequations}
where we have defined $\Gamma_{jl}^{(N)}\equiv \Gamma_{jl} + Nf\Gamma_{jl}$, with $\Gamma_{jl} = \frac{\vec{d}_{j1}\cdot \vec{d}_{l1}\omega_{l1}^3}{3\pi \varepsilon_0 \hbar c^3}$ as the generalized decay rate into the quasi-isotropic modes and $Nf\Gamma_{jl}$ as the collective decay rate in the forward direction \cite{Bienaime_2011, Araujo_2016}. The factor $f$ represents the geometrical factor coming from restricting the emission to the forward scattered modes. We emphasize here that the emission into all the modes (not specifically the forward direction) denoted by $\Gamma_{jl}$ is added phenomenologically and is not collective. Considering that the atomic dipole moments induced by the drive field are oriented along the polarization of the driving field, we can obtain $\Gamma_{23}=\sqrt{ \Gamma_{22}\Gamma_{33}}$, which can be extended to $\Gamma_{23}^{(N)}=\sqrt{ \Gamma_{22}^{(N)}\Gamma_{33}^{(N)}}$.
To solve the coupled differential equations, we take the Laplace transform of Eq.\,\eqref{eq:de-2}(a) and (b):
\begin{subequations}
\begin{align}
s\tilde{c}_2(s)&=c_2(0)-\frac{\Gamma_{22}^{(N)}}{2}\tilde{c}_2(s)-\frac{\Gamma_{23}^{(N)}}{2}\tilde{c}_3(s-i\omega_{23}),\\
s\tilde{c}_3(s)&=c_3(0)-\frac{\Gamma_{33}^{(N)}}{2}\tilde{c}_3(s)-\frac{\Gamma_{32}^{(N)}}{2}\tilde{c}_2(s+i\omega_{23}),
\end{align}
\end{subequations}
where we have defined $\tilde{c}_j(s) \equiv \int_0^{\infty} c_j(t) e^{-st} \mathrm{d}t$ as the Laplace transform of $c_j \bkt{t}$. Substituting the initial conditions, we obtain the Laplace coefficients as
\begin{subequations}\begin{align}
\tilde{c}_2(s)&=\,\frac{1}{\sqrt{N}}\frac{s+\frac{\Gamma_{33}^{(N)}}{2}-i\omega_{23}}{s^2+(\Gamma_{\text{avg}}^{(N)}-i\omega_{23})s-i\omega_{23}\frac{\Gamma_{22}^{(N)}}{2}},\\
\tilde{c}_3(s)&=-\frac{\Gamma^{(N)}_{32}}{2\sqrt{N}}\,\frac{1}{s^2+(\Gamma^{(N)}_{\text{avg}}+i\omega_{23})s+i\omega_{23}\frac{\Gamma^{(N)}_{33}}{2}}.
\end{align}\end{subequations}
The poles of the denominators are, respectively,
\begin{subequations}\begin{align}
s_{\pm}^{(2)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} + \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2}, \\
s_{\pm}^{(3)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} - \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2},
\end{align}\end{subequations}
where we have defined $\Gamma_{\text{avg}}^{(N)}=\frac{\Gamma_{33}^{(N)}+\Gamma_{22}^{(N)}}{2}$, $\Gamma_{\text{d}}^{(N)}=\frac{\Gamma_{33}^{(N)}-\Gamma_{22}^{(N)}}{2}$, and $\delta = \sqrt{\omega_{23}^2-\bkt{\Gamma^{(N)}_{\text{avg}}}^2+2i\omega_{23}\Gamma^{(N)}_{\text{d}}}$. The real parts of the above roots correspond to the collective decay rates of the excited states, while the imaginary parts correspond to the frequencies. Since $\delta$ is in general a complex number unless $\Gamma_{22}=\Gamma_{33}$, both the decay rates and the frequencies are modified. To see this more clearly, we can expand $\delta$ up to second order in $\Gamma_{jl}^{(N)}/\omega_{23}$, considering that we are working in a spectroscopically well-separated regime ($\Gamma_{jl}^{(N)}\ll\omega_{23}$):
\eqn{
\delta \approx \omega_{23}\sbkt{1-\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2}+i\Gamma^{(N)}_{d}\sbkt{1+\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2},
}
and the above poles become
\begin{subequations}\begin{align}
s_{+}^{(2)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{22}^{(N)}}{2\omega_{23}^2}} + i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}, \\
s_{-}^{(2)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{33}^{(N)}}{2{\omega_{23}^2}}} + i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\
s_{+}^{(3)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{22}}{2\omega_{23}^2}} - i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\
s_{-}^{(3)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{33}}{2\omega_{23}^2}} - i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}.
\end{align}\end{subequations}
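As a quick numerical cross-check, the exact roots of the denominator of $\tilde{c}_2(s)$ can be compared with the closed-form poles $s_\pm^{(2)}$ and their second-order expansion. The short sketch below does this for illustrative (not experimental) values of $\Gamma^{(N)}_{22}$, $\Gamma^{(N)}_{33}$, and $\omega_{23}$.
\begin{verbatim}
# Minimal numeric check (illustrative parameters): compare the exact poles
# of the Laplace-domain denominator of c2~ with the closed-form roots and
# their expansion valid for Gamma^(N) << omega_23.
import numpy as np

G22, G33 = 0.05, 0.08                      # Gamma_22^(N), Gamma_33^(N) (arbitrary units)
w23 = 2.0 * np.pi * 3.0                    # omega_23
G23 = np.sqrt(G22 * G33)
Gavg, Gd = 0.5 * (G33 + G22), 0.5 * (G33 - G22)
delta = np.sqrt(w23**2 - Gavg**2 + 2j * w23 * Gd + 0j)

# exact roots of s^2 + (Gavg - i*w23) s - i*w23*G22/2 = 0
exact = np.roots([1.0, Gavg - 1j * w23, -1j * w23 * G22 / 2.0])
closed = np.array([-Gavg/2 + 1j*w23/2 + 1j*delta/2,
                   -Gavg/2 + 1j*w23/2 - 1j*delta/2])

# second-order expansion quoted in the text
approx = np.array([
    -G33/2*(1 + Gd*G22/(2*w23**2)) + 1j*w23*(1 - (G23/(2*w23))**2),
    -G22/2*(1 - Gd*G33/(2*w23**2)) + 1j*w23*(G23/(2*w23))**2,
])

print("exact  :", np.sort_complex(exact))
print("closed :", np.sort_complex(closed))
print("expand :", np.sort_complex(approx))
\end{verbatim}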
The atomic state coefficients in time domain are
\begin{subequations}
\begin{align}
c_2(t) &= \frac{1}{2\sqrt{N}\delta} e^{- \Gamma^{(N)}_{\text{avg}}t/2} e^{i\omega_{23}t/2} \sbkt{(-i\Gamma^{(N)}_{d}-\omega_{23}+\delta)e^{i\delta t/2} + (i\Gamma^{(N)}_{d}+\omega_{23}+\delta)e^{-i\delta t/2}},\\
c_3(t) & =\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} e^{-\Gamma^{(N)}_{\text{avg}}t/2} e^{-i\omega_{23}t/2} \sbkt{e^{i\delta t/2}-e^{-i\delta t/2}}.
\end{align}
\label{eq:atom-coefficients}
\end{subequations}
Again, expanding $\delta$ under the condition $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get
\begin{subequations}
\begin{align}
c_2(t) &= \frac{1}{\sqrt{N}}\sbkt{ e^{- \Gamma^{(N)}_{22}t/2} - \bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2\frac{\delta^*}{\delta\,}e^{-\Gamma^{(N)}_{33}t/2}e^{i\omega_{23}t}},\\
c_3(t) & = -\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} \sbkt{e^{-\Gamma^{(N)}_{22}t/2}e^{-i\omega_{23}t}-e^{-\Gamma^{(N)}_{33}t/2}}.
\end{align}
\label{eq:atom-coefficients-2}
\end{subequations}
Note that the collection of $N$ atoms behaves like one ``super-atom'' which decays in the forward direction at a rate that is $N$ times that of an individual atom. We note that the system is superradiant not only with respect to the transition involving the initially excited level, but also with respect to the other transition, as a result of the vacuum-induced coupling between the levels. Most of the population in $\ket{2}$ decays at the rate $\Gamma_{22}^{(N)}$, while a small amount of it decays at $\Gamma_{33}^{(N)}$ with the corresponding level shift $\omega_{23}$. Level $\ket{3}$ contains components of equal amplitude decaying at $\Gamma^{(N)}_{22}$ (level shifted by $-\omega_{23}$) and at $\Gamma^{(N)}_{33}$. The small but nonzero amplitude in $\ket{3}$ produces a beat at a frequency of about $\omega_{23}$.
\section{Field Intensity}
The light intensity at position $x$ and time $t$ (assuming the atom is at position $x=0$ and it starts to evolve at time $t=0$) is
\eqn{
I(x,t) = \frac{\epsilon_0 c}{2}\bra{\Psi(t)}\hat{E}^{\dagger}(x,t) \hat{E}(x,t) \ket{\Psi(t)},
}
where the electric field operator is
\eqn{
\hat{E}(x,t) = \int_{-\infty}^{\infty} \mathrm{d} k \, E_k \hat{a}_k e^{ikx}e^{-i\omega_k t}.
}
Plugging in the electric field operator and the single-excitation ansatz (Eq.\,\eqref{eq:psi-evolved}), we obtain the intensity up to a constant factor:
\eqn{
I(x,t) \simeq N^2 \abs{e^{-i\omega_{23}\tau}c_2(\tau) + \frac{\Gamma_{23}}{\Gamma_{22}}c_3(\tau) }^2 \Theta(\tau),
\label{eq:field-intensity}
}
where $\tau = t-\abs{x/v}$.
Substituting Eqs. \eqref{eq:atom-coefficients}(a) and (b) in the above and approximating $\delta$ in the regime $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get
\eqn{
\frac{I(\tau)}{I_0} = e^{-\Gamma^{(N)}_{22}\tau}+\bkt{\frac{ \Gamma^{(N)}_{33}}{2\omega_{23}}}^2 e^{-\Gamma^{(N)}_{33}\tau} + \frac{\Gamma^{(N)}_{33}}{\omega_{23}} e^{-\Gamma^{(N)}_{\text{avg}}\tau} \sin(\omega_{23}\tau+\phi),
}
where $I_0$ is a normalization factor which increases with the number of atoms. Neglecting the small second term on the right-hand side, we get the relative beat intensity normalized to the main decay amplitude:
\eqn{
\text{beat amp.} = \frac{\Gamma^{(N)}_{33}}{\omega_{23}},
}
and the beat phase $\phi$:
\eqn{
\phi = \arctan\bkt{\frac{\Gamma^{(N)}_{22}}{\omega_{23}}}.
}
We see that even though there is no population in level $\ket{3}$ initially, the vacuum field builds up a coherence between levels $\ket{2}$ and $\ket{3}$ that produces a quantum beat. This is in line with the quantum-trajectory calculation for the single-atom case \cite{Hegerfeldt_1994}, with the individual decay rates replaced by collective decay rates. The collective effect manifests itself in both the beat amplitude and the beat phase.
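For concreteness, the short sketch below evaluates the normalized intensity together with the relative beat amplitude and the beat phase defined above; the rates and splitting used are illustrative placeholders, not our experimental parameters.
\begin{verbatim}
# Minimal numeric example (illustrative values): normalized intensity,
# relative beat amplitude, and beat phase of the collective emission.
import numpy as np

G22N, G33N = 0.05, 0.08        # collective decay rates Gamma_22^(N), Gamma_33^(N)
w23 = 2.0 * np.pi * 3.0        # beat (splitting) frequency omega_23
GavgN = 0.5 * (G22N + G33N)

beat_amp = G33N / w23                       # relative beat amplitude
phi = np.arctan(G22N / w23)                 # beat phase

tau = np.linspace(0.0, 100.0, 2001)
I_over_I0 = (np.exp(-G22N * tau)
             + (G33N / (2.0 * w23))**2 * np.exp(-G33N * tau)
             + (G33N / w23) * np.exp(-GavgN * tau) * np.sin(w23 * tau + phi))

print(f"beat amplitude = {beat_amp:.3e}, beat phase = {phi:.3e} rad")
\end{verbatim}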
\section{Data analysis in Fig. \ref{fig_decay} (b)}
The modulated decay profiles of the flash after the peak are magnified in Fig.\,\ref{fig_decay}\,(b). The purpose of the figure is to visually compare the decay rate and the relative beat intensity $I_\mathrm{b}$, so we normalize each curve by the exponential decay amplitude such that the normalized intensity starts to decay from $\approx1$ at $t=0$. In practice, we fit $I(t)$ shown in Fig.\,\ref{fig_decay}\,(a) after $t=0$ using Eq.\,\eqref{eq_intensity} to obtain $I_0$ for each curve, and plot the resulting $I(t)/I_0$ curves in Fig.\,\ref{fig_decay}\,(b). Note that, more precisely, it is the fitting curve that decays from $I(t)/I_0\approx1$, not the experimental data. In fact, the plotted data tend to lie below the fitting curves near $t=0$, due to the transient behavior around the flash peak.
The inset displays the FFT of the beat signal shown in the main figure. We first subtract from the $I(t)/I_0$ data the exponential decay profile, i.e., the first term of the fitting function Eq.\,\eqref{eq_intensity}, as well as the dc offset. The residual, a sinusoidal oscillation with an exponentially decaying envelope, is the beat signal represented by the second term of Eq.\,\eqref{eq_intensity}. The FFT of the beat signal has a lower background at $\omega = 0$ due to the prior removal of the exponential decay and the offset. The linewidth of each spectrum is limited by the finite lifetime of the beat signal, which corresponds to $\Gamma^{(N)}_\mathrm{avg}$ as in Eq.\,\eqref{eq_intensity}.
\vspace{2cm}
\end{document}
\section{Introduction}
\input{sections/s1_introduction}
\section{Related Work}
\label{sec:related}
\input{sections/s2_related}
\input{sections/f2_model}
\section{Method}
\label{sec:approach}
\input{sections/s3_approach}
\input{sections/t1_scannet_reg}
\section{Experiments}
\label{sec:experiments}
\input{sections/s4_experiments}
\section{Conclusion}
\label{sec:conclusion}
\input{sections/s5_conclusion}
\vspace{6pt} \noindent
\textbf{Acknowledgments}
We would like to thank the anonymous reviewers for their valuable comments and suggestions.
We also thank Nilesh Kulkarni, Karan Desai, Richard Higgins, and Max Smith for many helpful discussions and feedback on early drafts of this work.
\clearpage
\newpage
{\small
\bibliographystyle{format/ieee}
\subsection{Pairwise Registration}
\label{sec:exp_pcreg}
We first evaluate our approach on point cloud registration.
Given two RGB-D images, we estimate the 6-DOF pose that would best align the first input image to the second.
The transformation is represented by a rotation matrix $\mathbf{R}$ and translation vector $\mathbf{t}$.
\lsparagraph{Evaluation Metrics.}
We evaluate pairwise registration using both the accuracy of the predicted pose and the chamfer distance between the estimated and ground-truth alignments.
We compute the angular and translation errors as follows:
$$
E_{\text{rotation}} = \arccos\left(\frac{\mathrm{Tr}(\mathbf{R}_{pr}\mathbf{R}_{gt}^\top) - 1}{2}\right),
$$$$
E_{\text{translation}} = ||\mathbf{t}_{pr} - \mathbf{t}_{gt}||_2 .
$$
We report the translation error in centimeters and the rotation errors in degrees.
While pose gives us a good measure of performance, some scenes are inherently ambiguous and multiple alignments can explain the scene appearance; \eg, walls, floors, symmetric objects.
To address these cases, we compute the chamfer distance between the scene and our reconstruction.
Given two point clouds where $\mathcal{P}$ represents the correct alignment of the scene and $\mathcal{Q}$ represents our reconstruction of the scene,
we can define the closest pairs between the point clouds as the set $\Lambda_{\mathcal{P}, \mathcal{Q}} = \{(p, \argmin_{q \in \mathcal{Q}} ||p - q||) : p \in \mathcal{P}\}$.
We then compute the chamfer error as follows:
$$
E_{\text{cham}} =
|\mathcal{P}|^{-1} \sum_{\mathclap{({p, q) \in \Lambda_{\mathcal{P}, \mathcal{Q}}}}} ||\mathbf{x}_p - \mathbf{x}_q||
+
|\mathcal{Q}|^{-1} \sum_{\mathclap{{(q, p) \in \Lambda_{\mathcal{Q}, \mathcal{P}}}}} ||\mathbf{x}_q - \mathbf{x}_p||.
$$
For each of these error metrics, we report the mean and median errors over the dataset as well as the accuracy for different thresholds.
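For reference, a minimal Python sketch of these metrics is given below. It is an illustrative implementation (brute-force nearest-neighbor search, illustrative function names), not the evaluation code used to produce the reported numbers.
\begin{verbatim}
# Minimal sketch of the evaluation metrics: rotation/translation error
# and symmetric chamfer error. Names and units are illustrative.
import numpy as np

def rotation_error_deg(R_pr, R_gt):
    """Angular error arccos((tr(R_pr R_gt^T) - 1) / 2), reported in degrees."""
    cos = np.clip((np.trace(R_pr @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def translation_error(t_pr, t_gt):
    """Euclidean translation error, assuming inputs in centimeters."""
    return np.linalg.norm(t_pr - t_gt)

def chamfer_error(P, Q):
    """Symmetric chamfer distance between point clouds P (N, 3) and Q (M, 3).
    Brute-force version for clarity; a KD-tree would be used in practice."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
\end{verbatim}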
We conduct our experiments on ScanNet and report the results in Table~\ref{tab:pose_scannet}.
We find that our model learns accurate point cloud registration, outperforming prior feature descriptors and performing on par with supervised geometric registration approaches.
We next analyze our results through the questions posed at the start of this section.
\lsparagraph{Does unsupervised learning improve over off-the-shelf descriptors? }
Yes. We evaluate our approach against the traditional pipeline for registration: feature extraction using an off-the-shelf keypoint descriptor and alignment via RANSAC.
We show large performance gains over both traditional and learned descriptors.
It is important to note that FCGF and SuperPoint currently represent the state-of-the-art for feature descriptors.
Furthermore, both methods have been used directly, without further fine-tuning, to achieve the highest performance on image registration benchmarks~\cite{sarlin2020superglue} and geometric registration benchmarks~\cite{choy2020deep,gojcic2020learning}.
We also find that our approach learns features that can generalize to similar datasets. As shown in Table~\ref{tab:pose_scannet}, our model trained on 3D Match outperforms the off-the-shelf descriptors while being competitive with supervised geometric registration approaches.
\lsparagraph{Does RGB-D training alleviate the need for pose supervision? }
Yes.
We compare our approach to two recently proposed supervised point cloud registration approaches: DGR~\cite{choy2020deep} and 3D Multi-view Registration~\cite{gojcic2020learning}.
Since their models were trained on 3D Match, we also train our model on 3D Match and report those numbers.
We find that our model is competitive with supervised approaches when trained on their dataset, and can outperform them when trained on ScanNet.
However, a direct comparison is more nuanced since those two classes of methods differ in two key ways: training supervision and input modality.
We argue that the recent rise in RGB-D cameras on both hand-held devices and robotic systems supports our setup.
First, the rise in devices suggests a corresponding increase in RGB-D raw data that will not necessarily be annotated with pose information.
This increase provides a great opportunity for unsupervised learning to leverage this data stream.
Second, while there are cases where depth sensing might be the better or only option (\eg, dark environments or highly reflective surfaces), there are many cases where one has access to both RGB and depth information.
The ability to leverage both can increase the effectiveness and robustness of a registration system.
Finally, while we only learn visual features in this work, we note that our approach is easily extensible to learning both geometric and visual features since it is agnostic to how the features are calculated.
\subsection{Ablations}
\label{sec:exp_ablations}
We perform several ablation studies to better understand the model’s performance and its various components.
In particular, we are interested in better understanding the impact of the optimization and rendering parameters on the overall model performance.
While some ablations can only be applied during training (\eg, rendering choice), ablations that affect the correspondence estimation and fitting can be selectively applied during training, inference, or both.
Hence, we consider all the variants.
\vspace{0.2cm}
\lsparagraph{Joint Rendering.}
Our first ablation investigates the impact of our rendering choices by rendering the output images from the joint point cloud.
In \S~\ref{sec:method_render}, we discuss rendering alternate views to force the model to align the pointclouds to produce accurate renders.
As shown in Table~\ref{tab:ablations}, we find that naively rendering the joint point cloud results in a significant performance drop.
This supports our claim that a joint render would negatively impact the features learned since the model can achieve good photometric consistency even if the pointclouds are not accurately aligned.
\vspace{0.2cm}
\lsparagraph{Ratio Test.}
In our approach, we use Lowe's ratio test to estimate the weight for each correspondence.
We ablate this component by instead using the feature distance between the corresponding points to rank the correspondences.
Since this ablation can be applied to training or inference independently, we apply it to training, inference, or both.
Our results indicate that the ratio test is critical to our model's performance, as ablating it results in the largest performance drop.
This supports our initial claims about the utility of the ratio test as a strong heuristic for filtering correspondences.
It is worth noting that Lowe's ratio test~\cite{lowe2004distinctive} shows incredible efficacy in determining correspondence weights; a function often undertaken by far more complex models in recent work~\cite{choy2020deep,gojcic2020learning,ranftl2018deepfundamental,sarlin2020superglue}.
Our approach is able to perform well using such a simple filtering heuristic since it is also learning the features, not just matching them.
\vspace{0.2cm}
\lsparagraph{Randomized Subsets.}
In our model, we estimate $t$ transformations based on $t$ randomly sampled subsets. This is inspired by RANSAC~\cite{RANSAC} as it allows us to better handle outliers.
We ablate this module by estimating a single transformation based on all the correspondences.
Similar to the ratio test, this ablation can be applied to training or inference independently.
As shown in Table~\ref{tab:ablations}, ablating this component at test time results in a significant drop in performance.
Interestingly, we find that applying it during training and removing it during testing improves performance.
We posit that this ablation acts similarly to DropOut~\cite{srivastava2014dropout} which forces the model to predict using a subset of the features and is only applied during training. As a result, the model is forced to learn better features during training, while gaining the benefits of randomized optimization during inference.
\vspace{0.2cm}
\lsparagraph{Number of subsets. }
We find that the number of subsets chosen has a significant impact on both run-time and performance.
During training, we sample 10 subsets of 80 correspondences each. During testing, we sample 100 subsets of 80 correspondences each.
For this set of experiments, we used the same pretrained weights and only vary the number of subsets used. Each subset still contains 80 correspondences.
As shown in Table~\ref{tab:runtime}, using a larger number of subsets improves the performance while also increasing the run-time.
Additionally, we find that the performance gains saturate at 100 subsets.
\subsection{Point Cloud Generation}
\label{sec:method_pcgen}
Given an input RGB-D image, $I \in \mathbb{R}^{4 \times H \times W}$, we would like to generate a point cloud $\mathcal{P} \in \mathbb{R}^{(6 + F) \times N}$.
Each point $p \in \mathcal{P}$ is represented by a 3D coordinate $\mathbf{x}_p \in \mathbb{R}^{3}$, a color $\mathbf{c}_p \in \mathbb{R}^{3}$, and a feature vector $\mathbf{f}_p \in \mathbb{R}^{F}$.
We first use a feature encoder to extract a feature map using each image's RGB channels.
The extracted feature map has the same spatial resolution as the input image.
As a result, one can easily convert the extracted features and input RGB into a point cloud using the input depth and known camera intrinsic matrix.
However, given that current depth sensors do not provide a depth value for every pixel, we omit pixels with missing depth from our generated point cloud.
In order to avoid heterogeneous batches, we mark points with missing depths so that subsequent operations ignore them.
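A minimal sketch of this unprojection step is given below, assuming a standard pinhole camera model with known intrinsics $K$; the function name and array layout are illustrative.
\begin{verbatim}
# Minimal sketch: unproject an RGB-D frame into a colored point cloud
# with a validity mask for pixels with missing depth.
import numpy as np

def rgbd_to_pointcloud(rgb, depth, K):
    """rgb: (H, W, 3), depth: (H, W) with 0 for missing depth, K: (3, 3).
    Returns xyz (H*W, 3), colors (H*W, 3), and a validity mask (H*W,)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                  # back-projected rays
    xyz = rays * depth.reshape(-1, 1)                                # scale rays by depth
    valid = depth.reshape(-1) > 0                                    # mark missing depth
    return xyz, rgb.reshape(-1, 3), valid
\end{verbatim}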
\subsection{Correspondence Estimation}
\label{sec:method_corr}
Given two feature point clouds\footnote{As noted in Sec~\ref{sec:method_pcgen}, point clouds will have different numbers of valid points based on the input depth. While our method deals with this by tracking those points and omitting them from subsequent operations, we assume all the points are valid in our model description to enhance clarity.},
$\mathcal{P}$, $\mathcal{Q} \in \mathbb{R}^{(6 + F) \times N}$,
we would like to find the correspondences between the point clouds.
Specifically, for each point $p \in \mathcal{P}$, we would like to find the point $q_p$ such that
\begin{equation}
q_{p} = \operatorname*{arg\,min}_{q\in{\mathcal{Q}}} D(\mathbf{f}_p, \mathbf{f}_q),
\end{equation}
where $D(p, q)$ is a distance-metric defined on the feature space.
In our experiments, we use cosine distance to determine the closest features.
We extract such correspondences for all the points in both $\mathcal{P}$ and $\mathcal{Q}$ since correspondence is not guaranteed to be bijective.
As a result, we have two sets of correspondences, $\mathcal{C}_{\mathcal{P} \to \mathcal{Q}}$ and $\mathcal{C}_{\mathcal{Q} \to \mathcal{P}}$, where each set consists of $N$ pairs.
\paragraph{Ratio Test.}
Determining the quality of each correspondence is a challenge faced by any correspondence-based geometric fitting approach.
Extracting correspondences based on only the nearest neighbor will result in many false positives due to falsely matching repetitive pairs or non-mutually visible portions of the images.
The standard approach is to estimate a weight for each correspondence that captures the quality of this correspondence.
Recent approaches estimate a correspondence weight for each match using self-attention graph networks~\cite{sarlin2020superglue}, PointNets~\cite{gojcic2020learning,yi2018learning}, and CNNs~\cite{choy2020deep}.
In our experiments, we found that a much simpler approach based on Lowe's ratio test~\cite{lowe2004distinctive} works well without requiring any additional parameters in the network.
The basic intuition behind the ratio test is that unique correspondences are more likely to be true matches.
As a result, the quality of the correspondence $(p, q_p)$ is not simply determined by $D(p, q_p)$, but rather by the ratio $r$, which is defined as
\begin{equation}
r = \frac{D(p, q_{p, 1})}{D(p, q_{p, 2})},
\end{equation}
where $q_{p, i}$ is the $i$-th nearest neighbor to point $p$ in $\mathcal{Q}$.
Since $0 \leq r \leq 1$ and a lower ratio indicates a better match, we weight each correspondence by $w = 1 - r$.
In the traditional formulation, one would define a distance ratio threshold to separate inliers from outliers.
Instead, we rank the correspondences using their ratio weight and pick the top $k$ correspondences.
We pick an equal number of correspondences from $\mathcal{C}_{\mathcal{P} \to \mathcal{Q}}$ and $\mathcal{C}_{\mathcal{Q} \to \mathcal{P}}$.
Additionally, we keep the weights for each correspondence to use in the geometric fitting step.
Hence, we end up with a correspondence set $\mathcal{M} = \{(p, q, w)_i: 0 \leq i < k \}$ where $k{=}400$.
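The sketch below illustrates this procedure for the $\mathcal{C}_{\mathcal{P} \to \mathcal{Q}}$ direction (the reverse direction is handled symmetrically). It is a simplified brute-force version with illustrative names and omits implementation details such as batching or skipping invalid points.
\begin{verbatim}
# Minimal sketch of correspondence estimation with Lowe's ratio test
# using cosine distance; k and the feature shapes are illustrative.
import numpy as np

def ratio_test_correspondences(Fp, Fq, k=400):
    """Fp, Fq: (N, F) feature sets for point clouds P and Q.
    Returns indices into P, matched indices into Q, and weights w = 1 - r
    for the top-k matches in the P -> Q direction."""
    Fp = Fp / np.linalg.norm(Fp, axis=1, keepdims=True)
    Fq = Fq / np.linalg.norm(Fq, axis=1, keepdims=True)
    dist = 1.0 - Fp @ Fq.T                       # cosine distance matrix (N, N)
    nn = np.argsort(dist, axis=1)[:, :2]         # two nearest neighbors per point
    d1 = dist[np.arange(len(Fp)), nn[:, 0]]
    d2 = dist[np.arange(len(Fp)), nn[:, 1]]
    w = 1.0 - d1 / np.maximum(d2, 1e-8)          # ratio-test weight
    top = np.argsort(-w)[:k]                     # keep the k most unique matches
    return top, nn[top, 0], w[top]
\end{verbatim}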
\subsection{Geometric Fitting}
\label{sec:method_fitting}
Given a set of correspondences $\mathcal{M}$, we would like to find the transformation $\mathcal{T}^{*} \in \text{SE}(3)$ that minimizes the error between the correspondences
\begin{equation}
\mathcal{T^{*}} = \argmin_{\mathcal{T} \in~\text{SE}(3)} E(\mathcal{M}, \mathcal{T})
\label{eq:w_proc}
\end{equation}
where the error $E(\mathcal{M}, \mathcal{T})$ is defined as:
\begin{equation}
E(\mathcal{M}, \mathcal{T}) = |\mathcal{M}|^{-1} \sum_{(p, q, w) \in \mathcal{M}} w~(\mathbf{x}_p - \mathcal{T}(\mathbf{x}_q))^2
\label{eq:corr_err}
\end{equation}
This can be framed as a weighted Procrustes problem and solved using a weighted variant of Kabsch's algorithm~\cite{kabsch1976solution}.
While the original Procrustes problem minimizes the distance between a set of unweighted correspondences~\cite{gower1975generalized}, Choy \etal~\cite{choy2020deep} have shown that one can integrate weights into this optimization.
This is done by calculating the covariance matrix between the centered and weighted point clouds, followed by calculating the SVD on the covariance matrix.
For more details, see~\cite{choy2020deep, kabsch1976solution}.
Integrating weights into the optimization is important for two reasons.
First, it allows us to build robust estimators that can weigh correspondences based on our confidence in their uniqueness.
More importantly, it makes the optimization differentiable with respect to the weights, allowing us to backpropagate the losses back to the encoder for feature learning.
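A compact sketch of the weighted Kabsch solution is given below; it follows the standard centered, weighted SVD construction and is an illustrative re-implementation rather than the exact code of~\cite{choy2020deep}.
\begin{verbatim}
# Minimal sketch of the weighted Procrustes / Kabsch solution for the
# weighted fitting problem; xp, xq are corresponding 3-D points, w weights.
import numpy as np

def weighted_kabsch(xp, xq, w):
    """xp, xq: (k, 3) correspondences, w: (k,) non-negative weights.
    Returns R, t such that xp ~ R @ xq + t in the weighted least-squares sense."""
    w = w / w.sum()
    mp = (w[:, None] * xp).sum(axis=0)           # weighted centroids
    mq = (w[:, None] * xq).sum(axis=0)
    P, Q = xp - mp, xq - mq                      # centered point sets
    H = (w[:, None] * Q).T @ P                   # weighted covariance (3, 3)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mp - R @ mq
    return R, t
\end{verbatim}
The sign correction on the last singular direction guards against reflections, which would otherwise yield an improper rotation.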
\paragraph{Randomized Optimization. }
While this approach is capable of integrating the weights into the optimization, it can still be sensitive to outliers with non-zero weights.
We take inspiration from RANSAC and use random sampling to mitigate the problem of outliers.
More specifically, we sample $t$ subsets of $\mathcal{M}$, and use Equation~\ref{eq:w_proc} to find $t$ candidate transformations.
We then choose the candidate that minimizes the weighted error on the full correspondence set.
Since the $t$ optimizations on the correspondence subsets are all independent, we are able to run them in parallel to make the optimization more efficient.
We deviate from classic RANSAC pipelines in that we choose the transformation that minimizes a weighted error, instead of maximizing inlier count, to avoid having to define an arbitrary inlier threshold.
It is worth noting that the model can be trained and tested with a different number of random subsets.
In our experiments, we train the model with 10 randomly sampled subsets of 80 correspondences each.
At test time, we use 100 subsets with 20 correspondences each.
We evaluate the impact of those choices on performance and run time in \S~\ref{sec:exp_ablations}.
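Building on the weighted Kabsch sketch above, the randomized optimization can be outlined as follows; the subset count and size are illustrative defaults.
\begin{verbatim}
# Minimal sketch of the randomized optimization: fit candidate transforms
# on random correspondence subsets and keep the one with the lowest
# weighted error on the full correspondence set.
import numpy as np

def randomized_fit(xp, xq, w, t=100, subset=20, seed=0):
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(t):
        idx = rng.choice(len(w), size=subset, replace=False)
        R, tr = weighted_kabsch(xp[idx], xq[idx], w[idx])     # candidate transform
        res = xp - (xq @ R.T + tr)                            # residuals on all matches
        err = np.sum(w * np.sum(res**2, axis=1)) / len(w)     # weighted error
        if err < best_err:
            best, best_err = (R, tr), err
    return best
\end{verbatim}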
\subsection{Point Cloud Rendering}
\label{sec:method_render}
The final step of our approach is to render the RGB-D images from the aligned point clouds. This provides us with our primary learning signals: photometric and depth consistency.
The core idea is that if the camera locations are estimated correctly, the point cloud renders will be consistent with the input images.
We use differentiable rendering to project the colored point clouds onto an image using the estimated camera pose and known intrinsics. Our pipeline is very similar to Wiles \etal~\cite{wiles2019synsin}.
A naive approach of simply rendering both point clouds suffers from a degenerate solution: the rendering will be accurate even if the alignment is incorrect.
An extreme case of this would be to always estimate cameras looking in opposite directions.
In that case, each image is projected in a different location of space and the output will be consistent without alignment.
We address this issue by forcing the network to render each view using only the other image's point cloud, as shown in Fig.~\ref{fig:mask_render}.
This forces the network to learn consistent alignment as a correct reconstruction requires the mutually visible parts of the scene to be correctly aligned.
This introduces another challenge: \textit{how to handle the non-mutually visible surfaces of the scene? }
While view synthesis approaches hallucinate the missing regions to output photo-realistic imagery~\cite{wiles2019synsin}, earlier work in differentiable SfM observed that the gradients coming from the hallucinated region negatively impact the learning~\cite{zhou2017unsupervised}.
Our solution to this problem is to only evaluate the loss for valid pixels.
Valid pixels, as shown in Fig~\ref{fig:mask_render}, are ones for which rendering was possible; \ie, there were points along the viewing ray for those pixels.
This is important in this work since invalid pixels can occur due to two reasons: non-mutually visible surfaces and pixels with missing depth.
While the first reason is due to our approach, the second reason for invalid pixels is governed by current depth sensors which do not produce a depth value for each pixel.
In our experiments, we found that pose networks are very susceptible to the issues above; the network starts estimating very large poses within the first hundred iterations and never recovers.
We also experimented with rendering the features and decoding them, similar to~\cite{wiles2019synsin}, but found that this resulted in worse alignment performance.
\input{sections/f4_model_visual}
\subsection{Losses}
We use three consistency losses to train our model: photometric, depth, and correspondence.
The photometric and depth losses are the L1 losses applied between the rendered and input RGB-D frames.
Those losses are masked to only apply to valid pixels, as discussed in \S~\ref{sec:method_render}.
Additionally, we use the correspondence error calculated in Eq.~\ref{eq:corr_err} as our correspondence loss.
We weight the photometric and depth losses with a factor of 1, while the correspondence loss receives a weight of 0.1.
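A minimal sketch of the resulting objective is shown below (PyTorch-style, with illustrative tensor shapes and names); the correspondence error term is assumed to be computed beforehand as in Eq.~\ref{eq:corr_err}.
\begin{verbatim}
# Minimal sketch of the masked photometric/depth losses and the total
# training objective; tensor shapes and names are illustrative.
import torch

def masked_l1(pred, target, valid):
    """pred, target: (B, C, H, W); valid: (B, 1, H, W) mask of pixels
    for which rendering was possible."""
    mask = valid.expand_as(pred)
    return ((pred - target).abs() * mask).sum() / mask.sum().clamp(min=1)

def total_loss(rgb_render, rgb_gt, depth_render, depth_gt, valid, corr_err,
               w_photo=1.0, w_depth=1.0, w_corr=0.1):
    # corr_err: precomputed weighted correspondence error
    return (w_photo * masked_l1(rgb_render, rgb_gt, valid)
            + w_depth * masked_l1(depth_render, depth_gt, valid)
            + w_corr * corr_err)
\end{verbatim}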
\section{Introduction}
\label{sec:intro}
Image fusion is frequently involved in modern \mbox{image-guided} medical interventions, typically augmenting \mbox{intra-operatively} acquired \mbox{2-D}\xspace \mbox{X-ray}\xspace images with \mbox{pre-operative} \mbox{3-D}\xspace CT or MRI images. Accurate alignment between the fused images is essential for clinical applications and can be achieved using \mbox{2-D/3-D}\xspace rigid registration, which aims at finding the pose of a \mbox{3-D}\xspace volume in order to align its projections to \mbox{2-D}\xspace \mbox{X-ray}\xspace images. Most commonly, \mbox{intensity-based} methods are employed~\cite{markelj2010review}, where a similarity measure between the \mbox{2-D}\xspace image and the projection of the \mbox{3-D}\xspace image is defined and optimized, as e.\,g.\xspace~described by Kubias~et~al.\xspace~\cite{IMG08}. Despite decades of investigations, \mbox{2-D/3-D}\xspace registration remains challenging. The difference in dimensionality of the input images results in an \mbox{ill-posed} problem. In addition, content mismatch between the \mbox{pre-operative} and \mbox{intra-operative} images, poor image quality and a limited field of view challenge the robustness and accuracy of registration algorithms. Miao~et~al.\xspace~\cite{DFM17} propose a \mbox{learning-based} registration method that is built upon the intensity-based approach. While they achieve a high robustness, registration accuracy remains challenging.
The intuition of \mbox{2-D/3-D}\xspace rigid registration is to globally minimize the visual misalignment between \mbox{2-D}\xspace images and the projections of the \mbox{3-D}\xspace image.
Based on this intuition, Schmid and Ch{\^e}nes~\cite{segm2014Schmid} decompose the target structure to local shape patches and model image forces using Hooke's law of a spring from image block matching.
Wang~et~al.\xspace~\cite{DRR17} propose a \mbox{point-to-plane} \mbox{correspondence (PPC)} model for \mbox{2-D/3-D}\xspace registration, which linearly constrains the global differential motion update using local correspondences. Registration is performed by iteratively establishing correspondences and performing the motion estimation.
During the intervention, devices and implants, as well as locally similar anatomies, can introduce outliers for local correspondence search (see Fig. \ref{fig:sample:td} and \ref{fig:sample:NGC}). Weighting of local correspondences, in order to emphasize the correct correspondences, directly influences the accuracy and robustness of the registration.
An iterative reweighted scheme is suggested by Wang~et~al.\xspace~\cite{DRR17} to enhance the robustness against outliers. However, this scheme only works when outliers are a minority of the measurements.
Recently, Qi~et~al.\xspace~\cite{PND17} proposed the PointNet, a type of neural network directly processing point clouds. PointNet is capable of internally extracting global features of the cloud and relating them to local features of individual points. Thus, it is well suited for correspondence weighting in \mbox{2-D/3-D}\xspace registration.
Yi~et~al.\xspace~\cite{LFG18} propose to learn the selection of correct correspondences for wide-baseline stereo images. As a basis, candidates are established, e.\,g.\xspace~using SIFT features. Ground truth labels are generated by exploiting the epipolar constraint. This way, an outlier label is generated. Additionally, a regression loss is introduced, which is based on the error in the estimation of a known essential matrix between two images. Both losses are combined during training. While including the regression
loss improves the results, the classification loss is shown to be important to find highly accurate correspondences.
The performance of iterative correspondence-based registration algorithms
(e.\,g.\xspace~\cite{segm2014Schmid}, \cite{DRR17})
can be improved by learning a weighting strategy for the correspondences.
However, automatic labeling of the correspondences is not practical for iterative methods as even correct correspondences may have large errors in the first few iterations.
This means that labeling cannot be performed by applying a simple rule such as a threshold based on the ground truth position of a point.
In this paper, we propose a method to learn an optimal weighting strategy for the local correspondences for rigid \mbox{2-D/3-D}\xspace registration directly with the criterion to minimize the registration error, without the need of per-correspondence ground truth annotations.
We treat the correspondences as a point cloud with extended \mbox{per-point} features and use a modified PointNet architecture to learn global interdependencies of local correspondences according to the PPC registration metric.
We choose to use the PPC model as it was shown to enable a high registration accuracy as well as robustness~\cite{DRR17}. Furthermore, it is differentiable and therefore lends itself to the use in our training objective function.
To train the network, we propose a novel training objective function, which is composed of the motion estimation according to the PPC model and the registration error computation steps. It allows us to learn a correspondence weighting strategy by minimizing the registration error.
We demonstrate the effectiveness of the learned weighting strategy by evaluating our method on \mbox{single-vertebra} registration, where we show a highly improved robustness compared to the original PPC registration.
\section{Registration and Learned Correspondence Weighting}
In the following section, we begin with an overview of the registration method using the PPC model.
Then, further details on motion estimation (see Sec.~\ref{sec:motionEstimation}) and registration error computation (see Sec.~\ref{sec:errorComputation}) are given, as these two steps play a crucial role in our objective function.
The architecture of our network is discussed in Sec.~\ref{sec:architecture}, followed by the introduction of our objective function in Sec.~\ref{sec:objective}.
At last, important details regarding the training procedure are given in Sec.~\ref{sec:training}.
\subsection{Registration Using Point-to-Plane Correspondences}
Wang~et~al.\xspace~\cite{DRR17} measure the local misalignment between the projection of a \mbox{3-D}\xspace volume $V$ and the \mbox{2-D}\xspace fluoroscopic (live \mbox{X-ray}\xspace) image $I^\mathrm{FL}$ and compute a motion which compensates for this misalignment.
Surface points are extracted from $V$ using the \mbox{3-D}\xspace Canny detector~\cite{CAE86}.
A set of contour generator points~\cite{hartley03contGen} $\set{\mathbf{w}_i}$, i.\,e.\xspace~surface points $\mathbf{w}_i\in\mathbb{R}^3$ which correspond to contours in the projection of $V$, is projected onto the image as $\set{\mathbf{p}_i}$, i.\,e.\xspace~a set of points $\mathbf{p}_i\in\mathbb{R}^3$ on the image plane.
Additionally, gradient projection images of $V$ are generated and used to perform local patch matching to find correspondences for $\mathbf{p}_i$ in $I^\mathrm{FL}$.
Assuming that the motion along contours is not detectable, the patch matching is only performed in the orthogonal direction to the contour.
Therefore, the displacement of $\mathbf{w}_i$ along the contour is not known, as well as the displacement along the viewing direction. These unknown directions span the plane $\mathrm{\Pi}_i$ with the normal $\mathbf{n}_i\in\mathbb{R}^3$. After the registration, a point $\mathbf{w}_i$ should be located on the plane $\mathrm{\Pi}_i$.
To minimize the point-to-plane distances $\distance{\mathbf{w}_i}{\mathrm{\Pi}_i}$, a linear equation is defined for each correspondence under the small angle assumption.
The resulting system of equations is solved for the differential motion $\delta\mathbf{v}\in\mathbb{R}^6$, which contains both rotational components in the axis-angle representation $\delta{\boldsymbol{\omega}}\in\mathbb{R}^3$ and translational components $\delta{\boldsymbol{\nu}}\in\mathbb{R}^3$, i.\,e.\xspace~$\delta\mathbf{v}=(\delta{\boldsymbol{\omega}}^\intercal, \delta{\boldsymbol{\nu}}^\intercal)^\intercal$.
The correspondence search and motion estimation steps are applied iteratively over multiple resolution levels.
To increase the robustness of the motion estimation, the maximum correntropy criterion for regression (MCCR)~\cite{LMC15} is used to solve the system of linear equations~\cite{DRR17}.
The motion estimation is extended to coordinate systems related to the camera coordinates by a rigid transformation by Schaffert~et~al.\xspace~\cite{MVD17}.
The PPC model sets up a linear relationship between the local point-to-plane correspondences and the differential transformation, i.\,e.\xspace a linear misalignment metric based on the found correspondences.
In this paper, we introduce a learning method for correspondence weighting, where the PPC metric is used during training to optimize the weighting strategy for the used correspondences with respect to the registration error.
\subsection{Weighted Motion Estimation}
\label{sec:motionEstimation}
Motion estimation according to the PPC model is performed by solving a linear system of equations defined by $\matr{A}\in\mathbb{R}^{N\times6}$ and $\vect{b}\in\mathbb{R}^N$, where each equation corresponds to one point-to-plane correspondence and $N$ is the number of used correspondences.
We perform the motion estimation in the camera coordinate system with the origin shifted to the centroid of $\set{\mathbf{w}_i}$. This allows us to use the regularized least-squares estimation
\begin{equation}
\delta\mathbf{v} = \underset{\delta\mathbf{v}'}{\arg\min}\left(\dfrac{1}{N}\norm{\matr{A}_s\delta\mathbf{v}'-\vect{b}_s}_2^2 + \lambda \norm{\delta\mathbf{v}'}_2^2\right)
\label{eq:LS}
\end{equation}
in order to improve the robustness of the estimation.
Here, $\matr{A}_s=\matr{S}\cdot\matr{A}$, $\vect{b}_s=\matr{S}\cdot\vect{b}$ and $\lambda$ is the regularizer weight. The diagonal matrix $\matr{S}=\text{diag}(\vect{s})$ contains weights $\vect{s}\in\mathbb{R}^N$ for all correspondences. As Eq.~\eqref{eq:LS} is differentiable w.\,r.\,t.\xspace $\delta\mathbf{v}'$, we obtain
\begin{equation}
\delta\mathbf{v}=\regPPC{\matr{A}, \vect{b},\mathbf{s}}=(\matr{A}_s^\intercal\matr{A}_s+N\cdot\lambda \matr{I})^{-1}\matr{A}_s^\intercal\vect{b}_s \enspace ,
\label{eq:LSClosedForm}
\end{equation}
where $\matr{I}\in\mathbb{R}^{6\times6}$ is the identity matrix.
After each iteration, the registration $\matr{T}\in\mathbb{R}^{4\times4}$ is updated as
\begin{equation}
\matr{T} =
\begin{pmatrix}
\cos(\alpha)\matr{I}+(1-\cos(\alpha))\vect{r}\vect{r}^\intercal+\sin(\alpha)[\vect{r}]_\times & \delta{\boldsymbol{\nu}} \\ 0 & 1
\end{pmatrix}
\cdot \hat{\matr{T}}
\enspace ,
\label{eq:currReg}
\end{equation}
where $\alpha = \norm{\delta{\boldsymbol{\omega}}}$, $\vect{r} = \delta{\boldsymbol{\omega}}/\norm{\delta{\boldsymbol{\omega}}}$, $[\vect{r}]_\times\in\mathbb{R}^{3\times3}$ is a skew matrix which expresses the cross product with $\vect{r}$ as a matrix multiplication and $\hat{\matr{T}}\in\mathbb{R}^{4\times4}$ is the registration after the previous iteration~\cite{DRR17}.
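For illustration, the following Python sketch re-implements the weighted, regularized motion estimation of Eq.~\eqref{eq:LSClosedForm} and the pose update of Eq.~\eqref{eq:currReg}; variable names are illustrative and the sketch omits details such as the resolution-level loop.
\begin{verbatim}
# Minimal sketch: weighted, regularized motion estimation and pose update.
import numpy as np

def estimate_motion(A, b, s, lam=0.01):
    """A: (N, 6), b: (N,), s: (N,) correspondence weights.
    Returns delta_v = (delta_omega, delta_nu)."""
    As, bs = s[:, None] * A, s * b
    N = A.shape[0]
    return np.linalg.solve(As.T @ As + N * lam * np.eye(6), As.T @ bs)

def update_pose(T_hat, delta_v):
    """Apply the axis-angle/translation update to the current registration T_hat (4, 4)."""
    d_omega, d_nu = delta_v[:3], delta_v[3:]
    alpha = np.linalg.norm(d_omega)
    R = np.eye(3)
    if alpha > 1e-12:
        r = d_omega / alpha
        rx = np.array([[0, -r[2], r[1]], [r[2], 0, -r[0]], [-r[1], r[0], 0]])
        R = (np.cos(alpha) * np.eye(3)
             + (1 - np.cos(alpha)) * np.outer(r, r)
             + np.sin(alpha) * rx)                 # Rodrigues rotation
    dT = np.eye(4)
    dT[:3, :3], dT[:3, 3] = R, d_nu
    return dT @ T_hat
\end{verbatim}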
\subsection{Registration Error Computation}
\label{sec:errorComputation}
In the training phase, the registration error is measured and minimized via our training objective function.
Different error metrics, such as the mean target registration error (mTRE) or the mean re-projection distance (mRPD) can be used. For more details on these metrics, see Sec.~\ref{sec:evalMetrics}.
In this work, we choose the projection error (PE)~\cite{GBD14}, as it directly corresponds to the visible misalignment in the images and therefore roughly correlates with the difficulty of finding correspondences by patch matching in the next iteration of the registration method. The PE is computed as
\begin{equation}
e = \errorFunc{\matr{T}, \matr{T}^{\mathrm{GT}}}=\dfrac{1}{M} \sum_{j=1}^M \norm{\projection{\matr{T}}{\mathbf{q}_j}-\projection{\matr{T}^{\mathrm{GT}}}{\mathbf{q}_j}} \enspace ,
\label{eq:error}
\end{equation}
where a set of $M$ target points $\set{\mathbf{q}_j}$ is used and $j$ is the point index. $\projection{\matr{T}}{\cdot}$ is the projection onto the image plane under the currently estimated registration and $\projection{\matr{T}^{\mathrm{GT}}}{\cdot}$ the projection under the \mbox{ground-truth} registration matrix $\matr{T}^{\mathrm{GT}}\in\mathbb{R}^{4\times4}$. Corners of the bounding box of the point set $\set{\mathbf{w}_i}$ are used as $\set{\mathbf{q}_j}$.
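A compact sketch of the PE computation is shown below. It assumes the projection operator can be written as a $3\times4$ projection matrix acting on homogeneous points, which is an illustrative simplification since the exact projection model is not restated here.
\begin{verbatim}
# Minimal sketch of the projection error; P is an assumed 3x4 projection
# matrix mapping homogeneous 3-D points to the image plane.
import numpy as np

def project(P, T, q):
    """Project 3-D target points q (M, 3) under the registration T (4, 4)."""
    qh = np.c_[q, np.ones(len(q))] @ T.T          # transform to camera frame
    uvw = qh @ P.T                                # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]

def projection_error(P, T_est, T_gt, q):
    return np.linalg.norm(project(P, T_est, q) - project(P, T_gt, q), axis=1).mean()
\end{verbatim}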
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{architecture.pdf}
\caption{Modified PointNet~\cite{PND17} architecture used for correspondence weighting. Rectangles with dashed outlines indicate feature vectors (orange for local features, i.\,e.\xspace containing information from single correspondences, and red for global features, i.\,e.\xspace containing information from the entire set of correspondences). Sets of feature vectors (one feature vector per correspondence) are depicted as a column of feature vectors (three correspondences shown here).
MLP denotes a multi-layer perceptron, which is applied to each feature vector individually.
}
\label{fig:architecture}
\end{figure}
\subsection{Network Architecture}
\label{sec:architecture}
We want to weight individual correspondences based on their geometrical properties as well as the image similarity, taking into account the global properties of the correspondence set.
For every correspondence, we define the features
\begin{equation}
\vect{f}_i =
\begin{pmatrix}
\mathbf{w}_i^\intercal & \vect{n}_i^\intercal & \distance{\mathbf{w}_i}{\mathrm{\Pi}_i} & \text{NGC}_i
\end{pmatrix}^\intercal \enspace ,
\label{eq:features}
\end{equation}
where $\text{NGC}_i$ denotes the normalized gradient correlation for the correspondences, which is obtained in the patch matching step.
The goal is to learn the mapping from a set of feature vectors $\set{\vect{f}_i}$ representing all correspondences to the weight vector $\vect{s}$ containing weights for all correspondences, i.\,e.\xspace~the mapping
\begin{equation}
\text{M}_{{\boldsymbol{\theta}}}: \set{\vect{f}_i} \mapsto \vect{s} \enspace ,
\end{equation}
where $\text{M}_{{\boldsymbol{\theta}}}$ is our network, and ${\boldsymbol{\theta}}$ the network parameters.
To learn directly on correspondence sets, we use the PointNet~\cite{PND17} architecture and modify it to fit our task (see Fig.~\ref{fig:architecture}).
The basic idea behind PointNet is to process points individually and obtain global information by combining the points in a symmetric way, i.\,e.\xspace~independent of the order in which the points appear in the input~\cite{PND17}.
In the simplest variant, the PointNet consists of a multi-layer perceptron (MLP) which is applied for each point, transforming the respective $\vect{f_i}$ into a \mbox{higher-dimensional} feature space and thereby obtaining a local point descriptor.
To describe the global properties of the point set, the resulting local descriptors are combined by max pooling over all points, i.\,e.\xspace~for each feature, the maximum activation over all points in the set is retained.
To obtain per-point outputs, the resulting global descriptor is concatenated to the local descriptors of each point.
The resulting descriptors, containing global as well as local information, are further processed for each point independently by a second MLP.
For our network, we choose MLPs with the size of $8\times64\times128$ and $256\times64\times1$, which are smaller than in the original network~\cite{PND17}.
We enforce the output to be in the range of $[0;1]$ by using a softsign activation function~\cite{elliott1993better} in the last layer of the second MLP and modify it to re-scale the output range from $(-1;1)$ to $(0;1)$.
Our modified softsign activation function $f(\cdot)$ is defined as
\begin{equation}
f(x) = \left(\dfrac{x}{1+|x|}+1\right)\cdot0.5 \enspace ,
\end{equation}
where $x$ is the state of the neuron.
Additionally, we introduce a global trainable weighting factor which is applied to all correspondences.
This allows for an automatic adjustment of the strength of the regularization in the motion estimation step.
Note that the network is able to process correspondence sets of variable size so that no fixed amount of correspondences is needed and all extracted correspondences can be utilized.
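A minimal PyTorch sketch of this architecture is given below. The MLP sizes, the max pooling, the rescaled softsign output, and the global trainable factor follow the description above, while the remaining details (intermediate activations, initialization, batching of variable-size sets) are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
# Minimal sketch of the modified PointNet used for correspondence weighting.
import torch
import torch.nn as nn

class CorrespondenceWeightingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # per-correspondence MLP: 8 -> 64 -> 128 (local descriptor)
        self.mlp1 = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                                  nn.Linear(64, 128), nn.ReLU())
        # per-correspondence MLP on [local, global]: 256 -> 64 -> 1
        self.mlp2 = nn.Sequential(nn.Linear(256, 64), nn.ReLU(),
                                  nn.Linear(64, 1))
        self.global_scale = nn.Parameter(torch.ones(1))  # global weighting factor

    def forward(self, f):
        """f: (B, N, 8) per-correspondence features; returns weights (B, N)."""
        local = self.mlp1(f)                               # (B, N, 128)
        glob = local.max(dim=1, keepdim=True).values       # (B, 1, 128) global descriptor
        x = torch.cat([local, glob.expand_as(local)], -1)  # (B, N, 256)
        x = self.mlp2(x).squeeze(-1)                       # (B, N)
        s = (x / (1 + x.abs()) + 1) * 0.5                  # rescaled softsign in (0, 1)
        return self.global_scale * s
\end{verbatim}
The max pooling over the correspondence dimension keeps the mapping symmetric, so the predicted weights are invariant to the order in which correspondences are fed to the network.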
\subsection{Training Objective}
\label{sec:objective}
We now combine the motion estimation, PE computation and the modified PointNet to obtain the training objective function as
\begin{equation}
\boldsymbol{\theta}=\underset{\mathbf{{\boldsymbol{\theta}}'}}{\arg\min}\dfrac{1}{K}\sum_{k=1}^K\errorFunc{\regPPC{\matr{A}_k, \vect{b}_k, \text{M}_{{\boldsymbol{\theta}}'}(\set{\mathbf{f}_i}_k)},\matr{T}^{\mathrm{GT}}_k} \enspace ,
\label{eq:Objective}
\end{equation}
where $k$ is the training sample index and $K$ the overall number of samples. Equation \eqref{eq:LSClosedForm} is differentiable with respect to $\vect{s}$, Eq.~\eqref{eq:currReg} with respect to $\delta\mathbf{v}$ and Eq.~\eqref{eq:error} with respect to $\matr{T}$.
Therefore, gradient-based optimization can be performed on Eq.~\eqref{eq:Objective}.
Note that using Eq.~\eqref{eq:Objective}, we learn directly with the objective to minimize the registration error and no per-correspondence \mbox{ground-truth} weights are needed.
Instead, the PPC metric is used to implicitly assess the quality of the correspondences during the back-propagation step of the training and the weights are adjusted accordingly. In other words, the optimization of the weights is driven by the PPC metric.
\subsection{Training Procedure}
\label{sec:training}
To obtain training data, a set of volumes $\setV$ is used, each with one or more \mbox{2-D}\xspace images $\set{I^\mathrm{FL}}$ and a known $\matr{T}^{\mathrm{GT}}$ (see Sec.~\ref{sec:data}). For each pair of images, 60 random initial transformations with a uniformly distributed mTRE are generated~\cite{SEM05}. For details on the computation of the mTRE and start positions, see Sec.~\ref{sec:evalMetrics}.
Estimation of correspondences at training time is computationally expensive.
Instead, the correspondence search is performed once and the precomputed correspondences are used during training.
Training is performed for one iteration of the registration method and start positions with a small initial error are assumed to be representative for subsequent registration iterations at test time.
For training, the number of correspondences is fixed to 1024 to enable efficient batch-wise computations. The subset of used correspondences is selected randomly for every training step. Data augmentation is performed on the correspondence sets by applying translations, \mbox{in-plane} rotations and horizontal flipping, i.\,e.\xspace reflection over the plane spanned by the vertical axis of the \mbox{2-D}\xspace image and the principal direction. For each resolution level, a separate model is trained.
\section{Experiments and Results}
\subsection{Data}
\label{sec:data}
\begin{figure}[t]
\centering
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S1Marked.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S2Marked.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S3Marked.jpg}
}
\hfill
\\
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S13D.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S23D.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S33D.jpg}
}
\hfill
\caption{Examples of \mbox{2-D}\xspace images used as $I^\mathrm{FL}$ (top row) and the corresponding \mbox{3-D}\xspace images used as $V$ (bottom row) in the registration evaluation. Evaluated vertebrae are marked by a yellow cross in the top row.}
\label{fig:data}
\end{figure}
We perform experiments for \mbox{single-view} registration of individual vertebrae.
Note that \mbox{single-vertebra} registration is challenging due to the small size of the target structure and the presence of neighboring vertebrae; achieving a high robustness is therefore particularly difficult.
We use clinical C-arm CT acquisitions from the thoracic and pelvic regions of the spine for training and evaluation. Each acquisition consists of a sequence of \mbox{2-D}\xspace images acquired with a rotating C-arm. These images are used to reconstruct the \mbox{3-D}\xspace volume. To enable reconstruction, the C-arm geometry has to be calibrated with a high accuracy (the accuracy is $\leq 0.16$\,mm for the projection error at the iso-center in our case). We register the acquired \mbox{2-D}\xspace images to the respective reconstructed volume and therefore the ground truth registration is known within the accuracy of the calibration.
Vertebrae are defined by an axis-aligned volume of interest (VOI) containing the whole vertebra. Only surface points inside the VOI are used for registration. We register the projection images (resolution of $616\times480$ pixels, pixel size of 0.62\,mm) to the reconstructed volumes (containing around 390 slices with slice resolution of $512\times512$ voxels and voxel size of 0.49\,mm).
To simulate realistic conditions, we add Poisson noise to all \mbox{2-D}\xspace images and rescale the intensities to better match fluoroscopic images.
The training set consists of \mbox{19 acquisitions} with a total of \mbox{77 vertebrae}.
For each vertebra, \mbox{8 different} \mbox{2-D}\xspace images are used. An additional validation set of \mbox{23 vertebrae} from \mbox{6 acquisitions} is used to monitor the training process.
The registration is performed on a test set of 6 acquisitions. For each acquisition, \mbox{2 vertebrae} are evaluated and registration is performed independently for both the \mbox{anterior-posterior} and the lateral views.
Each set contains data from different patients, i.\,e.\xspace~no patient appears in two different sets. The sets were defined so that all sets are representative to the overall quality of the available images, i.\,e.\xspace~contain both pelvic and thoracic vertebrae, as well as images with more or less clearly visible vertebrae.
Examples of images used in the test set are shown in Fig.~\ref{fig:data}.
\subsection{Compared Methods}
\label{sec:comparedMethods}
We evaluate the performance of the registration using the PPC model in combination with the learned correspondence weighting strategy (PPC-L), which was trained using our proposed metric-driven learning method.
To show the effectiveness of the correspondence weighting, we compare PPC-L to the original PPC method. The compared methods differ in the computation of the correspondence weights $\vect{s}$ and the regularizer weight $\lambda$. For \mbox{PPC-L}\xspace, the correspondence weights $\vect{s}^\mathrm{L} = \text{M}_{{\boldsymbol{\theta}}}(\set{\mathbf{f}_i})$ and $\lambda = 0.01$ are used. For PPC, we set $\lambda = 0$ and the used correspondence weights $\vect{s}^\mathrm{PPC}$ are the $\text{NGC}_i$ values of the found correspondences, where any value below $0.1$ is set to $0$, i.\,e.\xspace~the correspondence is rejected. Additionally, the MCCR is used in the PPC method only. The minimum resolution level has a scaling of 0.25 and the highest a scaling of 1.0. For the PPC method, registration is first performed on the lowest resolution level without allowing motion in depth, as this was shown to increase the robustness of the method. To differentiate between the effect of the correspondence weighting and the regularized motion estimation, we also consider registration using regularized motion estimation. We use a variant where the global weighting factor, which is applied to all points, is matched to the regularizer weight automatically by using our objective function (\mbox{PPC-R}\xspace). For the different resolution levels, we obtained a data weight in the range of $[2.0 ; 2.1]$. Therefore, we use $\lambda = 0.01$ and $\vect{s}^\mathrm{R} = 2.0 \cdot \vect{s}^\mathrm{PPC}$. Additionally, we empirically set the correspondence weight to $\vect{s}^\mathrm{RM} = 0.25 \cdot \vect{s}^\mathrm{PPC}$, which increases the robustness of the registration while still allowing for a reasonable amount of motion (\mbox{PPC-RM}\xspace).
\subsection{Evaluation Metrics}
\label{sec:evalMetrics}
To evaluate the registration, we follow the standardized evaluation methodology~\cite{SEM05,ROC13}.
The following metrics are defined by van de Kraats~et~al.\xspace~\cite{SEM05}:
\begin{itemize}
\item{\it Mean Target Registration Error:}
The mTRE is defined as the mean distance between the target points transformed by $\matr{T}^{\mathrm{GT}}$ and by the estimated registration $\matr{T}^\mathrm{est}\in\mathbb{R}^{4\times4}$.
\item{\it Mean Re-Projection Distance (mRPD):}
The mRPD is defined as the mean distance between the target points transformed by $\matr{T}^{\mathrm{GT}}$ and the \mbox{re-projection} rays of the same points as projected under $\matr{T}^\mathrm{est}$.
\item{\it Success Rate (SR):}
The SR is the percentage of registrations with a registration error below a given threshold. As we are concerned with \mbox{single-view} registration, we define the success criterion as an mRPD $\leq$ 2\,mm.
\item{\it Capture Range (CR):}
The CR is defined as the maximum initial mTRE for which at least 95\% of registrations are successful.
\end{itemize}
Additionally, we compute the gross success rate (GSR)~\cite{DFM17} as well as a gross capture range (GCR) with a success criterion of a mRPD $\leq$ 10\,mm in order to further assess the robustness of the methods in case of a low accuracy.
We define target points as uniformly distributed points inside the VOI of the registered vertebra.
For the evaluation, we generate 600 random start transformations for each vertebra in a range \mbox{of 0\,mm - 30\,mm} initial mTRE using the methodology described by van de Kraats~et~al.\xspace~\cite{SEM05}.
We evaluate the accuracy using the mRPD and the robustness using the SR, CR, GSR and GCR.
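For concreteness, the error measures and the success criterion can be sketched in a few lines of Python (a simplified illustration, not the actual evaluation code; it assumes a point-like X-ray source position so that the re-projection ray of a point is the line joining the source and the point transformed by $\matr{T}^\mathrm{est}$):
\begin{verbatim}
import numpy as np

def transform(T, pts):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    return (np.c_[pts, np.ones(len(pts))] @ T.T)[:, :3]

def mtre(T_gt, T_est, targets):
    """Mean target registration error."""
    return np.mean(np.linalg.norm(transform(T_gt, targets)
                                  - transform(T_est, targets), axis=1))

def mrpd(T_gt, T_est, targets, source):
    """Mean re-projection distance: distance of the GT-transformed targets
    to the lines joining the source position and the targets under T_est."""
    p_gt, p_est = transform(T_gt, targets), transform(T_est, targets)
    d = p_est - source
    d /= np.linalg.norm(d, axis=1, keepdims=True)       # ray directions
    v = p_gt - source
    v_par = np.sum(v * d, axis=1, keepdims=True) * d    # component along ray
    return np.mean(np.linalg.norm(v - v_par, axis=1))

def success_rate(mrpd_values, threshold=2.0):
    """Percentage of registrations with mRPD below the threshold (in mm)."""
    return 100.0 * np.mean(np.asarray(mrpd_values) <= threshold)
\end{verbatim}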
\subsection{Results and Discussion}
\subsubsection{Accuracy and Robustness}
The evaluation results for the compared methods are summarized in Tab. \ref{tab:resBase}. We observe that \mbox{PPC-L}\xspace achieves the best SR of 94.3\,\% and CR of 13\,mm. Compared to PPC (SR of 79.3\,\% and CR of 3\,mm), \mbox{PPC-R}\xspace also achieves a higher SR of 88.1\,\% and CR of 6\,mm. For the regularized motion estimation, the accuracy decreases for increasing regularizer influence (0.79$\pm${0.22}\,mm for \mbox{PPC-R}\xspace and 1.18$\pm${0.42}\,mm for \mbox{PPC-RM}\xspace), compared to PPC (0.75$\pm$0.21\,mm) and \mbox{PPC-L}\xspace (0.74$\pm$0.26\,mm). A sample registration result using \mbox{PPC-L}\xspace is shown in Fig.~\ref{fig:sample:res}.
\begin{table}[b]
\centering
\caption{Evaluation results for the compared methods. The mRPD is computed for the 2\,mm success criterion and is shown as mean\,$\pm$\,standard deviation.}
\label{tab:angioRes}
\begin{tabular}{l|c|c|c|c|c}
\hline
Method & mRPD {[}mm{]} & SR {[}\%{]} & CR {[}mm{]} & GSR {[}\%{]} & GCR {[}mm{]}\\
\hline
PPC & 0.75$\pm$0.21 & 79.3 & 3 & 81.8 & 3 \\
\mbox{PPC-R}\xspace & {0.79}$\pm${0.22} & 88.1 & 6 & 90.72 & 6 \\
\mbox{PPC-RM}\xspace & {1.18}$\pm${0.42} & 59.6 & 4 & 95.1 & 20 \\
\bf{\mbox{PPC-L}\xspace} & {{0.74}}$\pm$0.26 & 94.3 & 13 & 96.3 & 22 \\
\hline
\end{tabular}
\label{tab:resBase}
\end{table}
\begin{figure}[t]
\centering
\subfloat[\label{fig:sample:td}]{%
\includegraphics[width=0.23\textwidth]{Im2D-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:NGC}]{%
\includegraphics[width=0.23\textwidth]{ImNGC-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:W}]{%
\includegraphics[width=0.23\textwidth]{ImW-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:res}]{%
\includegraphics[width=0.23\textwidth]{ImRes-3.jpg}
}
\caption{Registration example: (a) shows $I^\mathrm{FL}$ with one marked vertebra to register. Red dots depict initially extracted (b,\,c) and final aligned (d) contour points. Green lines depict the same randomly selected subset of correspondences, whose intensities are determined by $\text{NGC}_i$ (b) and learned weights (c). Final \mbox{PPC-L}\xspace registration result overlaid in yellow (d). Also see video in the supplementary material.
}
\label{fig:sample}
\end{figure}
For strongly regularized motion estimation, we observe a large difference between the GSR and the SR. While for \mbox{PPC-R}\xspace, the difference is relatively small \mbox{(88.1\% vs. 90.7\%)}, it is very high for \mbox{PPC-RM}\xspace. Here a GSR of 95.1\,\% is achieved, while the SR is 59.6\,\%. This indicates that while the method is robust, the accuracy is low. Compared to the CR, the GCR is increased for \mbox{PPC-L}\xspace (22\,mm vs. 13\,mm) and especially for \mbox{PPC-RM}\xspace (20\,mm vs. 4\,mm).
Overall, this shows that while some inaccurate registrations are present in \mbox{PPC-L}\xspace, they are very common for \mbox{PPC-RM}\xspace.
\subsubsection{Single Iteration Evaluation}
\begin{figure}[b]
\centering
\subfloat[PPC]{%
\includegraphics[width=0.32\textwidth]{BaseSingleIter.pdf}
}
\hfill
\subfloat[\mbox{PPC-R}\xspace]{%
\includegraphics[width=0.32\textwidth]{RegSingleIter.pdf}
}
\hfill
\subfloat[\mbox{PPC-L}\xspace]{%
\includegraphics[width=0.32\textwidth]{LearnedSingleIter.pdf}
}
\caption{Histograms showing initial and result projection error (PE) in pixels for a single iteration of registration on lowest resolution level (on validation set, 1024 correspondences per case). Motion estimation was performed using least squares for all methods. For PPC, no motion in depth is estimated (see Sec.~\ref{sec:comparedMethods}).}
\label{fig:singleIter}
\end{figure}
To better understand the effect of the correspondence weighting and regularization, we investigate the registration results after one iteration on the lowest resolution level. In Fig. \ref{fig:singleIter}, the PE in pixels (computed using $\set{\mathbf{q}_j}$ as target points) is shown for all cases in the validation set. As in training, 1024 correspondences are used per case for all methods. We observe that for PPC, the error has a high spread, where for some cases, it is decreased considerably, while for other cases, it is increased. For \mbox{PPC-R}\xspace, most cases are below the initial error. However, the error is decreased only marginally, as the regularization prevents large motions. For \mbox{PPC-L}\xspace, we observe that the error is drastically decreased for most cases. This shows that \mbox{PPC-L}\xspace is able to estimate motion efficiently. An example for correspondence weighting in \mbox{PPC-L}\xspace is shown in Fig.~\ref{fig:sample:W}, where we observe a set of consistent correspondences with high weights, while the remaining correspondences have low weights.
\subsubsection{Method Combinations}
\begin{figure}[t]
\centering
\subfloat[\mbox{PPC-RM+}\xspace]{%
\includegraphics[width=0.49\textwidth]{RegMPAfterFirst.pdf}
}
\hfill
\subfloat[\mbox{PPC-L+}\xspace]{%
\includegraphics[width=0.49\textwidth]{LearnedPAfterFirst.pdf}
}
\caption{Box plots of the distribution of the resulting mRPD on the lowest resolution level for successful registrations, for different initial mTRE intervals.}
\label{fig:boxFirstResLevel}
\end{figure}
We observed that while the \mbox{PPC-RM}\xspace method has a high robustness (GCR and GSR), it leads to low accuracy. For \mbox{PPC-L}\xspace, we observed an increased GCR compared to the CR. In both cases, this demonstrates that registrations are present with a mRPD between 2\,mm and 10\,mm. As the PPC works reliably for small initial errors, we combine these methods with PPC by performing PPC on the highest resolution level instead of the respective method. We denote the resulting methods as \mbox{PPC-RM+}\xspace and \mbox{PPC-L+}\xspace. We observe that \mbox{PPC-RM+}\xspace achieves an accuracy of 0.74$\pm$0.18\,mm, an SR of 94.6\,\% and a CR of 18\,mm, while \mbox{PPC-L+}\xspace achieves an accuracy of 0.74$\pm$0.19\,mm, an SR of 96.1\,\% and a CR of 19\,mm. While the results are similar, we note that for \mbox{PPC-RM+}\xspace a manual weight selection is necessary. Further investigations are needed to clarify the better performance of PPC compared to \mbox{PPC-L}\xspace on the highest resolution level. However, this result may also demonstrate the strength of MCCR for cases where the majority of correspondences are correct.
We evaluate the convergence behavior of \mbox{PPC-L+}\xspace and \mbox{PPC-RM+}\xspace by only considering cases which were successful. For these cases, we investigate the error distribution after the first resolution level. The results are shown in Fig. \ref{fig:boxFirstResLevel}. We observe that for \mbox{PPC-L+}\xspace, a mRPD of below 10\,mm is achieved for all cases, while for \mbox{PPC-RM+}\xspace, higher misalignment of around 20\,mm mRPD is present. The result for \mbox{PPC-L+}\xspace is achieved after an average of 7.6 iterations, while 11.8 iterations were performed on average for \mbox{PPC-RM+}\xspace using the stop criterion defined in~\cite{DRR17}. In combination, this further substantiates our findings from the single iteration evaluation and shows the efficiency of \mbox{PPC-L}\xspace and its potential for reducing the computational cost.
\section{Conclusion}
For \mbox{2-D/3-D}\xspace registration, we propose a method to learn the weighting of the local correspondences directly from the global criterion to minimize the registration error. We achieve this by incorporating the motion estimation and error computation steps into our training objective function. A modified PointNet network is trained to weight correspondences based on their geometrical properties and image similarity.
Using the \mbox{learning-based} correspondence weighting, a large improvement in registration robustness is demonstrated while maintaining high accuracy. Although a high robustness can also be achieved by regularized motion estimation, registration using learned correspondence weighting has the following advantages: it is more efficient, does not need manual parameter tuning, and achieves a high accuracy.
One direction of future work is to further improve the weighting strategy, e.\,g.\xspace~by including more information into the decision process and optimizing the objective function for robustness and/or accuracy depending on the stage of the registration, such as the current resolution level.
By regarding the motion estimation as part of the network and not the objective function, our model can also be understood in the framework of precision learning~\cite{PRT17} as a regression model for the motion, where we learn only the unknown component (weighting of correspondences), while employing prior knowledge to the known component (motion estimation).
Following the framework of precision learning, replacing further steps of the registration framework with learned counterparts can be investigated. One candidate is the correspondence estimation, as it is challenging to design an optimal correspondence estimation method by hand.
{\bf Disclaimer:} The concept and software presented in this paper are based on research and are not commercially available. Due to regulatory reasons its future availability cannot be guaranteed.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:intro}
This paper stems from our research of finite simple connected tetravalent graphs that admit a group of automorphisms acting transitively on vertices and edges but not on the arcs of
the graph; such groups of automorphisms are said to be {\em half-arc-transitive}. Observe that the full automorphism group $\mathrm{Aut}(\Gamma)$ of such a graph $\Gamma$
is then either arc-transitive or itself half-arc-transitive. In the latter case the graph $\Gamma$ is called {\em half-arc-transitive}.
Tetravalent graphs admitting a half-arc-transitive group of automorphisms
are surprisingly rich combinatorial objects with connections to several other areas of mathematics (see, for example,
\cite{ConPotSpa15, MarNedMaps,MarNed3, MarPis99, MarSpa08, PotSpiVerBook,genlost}). One of the most fruitful tools for analysing the structure of a tetravalent graph $\Gamma$
admitting a half-arc-transitive group $G$ is to study a certain $G$-invariant decomposition of the edge set $E(\Gamma)$ of $\Gamma$ into the
{\em $G$-alternating cycles} of some even length $2r$; the parameter $r$ is then called the {\em $G$-radius} and denoted $\mathop{{\rm rad}}_G(\Gamma)$
(see Section~\ref{sec:HAT} for more detailed definitions). Since $G$ is edge-transitive and the decomposition into $G$-alternating cycles
is $G$-invariant, any two intersecting $G$-alternating cycles meet in the same number of vertices; this number is then called the {\em attachment number}
and denoted $\mathop{{\rm att}}_G(\Gamma)$. When $G=\mathrm{Aut}(\Gamma)$
the subscript $G$ will be omitted in the above notation.
It is well known and easy to see that $\mathop{{\rm att}}_G(\Gamma)$ divides $2\mathop{{\rm rad}}_G(\Gamma)$.
However, for all known tetravalent half-arc-transitive graphs the attachment number in fact divides the radius.
This brings us to the following question that we would like to propose and address in this paper:
\begin{question}
\label{que:divides}
Is it true that the attachment number $\mathop{{\rm att}}(\Gamma)$ of an arbitrary tetravalent half-arc-transitive graph $\Gamma$ divides the radius $\mathop{{\rm rad}}(\Gamma)$?
\end{question}
By checking the complete list of all tetravalent half-arc-transitive graphs on up to $1000$ vertices (see~\cite{PotSpiVer15}), we see that the answer to the above question is affirmative for the graphs in that range. Further, as was proved in \cite[Theorem~1.2]{MarWal00}, the question has an affirmative answer in the case $\mathop{{\rm att}}(\Gamma) = 2$. In Section~\ref{sec:AT}, we generalise this result by proving the following theorem.
\begin{theorem}
\label{the:AT}
Let $\Gamma$ be a tetravalent half-arc-transitive graph. If its radius $\mathop{{\rm rad}}(\Gamma)$ is odd, then $\mathop{{\rm att}}(\Gamma)$ divides $\mathop{{\rm rad}}(\Gamma)$. Consequently, if $\mathop{{\rm att}}(\Gamma)$ is not divisible by $4$, then $\mathop{{\rm att}}(\Gamma)$ divides $\mathop{{\rm rad}}(\Gamma)$.
\end{theorem}
As a consequence of our second main result (Theorem~\ref{the:main}) we see that, in contrast to Theorem~\ref{the:AT}, there exist infinitely many arc-transitive tetravalent graphs $\Gamma$ admitting a half-arc-transitive group $G$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$. In fact, in Section~\ref{sec:HAT}, we characterise these graphs completely and prove the following theorem (see Section~\ref{subsec:Dart} for the definition of the dart graph).
\begin{theorem}
\label{the:main}
Let $\Gamma$ be a connected tetravalent graph. Then $\Gamma$ is $G$-half-arc-transitive for some $G \leq \mathrm{Aut}(\Gamma)$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ if and only if $\Gamma$ is the dart graph of some $2$-arc-transitive cubic graph.
\end{theorem}
The third main result of this paper, stemming from our analysis of the situation described by Theorem~\ref{the:main}, reveals a surprising connection to the theory of covering projections of graphs. This theory has become one of the central tools in the study of symmetries of graphs. A particularly
thrilling development started with the seminal work of Malni\v{c}, Nedela and \v{S}koviera \cite{MalNedSko} who analysed the condition under which a given automorphism group of the base graph lifts along the covering projection. Recently, the question of determining the structure of the lifted group received a lot of attention (see \cite{FenKutMalMar,MaPo16,MaPo??}).
To be more precise,
let $\wp \colon \tilde{\Gamma} \to \Gamma$ be a covering projection of connected graphs and let $\mathrm{CT}(\wp)$ be the corresponding group of covering transformations (see \cite{MalNedSko}, for example, for the definitions pertaining to the theory of graph covers).
Furthermore, let $G \leq \mathrm{Aut}(\Gamma)$ be a subgroup that lifts along $\wp$. Then the lifted group $\tilde{G}$ is an extension of $\mathrm{CT}(\wp)$ by $G$.
If this extension is split then the covering projection $\wp$ is called {\em $G$-split}. The most natural way in which this can occur is that there exists a complement $\bar{G}$ of $\mathrm{CT}(\wp)$ in
$\tilde{G}$ and a $\bar{G}$-invariant subset $S$ of $V(\tilde{\Gamma})$, that intersects each fibre of $\wp$ in exactly one vertex. In such a case we say that $S$ is a {\em section} for $\bar{G}$ and that $\bar{G}$ is a {\em sectional} complement of $\mathrm{CT}(\wp)$. Split covering projections without any sectional complement are called {\em non-sectional}. These turn out to be rather elusive and hard to analyse. To the best of our knowledge, the only known infinite family of non-sectional split covers was presented in~\cite[Section 4]{FenKutMalMar}. This family of non-sectional split covers involves cubic arc-transitive graphs of extremely large order.
In this paper we show that each connected tetravalent graph $\Gamma$ admitting a half-arc-transitive group $G$ of automorphisms such that $\mathop{{\rm att}}_G(\Gamma) = 2$ and $\mathop{{\rm rad}}_G(\Gamma) = 3$
is a $2$-fold cover of the line graph of a cubic $2$-arc-transitive graph, and that in the case when $\Gamma$ is not bipartite the corresponding covering projection is non-sectional.
This thus provides a new and rather simple infinite family of the somewhat mysterious case of non-sectional split covering projections (see Section~\ref{sec:ourcover} for more details).
\section{Half-arc-transitive group actions on graphs}
\label{sec:HAT}
In the next two paragraphs we briefly review some concepts and results pertaining to half-arc-transitive group actions on tetravalent graphs that we shall need in the remainder of this section. For more details see~\cite{Mar98}, where most of these notions were introduced.
A tetravalent graph $\Gamma$ admitting a {\em half-arc-transitive} (that is vertex- and edge- but not arc-transitive) group of automorphisms $G$ is said to be {\em $G$-half-arc-transitive}. The action of $G$ induces two paired orientations of the edges of $\Gamma$ and for any one of them each vertex of $\Gamma$ is the head of two and the tail of the other two of its incident edges. (The fact that the edge $uv$ is oriented from $u$ to $v$ will be denoted by $u \to v$.) A cycle of $\Gamma$ for which every two consecutive edges either have a common head or common tail with respect to this orientation is called a {\em $G$-alternating cycle}. Since the action of $G$ is vertex- and edge-transitive all of the $G$-alternating cycles have the same even length $2\mathop{{\rm rad}}_G(\Gamma)$ and any two non-disjoint $G$-alternating cycles intersect in the same number $\mathop{{\rm att}}_G(\Gamma)$ of vertices. These intersections, called the {\em $G$-attachment sets}, form an imprimitivity block system for the group $G$. The numbers $\mathop{{\rm rad}}_G(\Gamma)$ and $\mathop{{\rm att}}_G(\Gamma)$ are called the {\em $G$-radius} and {\em $G$-attachment number} of $\Gamma$, respectively. If $G = \mathrm{Aut}(\Gamma)$ we suppress the prefix and subscript $\mathrm{Aut}(\Gamma)$ in all of the above definitions.
It was shown in~\cite[Proposition~2.4]{Mar98} that a tetravalent $G$-half-arc-transitive graph $\Gamma$ has at least three $G$-alternating cycles unless $\mathop{{\rm att}}_G(\Gamma) = 2\mathop{{\rm rad}}_G(\Gamma)$ in which case $\Gamma$ is isomorphic to a particular Cayley graph of a cyclic group (and is thus arc-transitive). Moreover, in the case that $\Gamma$ has at least three $G$-alternating cycles, $\mathop{{\rm att}}_G(\Gamma) \leq \mathop{{\rm rad}}_G(\Gamma)$ holds and $\mathop{{\rm att}}_G(\Gamma)$ divides $2\mathop{{\rm rad}}_G(\Gamma)$. In addition, the restriction of the action of $G$ to any $G$-alternating cycle is isomorphic to the dihedral group of order $2\mathop{{\rm rad}}_G(\Gamma)$ (or to the Klein 4-group in the case of $\mathop{{\rm rad}}_G(\Gamma) = 2$) with the cyclic subgroup of order $\mathop{{\rm rad}}_G(\Gamma)$ being the subgroup generated by a two-step rotation of the $G$-alternating cycle in question. In addition, if $C = (v_0, v_1, \ldots , v_{2r-1})$ is a $G$-alternating cycle of $\Gamma$ with $r = \mathop{{\rm rad}}_G(\Gamma)$ and $C'$ is the other $G$-alternating cycle of $\Gamma$ containing $v_0$ then $C \cap C' = \{v_{i\ell} \colon 0 \leq i < a\}$ where $a = \mathop{{\rm att}}_G(\Gamma)$ and $\ell = 2r/a$ (see \cite[Proposition~2.6]{Mar98} and \cite[Proposition~3.4]{MarPra99}).
\medskip
As mentioned in the Introduction one of the goals of this paper is to characterize the tetravalent $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$. The bijective correspondence between such graphs and $2$-arc-transitive cubic graphs (see Theorem~\ref{the:main}) is given via two pairwise inverse constructions: the {\em graph of alternating cycles} construction and the {\em dart graph} construction. We first define the former.
\subsection{The graph of alternating cycles}
\label{subsec:Alt}
Let $\Gamma$ be a tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$. The {\em graph of $G$-alternating cycles} $\mathrm{Alt}_G(\Gamma)$ is the graph whose vertex set consists of all $G$-alternating cycles of $\Gamma$ with two of them being adjacent whenever they have at least one vertex in common. We record some basic properties of the graph $\mathrm{Alt}_G(\Gamma)$.
\begin{proposition}
\label{pro:gr_alt_cyc}
Let $\Gamma$ be a connected tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$ having at least three $G$-alternating cycles. Then the graph $\mathrm{Alt}_G(\Gamma)$ is a regular graph of valence $2\mathop{{\rm rad}}_G(\Gamma)/\mathop{{\rm att}}_G(\Gamma)$ and the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is vertex- and edge-transitive. Moreover, this action is arc-transitive if and only if $\mathop{{\rm att}}_{G}(\Gamma)$ does not divide $\mathop{{\rm rad}}_G(\Gamma)$.
\end{proposition}
\begin{proof}
To simplify notation, denote $r = \mathop{{\rm rad}}_G(\Gamma)$ and $a = \mathop{{\rm att}}_G(\Gamma)$. Since each vertex of $\Gamma$ lies on exactly two $G$-alternating cycles and the intersection of any two non-disjoint $G$-alternating cycles is of size $a$ it is clear that each $G$-alternating cycle is adjacent to $\ell = 2r/a$ other $G$-alternating cycles in $\mathrm{Alt}_G(\Gamma)$. Moreover, since $G$ acts edge-transitively on $\Gamma$ and each edge of $\Gamma$ is contained in a unique $G$-alternating cycle, the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is vertex-transitive. That this action is also edge-transitive follows from the fact that $G$ acts vertex-transitively on $\Gamma$ and that the edges of $\mathrm{Alt}_G(\Gamma)$ correspond to $G$-attachment sets of $\Gamma$.
For the rest of the proof fix one of the two paired orientations of $\Gamma$ given by the action of $G$, let $C = (v_0, v_1, \ldots , v_{2r-1})$ be a $G$-alternating cycle such that $v_0 \to v_1$ and let $C'$ be the other $G$-alternating cycle containing $v_0$, so that $C \cap C' = \{v_{i\ell}\colon 0 \leq i < a\}$. Since every other vertex of $C$ is the tail of the two edges of $C$ incident to it, the vertex $v_\ell$ is the tail of the two edges of $C$ incident to it if and only if $\ell$ is even (in which case each $v_{i\ell}$ has this property).
Now, if $\ell$ is odd, then each element of $G$, mapping $v_0$ to $v_\ell$ necessarily interchanges $C$ and $C'$, proving that in this case the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is in fact arc-transitive. We remark that this also follows from the fact, first observed by Tutte~\cite{Tutte66}, that a vertex- and edge-transitive group of automorphisms of a graph of odd valence is necessarily arc-transitive. To complete the proof we thus only need to show that the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is not arc-transitive when $\ell$ is even. Recall that in this case each vertex $v_{i\ell} \in C \cap C'$ is the tail of the two edges of $C$ incident to it. Therefore, since any element of $G$, mapping the pair $\{C, C'\}$ to itself of course preserves the intersection $C \cap C'$ it is clear that any such element fixes each of $C$ and $C'$ setwise, and so no element of $G$ can interchange $C$ and $C'$. This proves that the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is half-arc-transitive.
\end{proof}
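As a computational aside (not used in any of the proofs), the $G$-alternating cycles and the graph $\mathrm{Alt}_G(\Gamma)$ can be obtained from a chosen $G$-induced orientation by a simple traversal. The following Python sketch, assuming the orientation of a simple tetravalent graph is supplied as the sets of out-neighbours (every vertex having in- and out-valence $2$), illustrates this using the networkx package:
\begin{verbatim}
import networkx as nx

def alternating_cycles(out_nbrs):
    # out_nbrs[v]: the two out-neighbours of v in the chosen orientation.
    in_nbrs = {v: set() for v in out_nbrs}
    for u, outs in out_nbrs.items():
        for v in outs:
            in_nbrs[v].add(u)
    unused = {(u, v) for u, outs in out_nbrs.items() for v in outs}
    cycles = []
    while unused:
        start = (next(iter(unused)), True)      # True: pivot at the head
        edge, at_head = start
        verts = []
        while True:
            unused.discard(edge)
            verts.extend(edge)
            a, b = edge
            if at_head:   # the next edge shares the head b
                edge, at_head = (next(w for w in in_nbrs[b] if w != a), b), False
            else:         # the next edge shares the tail a
                edge, at_head = (a, next(x for x in out_nbrs[a] if x != b)), True
            if (edge, at_head) == start:
                break
        cycles.append(tuple(dict.fromkeys(verts)))   # vertices in cyclic order
    return cycles

def alt_graph(out_nbrs):
    # Graph of alternating cycles: adjacent iff they share a vertex.
    cycles = alternating_cycles(out_nbrs)
    A = nx.Graph()
    A.add_nodes_from(range(len(cycles)))
    A.add_edges_from((i, j) for i in range(len(cycles))
                     for j in range(i + 1, len(cycles))
                     if set(cycles[i]) & set(cycles[j]))
    return A
\end{verbatim}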
\subsection{The dart graph and its relation to $\mathrm{Alt}_G(\Gamma)$}
\label{subsec:Dart}
The dart graph of a cubic graph was investigated in~\cite{HilWil12} (we remark that this construction can also be viewed as a special kind of the {\em arc graph} construction from~\cite{GR01book}). Of course the dart graph construction can be applied to arbitrary graphs but here, as in~\cite{HilWil12}, we are only interested in dart graphs of cubic graphs. We first recall the definition. Let $\Lambda$ be a cubic graph. Then its {\em dart graph} $\mathop{{\rm Dart}}(\Lambda)$ is the graph whose vertex set consists of all the arcs (called darts in~\cite{HilWil12}) of $\Lambda$ with $(u,v)$ adjacent to $(u', v')$ if and only if either $u' = v$ but $u \neq v'$, or $u = v'$ but $u' \neq v$. In other words, the edges of $\mathop{{\rm Dart}}(\Lambda)$ correspond to the $2$-arcs of $\Lambda$. Note that this enables a natural orientation of the edges of $\mathop{{\rm Dart}}(\Lambda)$ where the edge $(u,v)(v,w)$ is oriented from $(u,v)$ to $(v,w)$.
Clearly, $\mathrm{Aut}(\Lambda)$ can be viewed as a subgroup of $\mathrm{Aut}(\mathop{{\rm Dart}}(\Lambda))$ preserving the natural orientation. Furthermore, the permutation $\tau$ of $V(\mathop{{\rm Dart}}(\Lambda))$, exchanging each $(u,v)$ with $(v,u)$, is an orientation reversing automorphism of $\mathop{{\rm Dart}}(\Lambda)$.
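For illustration only, the dart graph construction can be carried out in a few lines of Python with the networkx package (the choice of $K_4$, a $2$-arc-transitive cubic graph, as an example is ours):
\begin{verbatim}
import networkx as nx

def dart_graph(cubic):
    # Vertices are the darts (u, v) of the cubic graph; two darts are
    # adjacent precisely when they form a 2-arc.
    darts = [(u, v) for u, v in cubic.edges()] + [(v, u) for u, v in cubic.edges()]
    D = nx.Graph()
    D.add_nodes_from(darts)
    for u, v in darts:
        for w in cubic.neighbors(v):
            if w != u:
                D.add_edge((u, v), (v, w))
    return D

Lambda = nx.complete_graph(4)                    # cubic and 2-arc-transitive
Gamma = dart_graph(Lambda)
assert all(d == 4 for _, d in Gamma.degree())    # Dart(K_4) is tetravalent
\end{verbatim}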
\medskip
We now establish the correspondence between the $2$-arc-transitive cubic graphs and the tetravalent graphs admitting a half-arc-transitive group of automorphisms with the corresponding radius $3$ and attachment number $2$. We do this in two steps.
\begin{proposition}
\label{pro:Dart_to_Alt}
Let $\Lambda$ be a connected cubic graph admitting a $2$-arc-transitive group of automorphisms $G$ and let $\Gamma = \mathop{{\rm Dart}}(\Lambda)$. Then $\Gamma$ is a tetravalent $G$-half-arc-transitive graph such that $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ with $\mathrm{Alt}_G(\Gamma) \cong \Lambda$. Moreover, the natural orientation of $\Gamma$, viewed as $\mathop{{\rm Dart}}(\Lambda)$, coincides with one of the two paired orientations induced by the action of $G$.
\end{proposition}
\begin{proof}
That the natural action of $G$ on $\Gamma$ is half-arc-transitive can easily be verified (see also~\cite{HilWil12}). Now, fix an edge $(u,v)(v,w)$ of $\Gamma$ and choose the $G$-induced orientation of $\Gamma$ in such a way that $(u,v) \to (v,w)$. Since $G$ is $2$-arc-transitive on $\Lambda$, the other edge of $\Gamma$, for which $(u,v)$ is its tail, is $(u,v)(v,w')$, where $w'$ is the remaining neighbour of $v$ in $\Lambda$ (other than $u$ and $w$). It is now clear that for each pair of adjacent vertices $(x,y)$ and $(y,z)$ of $\Gamma$ the corresponding edge is oriented from $(x,y)$ to $(y,z)$, and so the chosen $G$-induced orientation of $\Gamma$ is the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$.
Finally, let $v$ be a vertex of $\Lambda$ and let $u,u',u''$ be its three neighbours. The $G$-alternating cycle of $\Gamma$ containing the edge $(u,v)(v,u')$ is then clearly $C_v = ((u,v),(v,u'),(u'',v),(v,u),(u',v),(v,u''))$, implying that $\mathop{{\rm rad}}_G(\Gamma) = 3$. This also shows that the $G$-alternating cycles of $\Gamma$ naturally correspond to vertices of $\Lambda$. Since the three $G$-alternating cycles of $\Gamma$ that have a nonempty intersection with $C_v$ are the ones corresponding to the vertices $u$, $u'$ and $u''$, this correspondence in fact shows that $\mathrm{Alt}_G(\Gamma)$ and $\Lambda$ are isomorphic and that $\mathop{{\rm att}}_G(\Gamma) = 2$.
\end{proof}
\begin{proposition}
\label{pro:Alt_to_Dart}
Let $\Gamma$ be a connected tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$, and let $\Lambda = \mathrm{Alt}_G(\Gamma)$. Then the group $G$ induces a $2$-arc-transitive action on $\Lambda$ and $\mathop{{\rm Dart}}(\Lambda) \cong \Gamma$. In fact, an isomorphism $\Psi\colon \mathop{{\rm Dart}}(\Lambda) \to \Gamma$ exists which maps the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to a $G$-induced orientation of $\Gamma$.
\end{proposition}
\begin{proof}
By Proposition~\ref{pro:gr_alt_cyc} the graph $\Lambda$ is cubic and the induced action of $G$ on it is arc-transitive. Since $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ it is easy to see that $\Gamma$ and $\mathop{{\rm Dart}}(\Lambda)$ are of the same order. Furthermore, let $C = (v_0, v_1, \ldots , v_5)$ be a $G$-alternating cycle of $\Gamma$ and $C', C'', C'''$ be the other $G$-alternating cycles of $\Gamma$ containing $v_0, v_1$ and $v_5$, respectively. Then $C \cap C' = \{v_0, v_3\}$, $C \cap C'' = \{v_1, v_4\}$ and $C \cap C''' = \{v_2, v_5\}$. It is thus clear that any element of $G$, fixing $v_0$ and mapping $v_1$ to $v_5$ (which exists since $C$ is $G$-alternating and $G$ is edge-transitive on $\Gamma$), fixes both $C$ and $C'$ but maps $C''$ to $C'''$. Therefore, the induced action of $G$ on $\Lambda$ is $2$-arc-transitive.
To complete the proof we exhibit a particular isomorphism $\Psi \colon \mathop{{\rm Dart}}(\Lambda) \to \Gamma$. Fix an orientation of the edges of $\Gamma$, induced by the action of $G$, and let $C$ and $C'$ be two $G$-alternating cycles of $\Gamma$ with a nonempty intersection. Then $(C,C')$ and $(C',C)$ are vertices of $\mathop{{\rm Dart}}(\Lambda)$. Let $C \cap C' = \{u,u'\}$ and observe that precisely one of $u$ and $u'$ is the head of both of the edges of $C$ incident to it. Without loss of generality assume it is $u$. Then of course $u'$ is the head of both of the edges of $C'$ incident to it. We then set $\Psi((C,C')) = u$ and $\Psi((C',C)) = u'$. Therefore, for non-disjoint $G$-alternating cycles $C$ and $C'$ of $\Gamma$ we map $(C,C')$ to the unique vertex in $C \cap C'$ which is the head of both of the edges of $C$ incident to it. Since each pair of non-disjoint $G$-alternating cycles meets in precisely two vertices and each vertex of $\Gamma$ belongs to two $G$-alternating cycles of $\Gamma$, this mapping is injective and thus also bijective. We now only need to show that it preserves adjacency and maps the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to the chosen $G$-induced orientation of $\Gamma$. To this end let $C$, $C'$ and $C''$ be three $G$-alternating cycles of $\Gamma$ such that $C$ has a nonempty intersection with both $C'$ and $C''$. Recall that then the edge $(C',C)(C,C'')$ is oriented from $(C',C)$ to $(C,C'')$ in the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$. Denote $C = (v_0,v_1, \ldots , v_5)$ and without loss of generality assume $C \cap C' = \{v_0,v_3\}$ and $C \cap C'' = \{v_1, v_4\}$.
Suppose first that $v_0 \to v_1$. Then $v_0$ is the head of both of the edges of $C'$ incident to it, and so $\Psi((C',C)) = v_0$. Similarly, $v_1$ is the head of both of the edges of $C$ incident to it, and so $\Psi((C,C'')) = v_1$. If on the other hand $v_1 \to v_0$, then $\Psi((C',C)) = v_3$ and $\Psi((C,C'')) = v_4$. In both cases, $\Psi$ maps the oriented edge $(C',C)(C,C'')$ to an oriented edge of $\Gamma$, proving that it is an isomorphism of graphs, mapping the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to the chosen $G$-induced orientation of $\Gamma$.
\end{proof}
Theorem~\ref{the:main} now follows directly from Propositions~\ref{pro:Dart_to_Alt} and \ref{pro:Alt_to_Dart}.
\section{Partial answer to Question~\ref{que:divides} and proof of Theorem~\ref{the:AT}}
\label{sec:AT}
In this section we prove Theorem~\ref{the:AT} giving a partial answer to Question~\ref{que:divides}. We first prove an auxiliary result.
\begin{proposition}
\label{pro:transversal}
Let $\Gamma$ be a tetravalent $G$-half-arc-transitive graph with $\mathop{{\rm att}}_G(\Gamma)$ even. Then for each vertex $v$ of $\Gamma$ and the two $G$-alternating cycles $C$ and $C'$, containing $v$, the antipodal vertex of $v$ on $C$ coincides with the antipodal vertex of $v$ on $C'$. Moreover, the involution $\tau$ interchanging each pair of antipodal vertices on all $G$-alternating cycles of $\Gamma$ is an automorphism of $\Gamma$ centralising $G$.
\end{proposition}
\begin{proof}
Denote $r = \mathop{{\rm rad}}_G(\Gamma)$ and $a = \mathop{{\rm att}}_G(\Gamma)$. Let $v$ be a vertex of $\Gamma$ and let $C$ and $C'$ be the two $G$-alternating cycles of $\Gamma$ containing $v$. Denote $C = (v_0, v_1, \ldots , v_{2r-1})$ with $v = v_0$. Recall that then $C \cap C' = \{v_{i\ell}\colon 0 \leq i < a\}$, where $\ell = 2r/a$. Since $a$ is even $v_r \in C \cap C'$. Now, take any element $g \in G_v$ interchanging $v_1$ with $v_{2r-1}$ as well as the other two neighbours of $v$ (which are of course neighbours of $v$ on $C'$). Then $g$ reflects both $C$ and $C'$ with respect to $v$. Since $v_r$ is antipodal to $v$ on $C$, it must be fixed by $g$, but since $v_r$ is also contained in $C'$, this implies that it is in fact also the antipodal vertex of $v$ on $C'$. This shows that for each $G$-alternating cycle $C$ and each vertex $v$ of $C$ the vertex $v$ and its antipodal counterpart on $C$ both belong to the same pair of $G$-alternating cycles (this implies that the $G$-transversals, as they were defined in~\cite{Mar98}, are of length $2$) and are also antipodal on the other $G$-alternating cycle containing them.
It is now clear that $\tau$ is a well defined involution on the vertex set of $\Gamma$. Since the antipodal vertex of a neighbor $v_1$ of $v = v_0$ on $C$ is the neighbor $v_{r+1}$ of the antipodal vertex $v_r$, it is clear that $\tau$ is in fact an automorphism of $\Gamma$. Since any element of $G$ maps $G$-alternating cycles to $G$-alternating cycles it is clear that $\tau$ centralises $G$.
\end{proof}
We are now ready to prove Theorem~\ref{the:AT}. Let $\Gamma$ be a tetravalent half-arc-transitive graph. Denote $r = \mathop{{\rm rad}}(\Gamma)$ and $a = \mathop{{\rm att}}(\Gamma)$, and assume $r$ is odd. Recall that $a$ divides $2r$. We thus only need to prove that $a$ is odd. Suppose to the contrary that $a$ is even; since $a$ divides $2r$ and $r$ is odd, this forces $a \equiv 2 \pmod{4}$. Then the graph $\Gamma$ admits the automorphism $\tau$ from Proposition~\ref{pro:transversal}. Now, fix one of the two paired orientations of the edges induced by the action of $\mathrm{Aut}(\Gamma)$ and let $C = (v_0, v_1, \ldots , v_{2r-1})$ be an alternating cycle of $\Gamma$ with $v_0$ being the tail of the edge $v_0 v_1$. Since $v_0^\tau = v_r$ and $v_1^\tau = v_{r+1}$ it follows that $v_r$ is the tail of the edge $v_rv_{r+1}$. But since $r$ is odd this contradicts the fact that every other vertex of $C$ is the tail of the two edges of $C$ incident to it. Thus $a$ is odd, as claimed.
To prove the second part of the theorem assume that $a$ is not divisible by $4$. If $r$ is even then the fact that $a$ divides $2r$ implies that $a$ divides $r$ as well. If however $r$ is odd, we can apply the first part of the theorem. This completes the proof.
\section{An infinite family of non-sectional split covers}
\label{sec:ourcover}
As announced in the introduction, tetravalent $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$
yield surprising examples of the elusive non-sectional split covers. In this section, we present this connection in some detail.
\begin{theorem}
\label{the:cover}
Let $\Gamma$ be a connected non-bipartite $G$-half-arc-transitive graph of order greater than $12$
with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$. Then there exists a $2$-fold covering projection $\wp \colon \Gamma \to \Gamma'$
and an arc-transitive group $H\le \mathrm{Aut}(\Gamma')$ which lifts along $\wp$ in such a way that $\Gamma$ is a non-sectional $H$-split cover of $\Gamma'$.
\end{theorem}
\begin{proof}
Since $\mathop{{\rm att}}_G(\Gamma)=2$, each $G$-attachment set consists of a pair of antipodal vertices on a $G$-alternating cycle of $\Gamma$.
Let $\mathcal{B}$ be the set of all $G$-attachment sets in $\Gamma$.
By Proposition~\ref{pro:transversal}, there exists an automorphism $\tau$ of $\Gamma$ centralising $G$,
which interchanges the two vertices in each element of $\mathcal{B}$. Let $\tilde{G} = \langle G, \tau\rangle$ and note that
$\tilde{G}$ acts transitively on the arcs of $\Gamma$. Since $\tau$ is an involution centralising $G$ and
not contained in $G$, we see that $\tilde{G} = G \times \langle \tau \rangle$.
Let $\Gamma'$ be the quotient graph with respect to the group $\langle \tau \rangle$, that is, the graph whose vertices are the orbits of $\langle \tau \rangle$
and with two such orbits adjacent whenever they are joined by an edge in $\Gamma$. Since $\tilde{G}$ is arc-transitive and $\langle \tau\rangle$ is normal in $\tilde{G}$, each $\langle \tau \rangle$-orbit is
an independent set. Moreover, if two $\langle \tau \rangle$-orbits $B$ and $C$ are adjacent in $\Gamma'$, then the induced subgraph $\Gamma[B\cup C]$
is clearly vertex- and arc-transitive and is thus either $K_{2,2}$ or $2K_2$. In the former case, it is easy to see that $\Gamma$ is isomorphic to
the lexicographic product of a cycle with the edge-less graph on two vertices. Since $\mathop{{\rm rad}}_G(\Gamma) = 3$ and the orbits of $\langle \tau\rangle$ coincide with the elements of $\mathcal{B}$, this implies that $\Gamma$
has only $6$ vertices, contradicting our assumption on the order of $\Gamma$. This contradiction implies that
$\Gamma[B\cup C] \cong 2K_2$ for any pair of adjacent $\langle \tau \rangle$-orbits $B$ and $C$, and hence the quotient projection
$\wp \colon \Gamma \to \Gamma'$ is a $2$-fold covering projection with $\langle \tau \rangle$ being its group of covering transformations.
Since $\tau$ normalises $G$, the group $\tilde{G}$ projects along $\wp$ and the quotient group
$H = \tilde{G}/ \langle \tau \rangle$ acts faithfully as an arc-transitive group of automorphisms on $\Gamma'$.
In particular, since the group of covering transformations $\langle \tau \rangle$ has a complement $G$ in $\tilde{G}$, the covering projection $\wp$ is
$H$-split.
By \cite[Proposition 3.3]{FenKutMalMar}, if $\wp$ had a sectional complement with respect to $H$, then $\Gamma$ would be a canonical double cover of $\Gamma'$, contradicting the assumption that $\Gamma$ is not bipartite.
\end{proof}
{\sc Remark.} In \cite[Proposition~9]{HilWil12} it was shown that a cubic graph $\Lambda$ is bipartite if and only if $\mathop{{\rm Dart}}(\Lambda)$ is bipartite. Since there exist infinitely many connected non-bipartite cubic $2$-arc-transitive graphs, Theorem~\ref{the:main} thus implies
that there are indeed infinitely many connected non-bipartite $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$.
In view of Theorem~\ref{the:cover}, these yield infinitely many non-sectional split covers, as announced in the introduction. Furthermore, note that
the $G$-alternating $6$-cycles in the graph $\Gamma$ appearing in the proof of the above theorem
project by $\wp$ to cycles of length $3$, implying that $\Gamma'$ is a tetravalent arc-transitive graph of girth $3$. Since
it is assumed that the order of $\Gamma$ is larger than $12$ (and thus the order of $\Gamma'$ is larger than $6$), we may now use
\cite[Theorem 5.1]{girth4} to conclude that $\Gamma'$ is isomorphic to the line graph of a $2$-arc-transitive cubic graph.
\bigskip
\noindent
{\bf Acknowledgment.} The first author was supported in part by Slovenian Research Agency, program P1-0294. The second author was supported in part by Slovenian Research Agency, program P1-0285 and projects N1-0038, J1-6720 and J1-7051.
\section{Introduction}
In the zero temperature limit, quantum fluids behave at the macroscopic scale as a single coherent quantum state, the superfluid \cite{DonnellyLivreVortices}. Compared to classical fluids, the quantum coherence of superfluids creates a strong additional constraint on the velocity field, namely to be irrotational. Rotational motion can only appear when the macroscopic coherence of the wave function is broken by topological defects called quantum vortices. In that case, the circulation of the velocity around the quantum vortex has a fixed value ($\kappa \simeq 10^{-7}$m$^2$s$^{-1}$ in $^4$He). Turbulence in superfluids can be thought of as an intricate process of distortion, reconnection and breaking of those topological singularities \cite{BarenghiSkrbekSreenivasan_IntroPNAS2014}, but in such a way that the system seems to mimic the classical turbulence at large scales \cite{spectra:PNAS2014}. This has been particularly obvious in the velocity spectra probed with a variety of anemometers, in highly turbulent flows \cite{Maurer1998,salort2010turbulent,Salort:EPL2012,rusaouen2017intermittency} or in the measurement of vortex bundles using parietal pressure probes \cite{Rusaouen:parietalEPL2017}. In some sense, quantum turbulence is an irreducible model, or to say it differently, is a kind of "skeleton" for all types of turbulence.
At finite temperature, the quantum fluid is not a pure superfluid: it behaves as if it experienced friction with a background viscous fluid, called the ``normal fluid''. The relative mass density of the superfluid $\rho_s/\rho$ (where $\rho$ is the total mass density) decreases from one at 0~K to zero at the superfluid transition temperature ($T_\lambda \simeq 2.18$~K in $^4$He). The presence of a finite normal fluid fraction allows for propagation of temperature waves - a property referred to as ``second sound"- which opens the rare opportunity to probe directly the presence of the quantum vortices \cite{DonnellyPhysicsToday1984}.
This is done in the present article, where the statistics of the superfluid vortex lines density $\mathcal{L}$ are locally measured by ``second sound tweezers'' (see the description in the section ``Probes''), over one and a half decades of inertial scales, and over a wide range of $\rho_s/\rho$ spanning from 0.16 to 0.81. Surprisingly, the result does not corroborate the widespread idea that the large scales of quantum turbulence reproduce those of classical turbulence: the measured spectra of $\mathcal{L}$ (see Fig. \ref{fig:spectres}) differ from classical-like enstrophy spectra \cite{baudet1996spatial,ishihara2003spectra}. Besides, they also differ from the only\footnote{
Literature also reports experimental \cite{Bradley:PRL2008} and numerical \cite{FujiyamaJLTP2010, BaggaleyPRB2011,Baggaley_VLD:PRL2012,BaggaleyPRL2015,tsepelin2017visualization} spectra of the vortex line density spatially integrated across the whole flow. Still, spectra of such ``integral'' quantities differ in nature from the spectra of local quantities, due to strong filtering effects of spatial fluctuations.}
previous direct measurement of $\mathcal{L}$ with second sound tweezers \cite{roche2007vortex} at $\rho_s/\rho \simeq 0.84$.
The measurement of the vortex lines density provides one of the very few constraints for the disputed modeling of the small scales of quantum turbulence. Even after intense numerical \cite{salort2011mesoscale,Baggaley_Coherentvortexstructures_EPL2012} and theoretical \cite{RocheInterpretation:EPL2008,Nemirovskii:PRB2012,boue2015energyVorticity} studies, the statistics of quantum vortices show that even the large scales of quantum flows can still be surprising.
\section{Experimental setup}
\begin{figure}
\begin{centering}
\includegraphics[height=10cm]{figure1.pdf}
\par\end{centering}
\caption{Sketch of the flow and the experimental setup with
probes. \label{fig:schema_toupie}}
\end{figure}
The experimental setup has been described in details in a previous publication
\cite{rusaouen2017intermittency},
and we only review in this section the major modifications. The setup
consists in a wind tunnel inside a cylindrical cryostat (see
Fig. \ref{fig:schema_toupie}) filled with He-II.
The flow is continuously
powered by a centrifugal pump located at the top of the tunnel. At
the bottom, an optimized 3D-printed conditioner ensures a smooth
entry of the fluid, without boundary layer detachment, inside a pipe of $\Phi=76$ mm inner diameter. Spin motion is broken by radial screens built in the conditioner. The fluid
is then ``cleaned'' again by a 5-cm-long and $3$-mm-cell honeycomb. The mean flow velocity $U$ is measured with a Pitot tube located $130$ mm upstream the pipe outlet. We allow a maximal mean velocity $U=1.3$ m/s inside the pipe to avoid any cavitation effect with the pump.
The main new element compared to the previous design
is a mono-planar grid located $177$ mm upstream the probes to generate
turbulence. The grid has a $M=17$ mm mesh with square bars of thickness $b=4$ mm, which gives a porosity of $\beta=(1-b/M)^{2}\approx0.58$.
The choice to position the probes at a distance $\sim 10M$ downstream the grid is the result of a compromise between the desire to have a ``large'' turbulence intensity, and the necessity to leave enough space for turbulence to develop between the grid and the probes. According to \cite{vita2018generating}, this distance is enough to avoid near-field effects of the grid. However, we emphasize that our main experimental results (Fig. \ref{fig:spectres}-\ref{fig:histogrammes}) do not depend on perfect turbulent isotropy and homogeneity. In-situ measurements of the mean vortex line density can be used to indirectly (via Eq. \ref{eq:tau}) give an estimation of the turbulence intensity $\tau=u^\mathrm{rms}/U \simeq 12-13\%$ (where $u^\mathrm{rms}$ is the standard deviation of longitudinal velocity component). We present the results later in Fig. \ref{fig:scalingU}.
For comparison, Vita et al. \cite{vita2018generating} report a turbulence intensity of around $\tau=9\%$ at $10M$ in a classical grid flow of similar porosity. The difference between both values of $\tau$ could originate from a prefactor uncertainty in Eq. (\ref{eq:tau}) or from differences in flow design (e.g. the absence of a contraction behind the honeycomb). This difference has no important consequences for the measurement of quantum vortex statistics.
The longitudinal integral length scale of the flow $H\simeq 5.0$~mm is assessed by fitting velocity spectra (see bottom panel of Fig. \ref{fig:spectres}) with the von K\'arm\'an formula (eg. see \cite{vita2018generating}). For comparison, the integral scale reported for the similar grid in \cite{vita2018generating}, once rescaled by the grid size, gives a nearby estimate of $7.4$ mm.
The Reynolds number $Re$ defined with $u^\mathrm{rms} H$ and the kinematic
viscosity $1.8\times10^{-8}$~m$^2$s$^{-1}$ of liquid He just above $T_\lambda$, is $Re=3.3\times10^4$ for $U=1$ m/s. Using standard homogeneous isotropic turbulence formula, the Taylor scale Reynolds number is $R_\lambda=\sqrt{15Re}\approx 700$ (for $\tau=12\%$ and $H=5$ mm). This gives an indication of turbulence intensity of the flow below $T_\lambda$.
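As a quick numerical check of these orders of magnitude (illustrative only, using the rounded values quoted above):
\begin{verbatim}
nu  = 1.8e-8      # kinematic viscosity of liquid He just above T_lambda [m^2/s]
U   = 1.0         # mean velocity [m/s]
tau = 0.12        # turbulence intensity u_rms / U
H   = 5.0e-3      # longitudinal integral scale [m]

Re       = tau * U * H / nu        # ~3.3e4
R_lambda = (15 * Re) ** 0.5        # ~7e2
\end{verbatim}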
Temperature of the helium bath is set via pressure regulation gates.
The exceptional thermal conductivity of He-II ensures an homogeneous
temperature inside the bath for $T<T_{\lambda}$. Two Cernox
thermometers, one located just above the pump, the other one on the
side of the pipe close to the probes, allow for direct monitoring
of $T$.
\section{Probes}
Our probes are micro-fabricated second sound tweezers
of the millimeter size according to the same principle as in \cite{roche2007vortex}.
As displayed in the inset of Fig. \ref{fig:probes}, the tweezers are composed
of one heating plate and one thermometer plate facing each other and
thus creating a resonant cavity for thermal waves. The heating
plate generates a stationary thermal wave of the order of $0.1$
mK between the plates, the amplitude of which can be recorded by the
thermometer plate. Two major improvements have been done compared
to the tweezers in \cite{roche2007vortex} : first, the length of
the arms supporting the plates has been increased to $14$
mm to avoid blockage effects due to the stack of silicon wafers (about 1.5 mm thick) downstream the cavity. Second,
two notches are done in the arms to avoid interference due to additional
reflections of the thermal wave on the arms. Further details will be given in a future publication.
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure2.png}
\par\end{centering}
\caption{Ring with probes. The inset is a zoom on the heating and the thermometer plates of a second sound tweezers. The Pitot tube is not used in the present experiment. \label{fig:probes}}
\end{figure}
In the presence of He flow, a variation of the amplitude and phase
of the thermal wave can be observed. This variation is due
to two main physical effects. The presence of quantum vortex lines
inside the cavity causes an attenuation of the wave \cite{DonnellyPhysicsToday1984,varga2019} with a very
minor phase shift \cite{miller1978velocity}. This attenuation can be very accurately modelled by
a bulk dissipation coefficient inside the cavity denoted $\xi_{L}$. The second effect is a ballistic advection
of the wave out of the cavity. It is related to both an attenuation of
the temperature oscillation and an important phase shift. Depending
on the flow mean velocity $U$, the size of the tweezers, and the
frequency of the wave, one of these two effects can overwhelm the
other. We have thus designed two models of tweezers: one model to
take advantage of the first effect to measure the vortex lines density (VLD), and the other one
to take advantage of the second effect to measure the velocity.
The two largest tweezers displayed in Fig. \ref{fig:probes} are designed to measure the quantum vortex
lines density. The plates size is $l=1$ mm and the gaps
between the plates are $D=1.32$ mm and $D=0.83$ mm respectively.
The plates face each other with positioning accuracy of a few micrometers.
The tweezers are oriented parallel to the flow (see Fig. \ref{fig:probes}, the mean flow is directed from top to bottom)
to minimize the effect of ballistic advection of the wave.
The smallest tweezers displayed in Fig. \ref{fig:probes} are designed to be mainly sensitive to the velocity fluctuations
parallel to the mean flow. The two plates have a size $l=250$ $\mu$m,
and are separated by a gap of $D=0.431$ mm. The tweezers are oriented
perpendicular to the mean flow (see Fig. \ref{fig:probes})
with an intentional lateral shift of the heater and the thermometer
of about $l/2$. This configuration is expected to maximize the sensitivity
to ballistic advection, and thus to velocity fluctuations. To second order however, the probe still keeps sensitivity to the quantum vortices produced both by turbulence and by the intense heating of the plates, that's why we were not able to calibrate it reliably. The (uncalibrated) spectrum of this probe (see bottom panel of Fig. \ref{fig:spectres}) is only used to estimate the integral length scale. The role of this probe is also to prove that the signal statistics of the largest tweezers are not due to velocity fluctuations.
\section{Method}
Figure \ref{fig:methode} displays a resonance of a large tweezers
at frequency $f_{0}=15.2$ kHz, for increasing values of the mean velocity. The temperature oscillation $T$ measured by the thermometer
is demodulated by a Lock-in amplifier NF LI5640. $T$ can be accurately fitted by a classical
Fabry-Perot formula
\begin{equation}
T=\frac{A}{\sinh\left(i\frac{2\pi(f-f_{0})D}{c_{2}}+\xi D\right)} \label{eq:FP formula}
\end{equation}
where $i^2=-1$, $f_{0}$ is the resonant frequency for which the wave locally reaches its maximal
amplitude, $c_{2}$ is the second sound velocity, $A$ is a parameter
to be fitted, and $\xi$ is related to the energy loss of
the wave in the cavity. The top panel of Fig. \ref{fig:methode}
displays the amplitude of the thermal wave (in mK) as a function
of the frequency, and the bottom panel shows the same signal in phase and quadrature. When the
frequency is swept, the signal follows a curve close to a circle crossing
the point of coordinates $(0,0)$. Fig. \ref{fig:methode} clearly shows that the
resonant peak shrinks more and more when $U$ increases, which is
interpreted as attenuation of the wave inside the cavity. The red points
display the attenuation of the signal at constant value of $f$. It can
be seen on the bottom panel that the variation of the signal is close
to a pure attenuation, that is, without phase shift.
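As an illustration of the fitting procedure, the amplitude of Eq. (\ref{eq:FP formula}) can be fitted to a frequency sweep with a standard least-squares routine; the sketch below uses synthetic data and assumed values of $D$ and $c_{2}$, and is not the actual acquisition code:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fp_amplitude(f, A, f0, xi, D=1.32e-3, c2=20.0):
    # |T(f)| from the Fabry-Perot formula; D (cavity gap, m) and c2
    # (second sound velocity, m/s) are fixed, assumed values.
    return np.abs(A / np.sinh(1j * 2 * np.pi * (f - f0) * D / c2 + xi * D))

freqs = np.linspace(14.8e3, 15.6e3, 200)                 # synthetic sweep (Hz)
amps = fp_amplitude(freqs, A=1e-5, f0=15.2e3, xi=60.0)   # synthetic amplitudes
(A_fit, f0_fit, xi_fit), _ = curve_fit(fp_amplitude, freqs, amps,
                                       p0=(1e-5, 15.2e3, 50.0))
\end{verbatim}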
\begin{figure}
\begin{centering}
\includegraphics[height=4.5cm]{figure3a.pdf}\\
\includegraphics[height=5cm]{figure3b.pdf}
\par\end{centering}
\caption{\textbf{Top: }second sound resonance of one of the large tweezers around $15.2$
kHz. The value of $U$ increases from top curve to bottom curve. The vertical axis gives the amplitude
of the thermal wave in K. \textbf{Bottom:} representation of the
same resonance in phase and quadrature.\label{fig:methode}}
\end{figure}
$\xi$ can be decomposed as
\begin{equation}
\xi=\xi_{0}+\xi_{L}\label{eq:xi_dec}
\end{equation}
where $\xi_{0}$ is the attenuation factor when $U=0$ m/s and $\xi_{L}$
is the additional attenuation created by the presence of quantum vortex
lines inside the cavity. $\xi_{L}$ is the signal of interest as
it can be directly related to the vortex lines density (VLD) using
the relation
\begin{eqnarray}
\xi_{L} & = & \frac{B\kappa L_{\perp}}{4c_{2}},\label{eq:VLD}\\
L_{\perp} & = & \frac{1}{\mathcal{V}}\int\sin^{2}\theta(l){\rm d}l\label{eq:VLD def}
\end{eqnarray}
where $B$ is the first Vinen coefficient, $\kappa\approx9.98\times10^{-8}$
m$^{2}$/s is the quantum of circulation, $\mathcal{V}$ is the cavity
volume, $l$ is the curvilinear abscissa along the vortex line, $\theta(l)$
is the angle between the vector tangent to the line and the direction
perpendicular to the plates. We note that the summation is weighted by the distribution of the second sound nodes and antinodes inside the cavity and does not exactly correspond to a uniform average, but we neglect this effect in the following. Our aim is to measure both the average value
and the fluctuations of $L_{\perp}$, as a function of $U$ and the superfluid fraction.
The method goes as follows: first, we choose a resonant frequency
$f_{0}$ where the amplitude of the signal has a local maximum and
we fix the frequency of the heating to this value $f_{0}$. Then we
vary the mean velocity $U$ and we record the response of the thermometer
plate in phase and quadrature. The measurements
show that the velocity-induced displacement in the complex plane follows a straight line in a direction
$\overrightarrow{e}$ approximately orthogonal to the resonant curve.
Expressions (\ref{eq:FP formula}-\ref{eq:xi_dec}) give $\xi_{L}$
from the measured amplitude $T$ by\cite{roche2007vortex}
\begin{equation}
\xi_{L}=\frac{1}{D}\,\mathrm{asinh}\left(\frac{A}{T}\right)-\xi_{0}.\label{eq:displacement}
\end{equation}
The colored dots of Fig. \ref{fig:fluctuations} illustrate the fluctuations of the signal
in phase and quadrature, for different values of $U$. The average signal
moves in the direction of the attenuation axis. The figure also shows
a part of the resonant curve for $U=0$. The fluctuations have two
components in the plane, both associated with different physical
phenomena. Fluctuations in the direction tangent to the resonant curve
can be interpreted as a variation of the acoustic path $\frac{2\pi(f-f_{0})D}{c_{2}}$
without attenuation of the wave. Those fluctuations can occur for example
because the two arms of the tweezers vibrate with submicron amplitude,
or because the temperature variations modify the second sound velocity
$c_{2}$. To isolate only the fluctuations associated to attenuation by
the quantum vortices, we split the signal into a component along the attenuation axis, and another one along the acoustic path axis.
We then convert the displacement along the attenuation axis into vortex line density (VLD) using
expressions (\ref{eq:VLD}-\ref{eq:displacement}).
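For illustration, the conversion from the demodulated amplitude to the VLD can be written in a few lines (a sketch only, not the actual analysis code; the values of $B$ and $c_{2}$ are temperature-dependent placeholders):
\begin{verbatim}
import numpy as np

def vortex_line_density(T_amp, A, xi0, D, B=1.0, c2=20.0, kappa=9.98e-8):
    # Extra attenuation xi_L from the measured amplitude (asinh relation),
    # then the projected VLD via xi_L = B * kappa * L_perp / (4 * c2).
    xi_L = np.arcsinh(A / T_amp) / D - xi0      # [1/m]
    return 4.0 * c2 * xi_L / (B * kappa)        # L_perp [1/m^2]
\end{verbatim}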
\begin{figure}
\begin{centering}
\includegraphics[height=7cm]{figure4.pdf}
\par\end{centering}
\caption{Fluctuations of the thermal wave in phase and quadrature. The colored
clouds show the fluctuations of the signal, for different values of
$U$. The blue curve shows the resonance for $U=0$ m/s. The fluctuations tangent
to the resonant curve are created by a variation of the acoustic path.
The quantum vortices are associated to attenuation of the wave and create
a displacement along the attenuation axis. \label{fig:fluctuations}}
\end{figure}
\section{Results}
As a check of the validity of our approach, we measured the average
response of the second sound tweezers as a function of the mean velocity
$U$. According to the literature\cite{Babuin:EPL2014}, we expected the scaling $\left\langle L_{\perp}\right\rangle^2 \propto U^{3}$, with a prefactor related to the main characteristics of the flow. The function $\left\langle L_{\perp}\right\rangle $ was thus
measured for a range $0.4<U<1.25$ m/s with a time averaging over
$300$ ms, at the three different temperatures $1.65$ K, $1.99$ K and $2.14$ K.
An effective superfluid viscosity $\nu_\mathrm{eff}$ is customarily defined in quantum turbulence by $\epsilon = \nu_\mathrm{eff} ( \kappa \mathcal{L} )^2$, where $\epsilon$ is the dissipation rate and $\mathcal{L}=3\left\langle L_{\perp}\right\rangle/2$ is the averaged VLD (we assume isotropy of the tangle)\cite{Vinen:JLTP2002}. For large-$R_\lambda$ homogeneous isotropic flows, we also have $\epsilon \simeq 0.79\, U^3\tau^3/H$ (e.g., see \cite{pope:book}, p.~245), which entails
\begin{equation}
\tau^3 \simeq 2.85 \frac{ \nu_\mathrm{eff} H \kappa^2\left\langle L_{\perp}\right\rangle^2}{U^3} \label{eq:tau}
\end{equation}
Using Eq. (\ref{eq:tau}), we compute the turbulence intensity as a function of $U$, for the three considered temperatures. The result is displayed in Fig. \ref{fig:scalingU}. The figure shows that the turbulence intensity reaches a plateau of about $12\%$ above $0.8$ m/s, a value consistent with the turbulence intensity of $9\%$ reported in \cite{vita2018generating} for grid turbulence with similar characteristics. The figure also confirms that the expected scaling $\left\langle L_{\perp}\right\rangle^2 \propto U^{3}$ is reached in our experiment for velocities $U>0.8$ m/s.
The temperature-dependent viscosity $\nu_\mathrm{eff}$ in Eq. (\ref{eq:tau}) has been measured in a number of experiments (e.g., see the compilations in \cite{Babuin:EPL2014,boue2015energyVorticity,gao2018dissipation}). Still, the uncertainty on its value exceeds a factor of 2. For the temperatures $1.65$ K and $1.99$ K, we used the average values $0.2 \kappa$ and $0.25\kappa$. In the absence of a reference experimental value of $\nu_\mathrm{eff}$ above $2.1$ K, we determined it by collapsing the $\tau (U)$ dataset obtained at $2.14$ K with the two others. We found the value $\nu_\mathrm{eff}\approx 0.5\kappa$ at $2.14$ K.
Assuming isotropy of the vortex tangle, the value of $\mathcal{L} $ gives
a direct order of magnitude of the inter-vortex spacing $\delta=1/\sqrt{\mathcal{L}}$. We find $\delta\approx5$
$\mu$m at 1.65 K and a mean velocity of 1 m/s. This shows the large scale separation between the inter-vortex spacing and the flow integral scale $H$, confirming an intense turbulent regime.
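The two estimates above (turbulence intensity from Eq. (\ref{eq:tau}) and inter-vortex spacing) are simple enough to be reproduced with a few lines of Python. The sketch below is illustrative only: the $\nu_\mathrm{eff}$ values and $H$ follow the text, while the mean VLD used as input is a placeholder of the right order of magnitude, not one of our measured datasets.
\begin{verbatim}
# Illustrative sketch of Eq. (tau) and of the inter-vortex spacing.
import numpy as np

kappa  = 9.98e-8                      # quantum of circulation [m^2/s]
H      = 5e-3                         # longitudinal integral scale [m]
nu_eff = {1.65: 0.20 * kappa, 1.99: 0.25 * kappa, 2.14: 0.50 * kappa}

def turbulence_intensity(L_perp_mean, U, T):
    """tau = u_rms / U from Eq. (tau)."""
    tau3 = 2.85 * nu_eff[T] * H * kappa**2 * L_perp_mean**2 / U**3
    return tau3 ** (1.0 / 3.0)

def intervortex_spacing(L_perp_mean):
    """delta = 1/sqrt(L) with L = 3<L_perp>/2 (isotropic tangle)."""
    return 1.0 / np.sqrt(1.5 * L_perp_mean)

L_mean = 2.7e10                       # placeholder <L_perp> [1/m^2]
print(turbulence_intensity(L_mean, U=1.0, T=1.65))   # about 0.13
print(intervortex_spacing(L_mean))                    # about 5e-6 m
\end{verbatim}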
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure5.pdf}
\caption{Indirect measurement of the turbulence intensity $\tau=u^{\rm{rms}}/U$ as a function of $U$ using Eq. (\ref{eq:tau}). The three different symbols correspond to three values of the mean temperature. \label{fig:scalingU}}
\par\end{centering}
\end{figure}
Fig. \ref{fig:spectres} presents the main result of this letter.
We display on the top panel the VLD power spectral density $P_L(f)$ of $L_\perp/\left\langle L_{\perp}\right\rangle$. With this definition, the VLD turbulence intensity $L_{\perp}^{\rm{rms}}/\left\langle L_{\perp} \right\rangle$ is directly given by the integral of $P_L(f)$. We have measured the VLD fluctuations at the temperatures $T=1.65$ K and
superfluid fraction $\rho_{S}/\rho=81\%$, $T=1.99$ K and $\rho_{S}/\rho=47\%$,
$T=2.14$ K and $\rho_{S}/\rho=16\%$. At each temperature, the measurement was done for
at least two different mean velocities.
The first striking result is the collapse of all the spectra,
independent of temperature, when properly rescaled
using $f/U$ as coordinate (and $P_L(f)\times U$ as power spectral density to keep the integral constant).
The VLD spectrum does not depend on the superfluid fraction,
even for vanishing superfluid fractions when $T$ comes very close to $T_{\lambda}$. Only one measurement with one of the large tweezers at $T=1.650$ K has given a slight deviation from the master curve of the VLD spectra: it is displayed as the thin grey curve in Fig. \ref{fig:spectres}. We have no explanation for this deviation, but we did not observe this particular spectrum with the second tweezers, nor at any other temperature.
Second, the VLD spectrum has no characteristic
power-law decay. We only observe that the spectrum follows an exponential decay above $f/U\approx100$ m$^{-1}$. This strongly contrasts with the velocity spectrum obtained with the small second sound tweezers anemometer (see bottom panel),
which displays all the major features expected for a velocity
spectrum in classical turbulence: it has a sharp transition from a plateau at large scale to a power-law scaling close
to $-5/3$ in the inertial scales of the turbulent cascade. Actually, it can be seen that the spectral decrease is less steep than $-5/3$, which can be due either to imperfect isotropy and homogeneity or, more likely, to second-order corrections of the signal in addition to its dependence on velocity fluctuations. A fit of the transition using the von K\'arm\'an expression (see \cite{vita2018generating}) gives the value $H=5$ mm for the longitudinal integral scale. As a side remark, the apparent cut-off above $10^3$ m$^{-1}$ is an instrumental frequency cut-off of the tweezers.
We find a value of the VLD turbulence intensity close to 20\%,
which is significantly higher than the velocity turbulence intensity. We also checked that we obtain the same VLD spectrum using different
resonant frequencies $f_{0}$.
Our measurements are limited by two characteristic frequencies. First,
the tweezers average the VLD over a cube of side $l$, which means
that we cannot resolve scales beyond $f/U=1/l$. For the large tweezers,
this sets a cut-off scale of $10^{3}$ m$^{-1}$, much larger than
the range of inertial scales presented in the top panel of Fig. \ref{fig:spectres}.
Second, the frequency bandwidth of the resonator decreases when the
quality factor of the second sound resonance increases. This again sets a cut-off scale given by
$f/U=\xi_{0}c_{2}/(2U)$. The worst configuration corresponds to the
data obtained at 2.14 K and $U=1.2$ m/s where the cut-off scale is about
$600$ m$^{-1}$. For this reason, the VLD spectra of
Fig. \ref{fig:spectres} are conservatively restricted to $f/U<300$ m$^{-1}$, which allows us to resolve about one and a half decades of inertial scales.
Figure \ref{fig:histogrammes} displays some typical PDF of the rescaled
VLD fluctuations $L_{\perp}/\left\langle L_{\perp} \right\rangle$ in semilogarithmic scale, for the three considered
temperatures. The PDF have been vertically shifted by one decade from each other
for readability. The figure shows a strong asymmetry at all temperatures,
with a nearly Gaussian left wing, and an exponential right wing. Contrary to the VLD spectra, the PDF do not accurately collapse on a single master curve at different velocities and temperatures: yet, they remain very similar when the temperature and the mean velocity are changed, and their strongly asymmetric shape seems to be a robust feature.
\begin{figure}
\begin{centering}
\includegraphics[height=9cm]{figure6.pdf}\\
\par\end{centering}
\caption{\textbf{Top:} Power spectral density of the projected vortex line density (VLD) $L_{\perp}$, obtained
with the large second sound tweezers, for different values of $U$
and temperatures. All measured spectra collapse using the scaling
$f/U$ and $P_L(f)\times U$. The fluctuations have been rescaled by the mean
value of the VLD such that the integral of the above curves directly
gives the VLD turbulence intensity.
\textbf{Bottom:} Power spectral density of the uncalibrated velocity signal obtained from the second
sound tweezers anemometer, for two values of $U$ at 1.65 K.
The spectra collapse using the scaling $f/U$ for the frequency
and $P_U(f)/U$ for the spectral density. The straight line displays
the $-5/3$ slope which is expected for a classical velocity spectrum
in the inertial range of the turbulent cascade. The dotted line is a fit using the von K\'arm\'an expression (see \cite{vita2018generating}) to find the integral scale $H$.
\label{fig:spectres}}
\end{figure}
By contrast, the dotted curve in Fig. \ref{fig:histogrammes} displays one PDF of the small tweezers anemometer at $1.65$ K, for which the mean has been shifted and the variance rescaled. It can be seen that the general shape of this latter PDF is much more symmetric and closer to a Gaussian as expected for a PDF of velocity fluctuations.
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure7.pdf}
\par\end{centering}
\caption{Normalized probability distributions of the VLD fluctuations obtained
at three temperatures. The PDF have been shifted by one decade from
each other for readability. By comparison, the dotted black curve displays a rescaled PDF obtained with the small tweezers measuring velocity. \label{fig:histogrammes}}
\end{figure}
\section{Discussion and conclusion}
In the present paper, we have investigated the temperature dependence of the statistics of the local density of vortex lines (VLD) in quantum turbulence. About one and a half decades of inertial scales of the turbulent cascade were resolved. We measure the VLD mean value and deduce from Eq. (\ref{eq:tau}) the turbulence intensity (Fig. \ref{fig:scalingU}), we report the VLD power spectrum (Fig. \ref{fig:spectres}), and the VLD probability distribution (Fig. \ref{fig:histogrammes}). Whereas the VLD mean value at different temperatures confirms previous numerical \cite{salort2011mesoscale,Babuin:EPL2014} and experimental studies \cite{Babuin:EPL2014}, the spectral and PDF studies are completely new. Only one measurement of the VLD fluctuations had been done previously, around 1.6 K \cite{roche2007vortex}, but in a wind tunnel with a very specific geometry and uncontrolled turbulence production. In the present work, we have used grid turbulence, which is recognized as a reference flow with well-documented turbulence characteristics.
To conclude, we discuss below the three main findings:
\begin{enumerate}
\item A master curve of the VLD spectra, independent of temperature and mean velocity.
\item The observed master curve does not correspond to previously reported spectra in the context of highly turbulent classical flows.
\item A globally invariant shape of the strongly skewed PDF.
\end{enumerate}
The mean VLD gives the inter-vortex spacing, and thus tells how many quantum vortices are created in the flow, whereas the PDF and spectra tell how those vortices are organized in the flow. From 2.14 K to 1.65 K, our results confirm that the inter-vortex spacing only weakly decreases, by less than 23\% for a five-fold increase of the superfluid fraction. In other words, the superfluid fraction has a limited effect on the creation of quantum vortices. The current understanding of homogeneous isotropic turbulence in He-II is that the superfluid and normal fluid are locked together at large and intermediate scales, where they undergo a classical Kolmogorov cascade \cite{spectra:PNAS2014}. The experimental evidence is based on the observation of classical velocity statistics
using anemometers measuring the barycentric velocity of the normal and superfluid components. Here, the temperature-independence of the (normalized) VLD spectra supports this general picture, reminiscent of a similar property of He-II velocity spectra.
In contrast to velocity, the observed VLD master curve has an unexpected shape in the inertial range, at odds with the spectra reported as ``compatible with'' a $f^{-5/3}$ scaling in \cite{roche2007vortex}. The probe is sensitive to the total amount of vorticity at scales smaller than the probe spatial resolution, and thus keeps track of the small-scale fluctuations. A close classical counterpart of VLD is enstrophy, because its 1-D spectrum is also related to the velocity spectrum at smaller scales (e.g., see \cite{antonia1996note}). However, the experimental \cite{baudet1996spatial} and numerical (e.g., \cite{ishihara2003spectra}) enstrophy spectra reported so far in three-dimensional classical turbulence strongly differ from the present VLD spectra. We have no definite explanation for this difference. It could originate from remanent quantum vortices pinned on the grid, which would cause additional energy injection in the inertial range, in which case the peculiarity of our spectra would be specific to the type of forcing. Otherwise, it could be a more fundamental property associated with the microscopic structure of the vortex tangle that, together with the observed temperature-independence of the spectra, would strongly constrain the development of mathematical closures for the continuous description of He-II (e.g., see \cite{nemirovskii2020closure}).
Regarding the third statement,
we compare the PDF with those of numerical simulations of classical turbulence. The absolute value of vorticity can be seen as a classical counterpart to the VLD. For example, the work of Iyer and co-workers \cite{yeung2015extreme} displays enstrophy PDF from high-resolution DNS that can be compared to the PDF of Fig. \ref{fig:histogrammes}.
At small scale, the enstrophy PDF are strongly asymmetric and ultimately converge to a Gaussian distribution when averaged over larger and larger scales. Although our tweezers average the VLD over a size much larger than the inter-vortex spacing, they are small enough to sense short-lived intense vortical events, typical of small-scale phenomenology in classical turbulence. Thus, the strong asymmetry of the PDF supports the analogy between VLD and enstrophy (or its square root) and shows the relevance of VLD statistics to explore the small scales of quantum turbulence.
A side result of the present work is to obtain the relative values of
the empirical coefficient $\nu_\mathrm{eff}=\epsilon(\kappa \mathcal{L})^{-2}$ at the three considered temperatures. Models and simulations predict that $\nu_\mathrm{eff}$ should steeply increase close to $T_\lambda$ (see \cite{Babuin:EPL2014,boue2015energyVorticity,gao2018dissipation} and references therein),
in apparent contradiction with the only systematic experimental exploration \cite{stalp2002}. We found in Fig. \ref{fig:scalingU} that the effective viscosity $\nu_\mathrm{eff}$ is twice as large at 2.14 K as at 1.99 K.
To the best of our knowledge, our estimate $\nu_\mathrm{eff}(2.14\,\mathrm{K}) \simeq 2\,(\pm 0.25)\times \nu_\mathrm{eff}(1.99\,\mathrm{K})$ is the first experimental hint of such an effective viscosity increase.
\acknowledgments
We warmly thank B. Chabaud for support in upgrading the wind-tunnel and P. Diribarne, E. Lévêque and B. Hébral for their comments.
We thank K. Iyer and his co-authors for sharing data on the statistics of spatially averaged enstrophy analyzed in \cite{iyerNJP2019}.
We acknowledge financial support from grants ANR-16-CE30-0016 (Ecouturb) and ANR-18-CE46-0013 (QUTE-HPC).
\bibliographystyle{eplbib}
\section{Introduction}
\label{sec:introduction}
We study {\sc greedy}\xspace routing over uni-dimensional metrics\footnote{The
principles of this work can be extended to higher dimensional
spaces. We focus on one-dimension for simplicity.} defined over
$n$ nodes lying in a ring. {\sc greedy}\xspace routing is the strategy of
forwarding a message along the out-going edge that minimizes the
{\it distance} remaining to the destination:
\begin{mydefinition}{Greedy Routing}
In a graph $(V,E)$ with a given distance function $\delta: V \times
V \rightarrow \mathcal{R}^+$, {\sc greedy}\xspace routing entails the following
decision: Given a target node $t$, a node $u$ with neighbors $N(u)$
forwards a message to its neighbor $v \in N(u)$ such that
$\delta(v,t) = \min_{x \in N(u)} \delta(x,t)$.
\end{mydefinition}
\noindent
Two {\it natural} distance metrics over $n$ nodes placed in a circle
are the clockwise-distance and the absolute-distance between pairs of
nodes:
\begin{eqnarray*}
\delta_{clockwise}(u, v) & = &
\begin{cases}
v-u & v\geq u\\
n+v-u & \text{otherwise}
\end{cases}\\
\delta_{absolute}(u, v) & = &
\begin{cases}
\min \{ v-u, n+u-v \} & v\geq u\\
\min \{ u-v, n+v-u \} & \text{otherwise}
\end{cases}
\end{eqnarray*}
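\noindent
To fix ideas, both metrics and a single {\sc greedy}\xspace forwarding step can be written in a few lines of Python. This is a minimal sketch: node labels are integers in $[0, n-1]$ and \texttt{neighbors} is an assumed adjacency oracle, not one of the constructions defined later in the paper.
\begin{verbatim}
# Minimal sketch of the two ring metrics and one greedy forwarding step.
def delta_clockwise(u, v, n):
    return (v - u) % n

def delta_absolute(u, v, n):
    d = (v - u) % n
    return min(d, n - d)

def greedy_step(u, t, neighbors, delta, n):
    """Forward to the out-neighbor of u that minimizes the distance to t."""
    return min(neighbors(u), key=lambda v: delta(v, t, n))
\end{verbatim}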
\noindent
In this paper, we study the following related problems for the
above distance metrics:
\begin{center}
\begin{minipage}{0.95\textwidth} \it
\squishlist
\item[I.] Given integers $d$ and $\Delta$, what is the largest
graph that satisfies two constraints: the out-degree of any node
is at most $d$, and the length of the longest {\sc greedy}\xspace route is
at most $\Delta$ hops?
\item[II.] Given integers $d$ and $n$, design a network in which
each node has out-degree at most $d$ such that the length of the
longest {\sc greedy}\xspace route is minimized.
\squishend
\end{minipage}
\end{center}
\subsection*{Summary of results}
\begin{enumerate}
\item We construct a family of network topologies, the {\em
Papillon\xspace}\footnote{Our constructions are variants of the
well-known butterfly family, hence the name Papillon\xspace.}, in which
{\sc greedy} routes are asymptotically optimal. For both
$\delta_{clockwise}$ and $\delta_{absolute}$, Papillon\xspace has
{\sc greedy}\xspace routes of length $\Delta = \Theta(\log n / \log d)$ hops
in the worst-case when each node has $d$ out-going links.
Papillon\xspace is the first construction that achieves asymptotically
optimal worst-case {\sc greedy}\xspace routes.
\item Upon further investigation, two properties of Papillon\xspace emerge:
      (a) {\sc greedy}\xspace routing does not send messages along shortest paths,
      and (b) edge congestion with {\sc greedy}\xspace routing is not uniform --
some edges are used more often than others. We exhibit the first
property by identifying routing strategies that result in paths
shorter than those achieved by {\sc greedy} routing. In fact,
one of these strategies guarantees uniform edge-congestion.
\item Finally, we consider another distance function
$\delta_{xor}(u, v)$, defined as the number of bit-positions in
which $u$ and $v$ differ. $\delta_{xor}$ occurs naturally, e.g., in
hypercubes, and {\sc greedy}\xspace routing with $\delta_{xor}$ routes along
shortest paths in them. We construct a variant of Papillon\xspace that
supports asymptotically optimal routes of length $\Theta(\log n /
\log d)$ in the worst-case, for {\sc greedy}\xspace routing with distance
function $\delta_{xor}$.
\end{enumerate}
\section{Related Work}
\label{sec:related}
{\sc greedy}\xspace routing is a fundamental strategy in network theory. It
enjoys numerous advantages. It is completely decentralized, in that
any node takes routing decisions locally and independently. It is
oblivious, thus message headers need not be written along the
route. It is inherently fault tolerant, as progress toward the target
is guaranteed so long as some links are available. And it has good
locality behavior in that every step decreases the distance to the
target. Finally, it is simple to implement, yielding robust
deployments. For these reasons, {\sc greedy} routing has long
attracted attention in the research of network design. Recently,
{\sc greedy}\xspace routing has witnessed increased research interest in the
context of decentralized networks. Such networks arise in modeling
social networks that exhibit the ``small world phenomenon'', and in
the design of overlay networks for peer-to-peer (P2P) systems. We now
summarize known results pertaining to {\sc greedy}\xspace routing on a circle.
\subsection*{The Role of the Distance Function}
Efficient graph constructions are known that support {\sc greedy}\xspace routing
with distance functions other than $\delta_{clockwise}$,
$\delta_{absolute}$ and $\delta_{xor}$.
For de Bruijn networks, the traditional
routing algorithm (which routes almost always along shortest paths)
corresponds to {\sc greedy}\xspace routing with $\delta(u, v)$ defined as the
longest suffix of $u$ that is also the prefix of $v$. For a 2D grid,
shortest paths correspond to {\sc greedy}\xspace routing with $\delta(u, v)$
defined as the Manhattan distance between nodes $u$ and $v$.
For {\sc greedy}\xspace routing on a circle, the best-known constructions have $d
= \Theta(\log n)$ and $\Delta = \Theta(\log n)$. Examples include:
Chord~\cite{chord:sigcomm01} with distance-function
$\delta_{clockwise}$, a variant of Chord with ``bidirectional
links''~\cite{ganesan:soda04} and distance-function
$\delta_{absolute}$, and the hypercube with distance function
$\delta_{xor}$. In this paper, we improve upon all of these
constructions by showing how to route in
$\Theta(\log n / \log d)$ hops in the worst case with
$d$ links per node.
\subsection*{{\sc greedy}\xspace Routing in Deterministic Graphs}
The \textsf{Degree-Diameter Problem}, studied in extremal graph
theory, seeks to identify the largest graph with diameter $\Delta$,
with each node having out-degree at most $d$ (see Delorme~\cite{ddp}
for a survey). The best constructions for large $\Delta$ tend to be
sophisticated~\cite{bermond:92,comellas:92,exoo:01}. A well-known
upper bound is $N(d, \Delta) = 1 + d + d^2 + \cdots + d^\Delta =
\frac{d^{\Delta+1} - 1}{d-1}$, also known as the Moore bound. A
general lower bound is $d^\Delta + d^{\Delta-1}$, achieved by Kautz
digraphs~\cite{kautz:68,kautz:69}, which are slightly superior to de
Bruijn graphs~\cite{debruijn:46} whose size is only
$d^\Delta$. Thus it is possible to route
in $O(\log n / \log d)$ hops in the worst-case with $d$ out-going
links per node. Whether {\sc greedy}\xspace routes with distance functions
$\delta_{clockwise}$ or $\delta_{absolute}$ can achieve the same
bound is the question we have addressed in this paper.
{\sc greedy}\xspace routing with distance function $\delta_{absolute}$ has been
studied for Chord~\cite{ganesan:soda04}, a popular topology for P2P
networks. Chord has $2^b$ nodes, with out-degree $2b-1$ per node.
The longest {\sc greedy}\xspace route takes $\Floor{b/2}$ hops. In terms of $d$
and $\Delta$, the largest-sized Chord network has $n = 2^{2\Delta +
1}$ nodes. Moreover, $d$ and $\Delta$ cannot be chosen independently
-- they are functionally related. Both $d$ and $\Delta$ are
$\Theta(\log n)$. Analysis of {\sc greedy}\xspace routing of Chord leaves open
the following question:
\smallskip
\centerline{\it For {\sc greedy}\xspace routing on a circle, is
$\Delta = \Omega(\log n)$ when $d = O(\log n)$?}
\smallskip
Xu {\it et al.}\xspace~\cite{xu:infocom03} provide a partial answer to the above question
by studying {\sc greedy}\xspace routing with distance function
$\delta_{clockwise}$ over \emph{uniform} graph topologies. A graph
over $n$ nodes placed in a circle is said to be uniform if the set of
clockwise offsets of out-going links is identical for all
nodes. Chord is an example of a uniform graph. Xu {\it et al.}\xspace show that
for any uniform graph with $O(\log n)$ links per node, {\sc greedy}\xspace
routing with distance function $\delta_{clockwise}$ necessitates
$\Omega(\log n)$ hops in the worst-case.
Cordasco {\it et al.}\xspace~\cite{fchord:sirocco04} extend the result of Xu
{\it et al.}\xspace~\cite{xu:infocom03} by showing that {\sc greedy}\xspace routing with
distance function $\delta_{clockwise}$ in a uniform graph over $n$
nodes satisfies the inequality $n \leq F(d + \Delta + 1)$, where $d$
denotes the out-degree of each node, $\Delta$ is the length of the
longest {\sc greedy}\xspace path, and $F(k)$ denotes the $k^{th}$ Fibonacci
number. It is well-known that $F(k) = [\phi^k / \sqrt{5}]$, where
$\phi = 1.618\ldots$ is the Golden ratio and $[x]$ denotes the
integer closest to real number $x$. It follows that $1.44 \log_2 n
\leq d + \Delta + 1$. Cordasco {\it et al.}\xspace show that the inequality is
strict if $|d - \Delta| > 1$. For $|d - \Delta| \leq 1$, they
construct uniform graphs based upon Fibonacci numbers which achieve
an optimal tradeoff between $d$ and $\Delta$.
\medskip
The results in~\cite{ganesan:soda04,xu:infocom03,fchord:sirocco04}
leave open the question whether there exists any graph construction
that permits {\sc greedy}\xspace routes of length $\Theta(\log n / \log d)$ with
distance function $\delta_{clockwise}$ and/or $\delta_{absolute}$.
Papillon\xspace provides an answer to the problem by constructing a
non-uniform graph --- the set of clockwise offsets of out-going links
is different for different nodes.
\subsection*{{\sc greedy}\xspace Routing in Randomized Graphs}
{\sc greedy}\xspace routing over nodes arranged in a ring with distance
function $\delta_{clockwise}$ has recently been studied for certain
classes of {\it randomized} graph constructions. Such graphs arise in
modeling social networks that exhibit the ``small world phenomenon'',
and in the design of overlay networks for P2P systems.
In the seminal work of Kleinberg \cite{kleinberg:stoc00}, a
randomized graph was constructed in order to explain the ``small
world phenomenon'', first identified by
Milgram~\cite{milgram:pt67}. The phenomenon refers to the observation
that individuals are able to route letters to unknown targets on the
basis of knowing only their immediate social contacts. Kleinberg
considers a set of nodes on a uniform two-dimensional grid. He
proposes a link model in which each node is connected to its
immediate grid neighbors, and in addition, has a single long range
link drawn from a normalized harmonic distribution with power $2$.
In the resulting graph, {\sc greedy}\xspace routes have length at most $O(\log^2
n)$ hops in expectation; this complexity was later shown to be tight
by Barri{\`e}re {\it et al.}\xspace in \cite{barriere:disc01}.
Kleinberg's construction has found applications in the design of
overlay routing networks for Distributed Hash Tables.
Symphony~\cite{symphony:usits03} is an adaptation of Kleinberg's
construction in a single dimension. The idea is to place $n$ nodes
in a virtual circle and to equip each node with $d \geq 1$ out-going
links. In the resulting network, the average path length of {\sc greedy}\xspace
routes with distance function $\delta_{clockwise}$ is
$O(\frac{1}{d}\log^2 n)$ hops. Note that unlike Kleinberg's network,
the space here is virtual and so are the distances and the sense of
{\sc greedy}\xspace routing. The same complexity was achieved with a slightly
different Kleinberg-style construction by Aspnes
{\it et al.}\xspace~\cite{aspnes:podc02}. In the same paper, it was also shown
that any symmetric, randomized degree-$d$ network has
$\Omega(\frac{\log^2 n}{d\log\log n})$ {\sc greedy}\xspace routing complexity.
Papillon outperforms all of the above randomized constructions, using
degree $d$ and achieving $\Theta(\log n/\log d)$ routing. It should
be possible to randomize Papillon along similar principles to the
Viceroy\cite{viceroy:podc02} randomized construction of the butterfly
network, though we do not pursue this direction here.
\subsection*{Summary of Known Results}
With $\Theta(\log n)$ out-going links per node, several graphs over
$n$ nodes in a circle support {\sc greedy}\xspace routes with $\Theta(\log n)$
{\sc greedy} hops. Deterministic graphs with this property include:
(a) the original Chord~\cite{chord:sigcomm01} topology with distance
function $\delta_{clockwise}$, (b) Chord with edges treated as
bidirectional~\cite{ganesan:soda04} with distance function
$\delta_{absolute}$. This is also the known lower bound on any
uniform graph with distance function $\delta_{clockwise}$
\cite{xu:infocom03}. Randomized graphs with the same tradeoff
include randomized-Chord~\cite{gummadi:sigcomm03,zhang:sigmetrics03}
and Symphony~\cite{symphony:usits03} -- both with distance function
$\delta_{clockwise}$. With degree $d \le \log n$, Symphony
\cite{symphony:usits03} has {\sc greedy}\xspace routes of length $\Theta((\log^2
n)/ d)$ on average. The network of \cite{aspnes:podc02} also
supports {\sc greedy}\xspace routes of length $O((\log^2 n)/d)$ on average, with
a gap to the known lower bound on their network of
$\Omega(\frac{\log^2 n}{d\log\log n})$.
The above results are somewhat discouraging, because routing that is
\textbf{non}-{\sc greedy} can achieve much better results. In
particular, networks of degree $2$ with hop complexity $O(\log n)$
are well known, e.g., the Butterfly and the de Bruijn (see for
example \cite{leighton:92} for exposition material). And networks of
logarithmic degree can achieve $O(\log n/ \log\log n)$ routing
complexity (e.g., take the degree-$\log_2 n$ de Bruijn). Routing in
these networks is non-{\sc greedy} according to any one of our
metrics ($\delta_{clockwise}$, $\delta_{absolute}$, and
$\delta_{xor}$).
The Papillon\xspace\ construction demonstrates that we can indeed design
networks in which {\sc greedy} routing along these metrics has
asymptotically optimal routing complexity. Our contribution is a
family of networks that extends the Butterfly network family, so as
to facilitate efficient {\sc greedy} routing. With $d$ links per
node, {\sc greedy} routes are $\Theta(\log n/\log d)$ in the
worst-case, which is asymptotically optimal. For $d = o(\log n)$,
this beats the lower bound of \cite{aspnes:podc02} on symmetric,
randomized greedy routing networks (and it meets it for $d=O(\log
n$). In the specific case of $d=\log n$, our greedy routing achieves
$O(\log n/\log \log n)$ average route length.
\subsection*{{\sc greedy}\xspace with {\sc lookahead}\xspace}
Recent work~\cite{manku:stoc04} explores the surprising advantages of
{\sc greedy}\xspace with {\sc lookahead}\xspace in randomized graphs over $n$ nodes in a
circle. The idea behind {\sc lookahead}\xspace is to take neighbors' neighbors
into account to make routing decisions. That work shows that {\sc greedy}\xspace with
{\sc lookahead}\xspace achieves $O(\log^2 n/ d \log d)$ expected route length in
Symphony~\cite{symphony:usits03}. For other networks which have
$\Theta(\log n)$ out-going links per node, e.g.,
randomized-Chord~\cite{gummadi:sigcomm03,zhang:sigmetrics03},
randomized-hypercubes~\cite{gummadi:sigcomm03},
skip-graphs~\cite{aspnes:soda03} and SkipNet~\cite{skipnet:usits03},
average path length is $\Theta(\log n / \log \log n)$ hops. Among
these networks, Symphony and randomized-Chord use {\sc greedy}\xspace routing with
distance function $\delta_{clockwise}$. Other networks use a different
distance function (none of them uses $\delta_{xor}$). For each of
these networks, with $O(\log n)$ out-going links per node, it was
established that plain {\sc greedy}\xspace (\emph{without} {\sc lookahead}\xspace) is
sub-optimal and achieves $\Omega(\log n)$ expected route lengths. The
results suggest that {\sc lookahead} has significant impact on {\sc greedy}\xspace
routing.
Unfortunately, realizing {\sc greedy}\xspace routing with {\sc lookahead}\xspace on a
degree-$k$ network implies that $O(k^2)$ nodes need to be considered
in each hop, while plain {\sc greedy}\xspace needs to consider only $k$ nodes.
For $k= \log_2 n$, this implies a $O(\log n)$ overhead for {\sc lookahead}\xspace
routing in every hop.
Papillon\xspace demonstrates that it is possible to construct a graph in
which each node has degree $d$ and in which {\sc greedy}\xspace \emph{without}
1-{\sc lookahead}\xspace has routes of length $\Theta(\log n / \log d)$ in the
worst case, for the metrics $\delta_{clockwise}$, $\delta_{absolute}$
and $\delta_{xor}$. Furthermore, for all $d = o(\log n)$, plain {\sc
greedy} on our network design beats even the results obtained in
\cite{manku:stoc04} with $1$-{\sc lookahead}.
\subsection*{Previous Butterfly-based Constructions}
\noindent
Butterfly networks have been used in the context of routing networks
for DHTs as follows:
\begin{enumerate}
\item Deterministic butterflies have been proposed for DHT routing by
Xu {\it et al.}\xspace~\cite{xu:infocom03}, who subsequently developed their
ideas into Ulysses~\cite{ulysses:icnp03}. Papillon\xspace for distance
function $\delta_{clockwise}$ has structural similarities with
Ulysses -- both are butterfly-based networks. The key differences
are as follows: (a) Ulysses does not use $\delta_{absolute}$ as
its distance function, (b) Ulysses does not use {\sc greedy}\xspace routing,
and (c) Ulysses uses more links than Papillon\xspace for distance
function $\delta_{clockwise}$ -- additional links have been
introduced to ameliorate non-uniform edge congestion caused by
Ulysses' routing algorithm. In contrast, the {\sc congestion-free}
routing algorithm developed in \S\ref{sec:improved} obviates the
need for any additional links in Papillon\xspace (see
Theorem~\ref{thm:congestion_free_clockwise}).
\item Viceroy~\cite{viceroy:podc02} is a \emph{randomized} butterfly
network which routes in $O(\log n)$ hops in expectation with
$\Theta(1)$ links per node. Mariposa (see
reference~\cite{dipsea:2004} or~\cite{manku:podc03}) improves upon
Viceroy by providing routes of length $O(\log n / \log d)$ in the
worst-case, with $d$ out-going links per node. Viceroy and
Mariposa are different from other randomized networks in terms of
their design philosophy.
The Papillon\xspace\ topology borrows elements of the geometric embedding of the
butterfly in a circle from Viceroy \cite{viceroy:podc02} and from
\cite{manku:podc03}, while extending them for {\sc greedy} routing.
\end{enumerate}
\section{Papillon\xspace}
\label{sec:papillon}
We construct two variants of butterfly networks, one each for
distance-functions $\delta_{clockwise}$ and $\delta_{absolute}$. The
network has $n$ nodes arbitrarily positioned on a ring. We label the
nodes from $0$ to $n-1$ according to their order on the ring. For
convenience, $x \bmod n$ always represents an element lying in the
range $[0, n-1]$ (even when $x$ is negative, or greater than $n-1$).
\begin{mydefinition}{Papillon\xspace for $\delta_{clockwise}$}
${\mathcal B}_{clockwise}(\kappa,m)$ is a directed graph, defined
for any pair of integers $\kappa,m \geq 1$
\begin{enumerate}
\item Let $n = \kappa^m m$.
\item Let $\ell(u) \equiv (m-1) - (u \bmod m)$. Each node has
$\kappa$ links. For node $u$, these directed links are to nodes
$(u + x) \bmod n$, where
$ x \in \{1 + im\kappa^{\ell(u)}\ |\ i \in [0, \kappa -1 ] \}$.
We denote the link with node $(u+1) \bmod n$ as $u$'s
\emph{``short link''}. The other $\kappa-1$ links are called
$u$'s \emph{``long links''}.
\end{enumerate}
\end{mydefinition}
\begin{mydefinition}{Papillon\xspace for $\delta_{absolute}$}
${\mathcal B}_{absolute}(k,m)$ is a directed graph, defined for
any pair of integers $k,m \geq 1$,
\begin{enumerate}
\item Let $n =(2k+1)^m m$.
\item Let $\ell(u) \equiv (m-1) - (u \bmod m)$. Each node has
$2k+2$ out-going links. Node $u$ makes $2k+1$ links with nodes
$(u + x) \bmod n$, where
$ x \in \{1 + im(2k+1)^{\ell(u)}\ |\ i \in [-k, +k] \}$.
Node $u$ also makes an out-going link with node $(u+x) \bmod n$,
where $x = -m+1$.
We denote the link with node $(u+1) \bmod n$
as $u$'s \emph{``short link''}. The other $2k+1$ links are
called $u$'s \emph{``long links''}.
\end{enumerate}
\end{mydefinition}
In both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$, all
out-going links of node $u$ are incident upon nodes with level
$(\ell(u) - 1) \bmod m$. In ${\mathcal B}_{clockwise}$, the short
links are such that each hop diminishes the remaining
\emph{clockwise} distance by at least one. Therefore, {\sc greedy}\xspace routing
is guaranteed to take a finite number of hops. In ${\mathcal
B}_{absolute}$, not every {\sc greedy}\xspace hop diminishes the remaining
\emph{absolute} distance. However, {\sc greedy}\xspace routes are still finite
in length, as we show in the proof of Theorem~\ref{thm:absolute}.
\begin{theorem} \label{thm:clockwise}
{\sc greedy}\xspace routing in ${\mathcal B}_{clockwise}$ with distance
function $\delta_{clockwise}$ takes $3m-2$ hops in
the worst-case. The average is less than $2m-1$ hops.
\end{theorem}
\begin{proof}
For any node $u$, we define
$
\text{SPAN}(u)\equiv \{ v \ |\ 0 \leq \delta_{clockwise}(u, v) < m
\kappa^{\ell(u)+1} \}.
$
Let $t$ and $u$ denote the target node and the current node,
respectively. Routing
proceeds in (at most) three phases:
\begin{center}
\begin{tabular}{lll}
Phase I: & $t \not\in \text{SPAN}(u)$ & (at most $m-1$ hops)\\
Phase II: & $t \in \text{SPAN}(u)$ and
$\delta_{clockwise}(u,t) \ge m$ & (at most $m$ hops)\\
Phase III: & $t \in \text{SPAN}(u)$ and
$\delta_{clockwise}(u,t) < m$ & (at most $m-1$ hops)
\end{tabular}
\end{center}
We now prove upper bounds on the number of hops in each phase.
\begin{enumerate}
\item[I.]
The out-going links of $u$ are incident upon nodes at
level $(\ell(u) - 1) \bmod m$. So eventually, the level of the
current node
$u$ will be $m-1$. At this point,
$t \in \text{SPAN}(u)$ because $\text{SPAN}(u)$ includes
\emph{all} the
nodes. Thus Phase I lasts for at most $m-1$ hops
($\frac{m-1}{2}$ hops on
average).
\item[II.]
{\sc greedy}\xspace will forward the
message to some node $v$ such that $t \in \text{SPAN}(v)$ and
$\ell(v)=\ell(u)-1$. Eventually, the current node $u$ will
satisfy the property $\ell(u) = 0$. This node will forward
the message to some node
$v$ with $\ell(v) = m-1$ such that $\delta_{clockwise}(v,t) < m$,
thereby terminating this phase of routing. There are at most $m$
hops in this phase (at most $m$ on average as well).
\item[III.]
In this phase, {\sc greedy}\xspace will decrease the clockwise
distance by exactly one in each hop by following the
short-links. Eventually, target $t$ will be reached. This phase
takes at most $m-1$ hops ($\frac{m-1}{2}$ hops on
average).
\end{enumerate}
The worst-case route length is $3m-2$.
On average, routes are at most $2m-1$ hops long.
\end{proof}
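The construction and the bound of Theorem~\ref{thm:clockwise} are easy to check empirically on small instances. The following Python sketch is illustrative only (an exhaustive check, not part of the proof): it builds ${\mathcal B}_{clockwise}(\kappa,m)$ for $\kappa=2$, $m=3$ and measures the longest {\sc greedy}\xspace route under $\delta_{clockwise}$.
\begin{verbatim}
# Illustrative check of B_clockwise(kappa, m) and of the 3m-2 bound.
def build_b_clockwise(kappa, m):
    n = kappa**m * m
    level = lambda u: (m - 1) - (u % m)
    def neighbors(u):
        return [(u + 1 + i * m * kappa**level(u)) % n for i in range(kappa)]
    return n, neighbors

def greedy_route_length(s, t, n, neighbors):
    hops = 0
    while s != t:
        # greedy: minimize the remaining clockwise distance
        s = min(neighbors(s), key=lambda v: (t - v) % n)
        hops += 1
    return hops

kappa, m = 2, 3
n, nbrs = build_b_clockwise(kappa, m)          # n = 24 nodes
worst = max(greedy_route_length(s, t, n, nbrs)
            for s in range(n) for t in range(n))
print(worst, "<=", 3 * m - 2)                  # worst-case bound above
\end{verbatim}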
\begin{theorem} \label{thm:absolute}
{\sc greedy}\xspace routing in ${\mathcal B}_{absolute}$ with distance
function $\delta_{absolute}$ takes $3m-2$ hops in
the worst-case. The average is less than $2m-1$ hops.
\end{theorem}
\begin{proof}
For any node $u$, we define
\[
\text{SPAN}(u)\equiv \{ v \ |\ \delta_{absolute}(u, v) = | c +
m\sum_{i=0}^{\ell(u)} (2k+1)^i d_i |,\ c \in [0, m-1],\ d_i \in [-k,
+k] \,\}.
\]
Let $t$ and $u$ denote the target node and the current node,
respectively.
Routing proceeds in (at most) three phases:
\begin{center}
\begin{tabular}{lll}
Phase I: & $t \not\in \text{SPAN}(u)$ & (at most $m-1$ hops)\\
Phase II: & $t \in \text{SPAN}(u)$ and
$\delta_{absolute}(u,t) \ge m$ & (at most $m$ hops)\\
Phase III: & $t \in \text{SPAN}(u)$ and
$\delta_{absolute}(u,t) < m$ & (at most $m-1$ hops)
\end{tabular}
\end{center}
We now prove upper bounds on the number of hops in each phase.
\begin{enumerate}
\item[I.] All out-going links of node $u$ are incident upon nodes at
level $(\ell(u) - 1) \bmod m$. So eventually, the current node
$u$ will satisfy the property $\ell(u) = m - 1$. At this point,
$t \in \text{SPAN}(u)$ because $\text{SPAN}(u)$ includes
\emph{all} nodes. Thus Phase I lasts at most $m-1$ hops
(at most $\frac{m-1}{2}$ hops on average).
\item[II.]
Phase II terminates if target node $t$ is reached, or if
$\delta_{absolute}(u, t) < m$.
Node
$u$ always forwards the message to some node $v$ such that
$t \in \text{SPAN}(v)$ and $\ell(v) = \ell(u) - 1$. So
eventually, either target $t$ is reached, or the current node
$u$ satisfies the property
$\ell(u) = 0$. At this point, if node $u$ forwards the message to
node $v$, then it is guaranteed that $\ell(v) =
m-1$ and $\delta_{absolute}(v,t) < m$,
thereby terminating Phase II. There are at most $m$
hops in this phase (at most $m$ on average as well).
\item[III.] The target node $t$ is reached in at most $m-1$
hops (the existence of the ``back
edge'' that connects node $u$ to node $(u + 1 - m) \bmod n$
guarantees this). This phase takes at most $m-1$ hops (at most
$\frac{m-1}{2}$ hops on average).
\end{enumerate}
The worst-case route length is $3m-2$.
On average, routes are at most $2m-1$ hops long.
\end{proof}
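An analogous empirical check can be made for ${\mathcal B}_{absolute}$. The sketch below is again illustrative only; a hop cap is included so the loop remains safe regardless of how ties are broken by the {\sc greedy}\xspace rule.
\begin{verbatim}
# Illustrative check of B_absolute(k, m) with greedy routing under
# delta_absolute on a small instance (k = 1, m = 2, i.e. n = 18).
def build_b_absolute(k, m):
    n = (2 * k + 1)**m * m
    level = lambda u: (m - 1) - (u % m)
    def neighbors(u):
        offs = [1 + i * m * (2 * k + 1)**level(u) for i in range(-k, k + 1)]
        offs.append(1 - m)                     # the "back edge"
        return [(u + x) % n for x in offs]
    return n, neighbors

def delta_absolute(u, v, n):
    d = (v - u) % n
    return min(d, n - d)

def greedy_route_length(s, t, n, neighbors, cap):
    hops = 0
    while s != t and hops <= cap:
        s = min(neighbors(s), key=lambda v: delta_absolute(v, t, n))
        hops += 1
    return hops

k, m = 1, 2
n, nbrs = build_b_absolute(k, m)
worst = max(greedy_route_length(s, t, n, nbrs, cap=3 * m)
            for s in range(n) for t in range(n))
print(worst, "<=", 3 * m - 2)                  # bound of the theorem above
\end{verbatim}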
\bigskip
Routes in both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$
are at most $3m-2$ hops, which is $O(\log (\kappa^m m) / \log
\kappa)$ and $O(\log ((2k+1)^m m) / \log (2k+2))$, respectively.
Given degree $d$ and diameter $\Delta$, the size of Papillon\xspace is $n =
d^{\Theta(\Delta)}\Delta$ nodes. Given degree $d$ and network size $n$,
the longest route has length $\Delta = O(\log n / \log d)$.
\section{Improved Routing Algorithms for Papillon\xspace}
\label{sec:improved}
{\sc greedy}\xspace routing does not route along shortest-paths in ${\mathcal
B}_{clockwise}$ and ${\mathcal B}_{absolute}$. We demonstrate this
constructively below, where we study a routing strategy called {\sc
hypercubic-routing} which achieves shorter path lengths than {\sc greedy}\xspace.
\subsection*{Hypercubic Routing}
\begin{theorem} \label{thm:fast_clockwise}
There exists a routing strategy for ${\mathcal B}_{clockwise}$ in
which routes take $2m-1$ hops in the worst-case. The average is at
most $1.5m$ hops.
\end{theorem}
\begin{proof}
Consider the following {\sc hypercubic-routing} algorithm on
${\mathcal B}_{clockwise}$. Let $s$ be the source node, $t$ the
target, and let $dist = \delta_{clockwise}(s, t) = c + m +
m\sum_{i=0}^{i=m-1} \kappa^i d_i$ with $0 \leq c <m$ and $0 \leq
d_i < \kappa$ ($dist$ has exactly one such representation, unless
$dist < m$, in which case routing takes $dist < m$ short-link hops).
Phase I: Follow the short-links to ``fix'' the $c$-value to zero.
This takes at most $m-1$ hops (at most $0.5m$ hops on average).
Phase II: In exactly $m$ hops, ``fix'' the $d_i$'s in succession to
make them all zeros: When the current node is $u$, we fix
$d_{\ell(u)}$ to zero by following the appropriate long-link, i.e.,
by shrinking the clockwise distance by $d_{\ell(u)}
\kappa^{\ell(u)} m + 1$. The new node $v$ satisfies $\ell(v) =
(\ell(u)+m-1) (\bmod~m)$. When each $d_i$ is zero, we have reached
the target.
Overall, the worst-case route length is $2m-1$. Average route
length is at most $1.5m$.
\end{proof}
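The two phases of this proof translate directly into code. The following Python sketch is illustrative: every offset it uses is, by construction, a link of the current node, so no explicit adjacency structure is needed. It computes {\sc hypercubic-routing} path lengths on a small ${\mathcal B}_{clockwise}$ instance and checks the $2m-1$ bound.
\begin{verbatim}
# Illustrative sketch of the two-phase hypercubic routing on B_clockwise.
def hypercubic_route_length(s, t, kappa, m):
    n = kappa**m * m
    level = lambda u: (m - 1) - (u % m)
    dist = (t - s) % n
    if dist < m:                       # degenerate case: short links only
        return dist
    hops = 0
    c = dist % m                       # Phase I: fix the c-value
    s = (s + c) % n
    hops += c
    q = dist // m - 1                  # q = sum_i d_i * kappa^i
    digits = [(q // kappa**i) % kappa for i in range(m)]
    for _ in range(m):                 # Phase II: fix one digit per hop
        d = digits[level(s)]
        s = (s + 1 + d * m * kappa**level(s)) % n
        hops += 1
    assert s == t                      # the target is reached exactly
    return hops

kappa, m = 2, 3
n = kappa**m * m
worst = max(hypercubic_route_length(s, t, kappa, m)
            for s in range(n) for t in range(n))
print(worst, "<=", 2 * m - 1)          # worst-case bound of the theorem
\end{verbatim}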
\begin{theorem} \label{thm:fast_absolute}
There exists a routing strategy for ${\mathcal B}_{absolute}$ in
which routes take $2m-1$ hops in the worst-case. The average is at
most $1.5m$ hops.
\end{theorem}
\begin{proof}
Let $s$ be the source node, $t$ the target.
Phase I: Follow the short-links in the clockwise direction, to
reach a node $s'$ such that $\ell(s') = \ell(t)$. This takes at
most $m-1$ hops (at most $0.5m$ hops on average). The remaining
distance can be expressed as $m + m\sum_{i=0}^{i=m-1} (2k+1)^i d_i$
where $-k \leq d_i \leq k$. There is a unique such representation.
Phase II: In exactly $m$ hops, ``fix'' the $d_i$'s in succession to
make them all zeros: When the current node is $u$, we fix
$d_{\ell(u)}$ by following the appropriate long-link, i.e.,
by traveling distance $1 + d_{\ell(u)} (2k+1)^{\ell(u)} m$ along
the circle (this distance is positive or negative, depending upon
the sign of $d_{\ell(u)}$). The new node $v$ satisfies $\ell(v) =
(\ell(u)-1) (\bmod~m)$. When each $d_i$ is zero, we have reached
the target.
Overall, the worst-case route length is $2m-1$. Average route
length is at most $1.5m$.
\end{proof}
\bigskip
Note that the edges that connect node $u$ to node $(u+1-m) \bmod n$
are redundant for {\sc hypercubic-routing} since they are never used.
However, these edges play a crucial role in {\sc greedy}\xspace routing in
${\mathcal B}_{absolute}$ (to guide the message to the target in
Phase III).
\subsection*{Congestion-Free Routing}
Theorems~\ref{thm:fast_clockwise} and ~\ref{thm:fast_absolute} prove
that {\sc greedy}\xspace routing is sub-optimal in the constants. {\sc
hypercubic-routing}, as described above, is faster than
{\sc greedy}\xspace. However, it causes {\it edge-congestion}
because short-links are used more often than long-links. Let $\pi$
denote the ratio of maximum and minimum loads on edges caused by all
$n \choose 2$ pairwise routes. {\sc hypercubic-routing} for
${\mathcal B}_{clockwise}$ consists of two phases (see Proof of
Theorem~\ref{thm:fast_clockwise}). The load due to Phase II is
uniform -- all edges (both short-links and long-links) are used
equally. However, Phase I uses only short-links, due to which $\pi
\not= 1$. We now modify the routing scheme slightly to obtain $\pi =
1$ for both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$.
\begin{theorem} \label{thm:congestion_free_clockwise}
There exists a congestion-free routing strategy in ${\mathcal
B}_{clockwise}$ that takes $2m - 1$ hops in the worst-case and at
most $1.5m$ hops on average, in which $\pi = 1$.
\end{theorem}
\begin{proof}
The theorem is proved constructively, by building a new routing
strategy called {\sc congestion-free}. This routing strategy is
exactly the same as {\sc hypercubic-routing}, with a small change.
Let $s$ be the source node, $t$ the target. Let $c = (t+m-s) \bmod
m$, the difference in levels between $\ell(s)$ and $\ell(t)$.
Phase I: For $c$ steps, follow any out-going link, chosen uniformly
at random. We thus reach a node $s'$ such that $\ell(s') =
\ell(t)$.
Phase II: The remaining distance is $dist = \delta_{clockwise}(s',
t) = m+ m\sum_{i=0}^{i=m-1} \kappa^i d_i$ with $0 \leq d_i <
\kappa$. Continue with Phase II of the {\sc hypercubic-routing}
algorithm for ${\mathcal B}_{clockwise}$ (see
Theorem~\ref{thm:fast_clockwise}).
It is easy to see that in this case, all outgoing links (short- and
long-) are used with equal probability along the route. Hence,
$\pi = 1$.
\end{proof}
\begin{theorem} \label{thm:congestion_free_absolute}
There exists a congestion-free routing strategy in ${\mathcal
B}_{absolute}$ that takes $2m - 1$ hops in the worst-case and at
most $1.5m$ hops on average, in which $\pi = 1$.
\end{theorem}
\begin{proof}
We will ignore the edges that connect node $u$ to node $(u+1-m)
\bmod n$ (recall that these edges are not used in {\sc
hypercubic-routing} described in
Theorem~\ref{thm:fast_absolute}). We will ensure $\pi = 1$ for the
remainder of the edges.
{\sc congestion-free} routing follows the same idea as that for
${\mathcal B}_{clockwise}$
(Theorem~\ref{thm:congestion_free_clockwise}): Let $s$ be the
source node, $t$ the target. Let $c = (t+m-s) \bmod m$, the
difference in levels between $\ell(s)$ and $\ell(t)$. In Phase I,
for $c$ steps, we follow any out-going link, chosen uniformly at
random. We thus reach a node $s'$ such that $\ell(s') = \ell(t)$.
In Phase II, we continue as per Phase II of the {\sc
hypercubic-routing} algorithm for ${\mathcal B}_{absolute}$
(Theorem~\ref{thm:fast_absolute}).
\medskip
An alternate {\sc congestion-free} routing algorithm for ${\mathcal
B}_{absolute}$ that routes deterministically is based upon the
following idea: We express any integer $a \in [-k, +k]$ as the sum
of two integers: $a' = \Floor{(k+a)/2}$ and $a'' =
-\Floor{(k-a)/2}$. It is easy to verify that $a = a' + a''$. Now
if we list all pairs $\langle a', a''\rangle$ for $a \in [-k, +k]$,
then each integer in the range $[-k, +k]$ appears exactly twice as
a member of some pair.
Let $s$ be the source node, $t$ the target. Let $c = (t+m-s) \bmod
m$, the difference in levels between $\ell(s)$ and $\ell(t)$. The
remaining distance is $dist = c + m+ m\sum_{i=0}^{i=m-1} (2k+1)^i
d_i$ with $-k \leq d_i \leq k$ (there is a unique way to represent
$dist$ in this fashion).
Phase I: For $c$ steps, if the current node is $u$, then we follow
the edge corresponding to $d_{\ell(u)}'$, i.e., the edge that
covers distance $1 + md_{\ell(u)}'(2k+1)^{\ell(u)}$ (in the
clockwise or the anti-clockwise direction, depending upon the sign
of $d_{\ell(u)}'$). At the end of this phase, we reach a node $s'$
such that $\ell(s') = \ell(t)$.
Phase II: Continue with Phase II of the {\sc hypercubic-routing}
algorithm for ${\mathcal B}_{absolute}$
(Theorem~\ref{thm:fast_absolute}), for exactly $m$ steps.
Due to the decomposition of integers in $[-k, +k]$ into pairs, as
defined above, all outgoing links (short- and long-) are used
equally. Hence, $\pi = 1$.
\end{proof}
\bigskip
{\bf Notes}: In the context of the current Internet, out-going links
correspond to full-duplex TCP connections. Therefore, the undirected
graph corresponding to ${\mathcal B}_{absolute}$ is of interest. In
this undirected graph, it is possible to devise congestion-free
routing with $\pi = 1$, maximum path length $m + \Floor{m/2}$ and
average route-length at most $1.25m$. This is achieved by making at
most $\Floor{m/2}$ initial random steps either in the down or the up
direction, whichever gets to a node with level $\ell(t)$ faster.
\section{Papillon\xspace with Distance Function $\delta_{xor}$}
\label{sec:xor}
In this Section, we define a variant of Papillon\xspace in which {\sc greedy}\xspace
routing with distance function $\delta_{xor}$ results in worst-case
route length $\Theta(\log n / \log d)$, with $n$ nodes, each having
$d$ out-going links. For integers $s$ and $t$, $\delta_{xor}(s, t)$
is defined as the number of bit-positions in which the binary
representations of $s$ and $t$ differ.
\begin{mydefinition}{Papillon\xspace for $\delta_{xor}$}
${\mathcal B}_{xor}(\lambda, m)$ is a directed graph, defined
for any pair of integers $\lambda, m \geq 1$ where $\lambda$ is a
power of two.
\begin{enumerate}
\item The network has $n = m\lambda^m$ nodes labeled from $0$ to
$n-1$.
\item Let $u$ denote a node. Let $\ell(u)$ denote the unique
integer $x \in [0, m-1]$ that satisfies $x\lambda^m \leq u <
(x+1)\lambda^m$. The node $u$ makes links with nodes with labels
\[
((\ell(u) + 1) \bmod m)\lambda^m + i\lambda^{\ell(u)},
\quad \mathrm{where}\ \
0 \leq i < \lambda.
\]
Thus, if $(u, v)$ is an edge, then $\ell(v) =
(\ell(u) + 1) \bmod m$.
\end{enumerate}
\end{mydefinition}
\begin{theorem} \label{thm:xor}
{\sc greedy}\xspace routing in ${\mathcal B}_{xor}$ with distance function
$\delta_{xor}$ takes $2m-1$ hops in
the worst-case. The average is at most $1.5m$ hops.
\end{theorem}
\begin{proof}
Let the current node be $s$. Let $t$ denote the target node.
Then $s \oplus t$, the bit-wise exclusive-OR of $s$ and $t$, can
uniquely be expressed as $c\lambda^{m} + \sum_{i=0}^{i=m-1}
\lambda^i d_i$, where $c \geq 0$ and $0 \leq d_i < \lambda$.
Routing proceeds in two phases. In Phase I, each of the $d_i$ is
set to zero. This takes at most $m$ steps (at most $m$ on
average). In Phase II, the most significant
$\Ceiling{\log_2 m}$ bits of $s \oplus t$ are set to zero,
thereby
reaching the target. This phase takes at most $m-1$ hops (at most
$\frac{m-1}{2}$ on average).
\end{proof}
\section{Summary}
\label{summary}
We presented Papillon\xspace, a variant of multi-butterfly networks which supports
asymptotically optimal {\sc greedy}\xspace routes of length $O(\log n / \log d)$
with distance functions $\delta_{clockwise}$, $\delta_{absolute}$ and
$\delta_{xor}$,
when each node makes $d$ out-going links, in an $n$-node network.
Papillon\xspace is the first construction with this property.
\medskip
Some questions that remain unanswered:
\begin{enumerate}
\item {\it Is it possible to devise graphs in which {\sc greedy}\xspace routes
with distance function $\delta_{clockwise}$ and
$\delta_{absolute}$ are along shortest-paths? } As Theorems
~\ref{thm:fast_clockwise} and ~\ref{thm:fast_absolute} illustrate,
{\sc greedy}\xspace routing on Papillon\xspace does not route along shortest-paths.
Is this property inherent in {\sc greedy}\xspace routes?
\item {\it What is the upper-bound for the Problem of Greedy Routing
on the Circle? } Papillon\xspace furnishes a lower-bound, which is
asymptotically optimal. However, constructing the
largest-possible graph with degree $d$ and diameter $\Delta$ is
still an interesting combinatorial problem.
\end{enumerate}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
Over the past few years, deep reinforcement learning has gained much popularity as it has been shown to perform better than previous methods on domains with very large state-spaces.
In one of the earliest deep reinforcement learning papers (hereafter the DQN paper), \citet{mnih2015human} presented a method for learning to play Atari 2600 video games, using the Arcade Learning Environment (ALE)~\citep{bellemare13arcade}, from image and performance data alone using the same deep neural network architecture and hyper-parameters for all the games.
DQN outperformed previous reinforcement learning methods on nearly all of the games and recorded better than human performance on most.
As many researchers tackle reinforcement learning problems with deep reinforcement learning methods and propose alternative algorithms, the results of the DQN paper are often used as a benchmark to show improvement.
Thus, implementing the DQN algorithm is important for both replicating the results of the DQN paper for comparison and also building off the original algorithm.
One of the main contributions of the DQN paper was finding ways to improve stability in their artificial neural networks during training.
There are, however, a number of other areas in the implementation of this method that are crucial to its success, which were only mentioned briefly in the paper.
We implemented a Deep Q-Network (DQN) to play the Atari games and replicated the results of \citet{mnih2015human}.
Our implementation, available freely online,\footnote{\url{www.github.com/h2r/burlap_caffe}} runs around 4x faster than the original implementation.
Our implementation is also designed to be flexible to different neural network architectures and problem domains outside of ALE.
In replicating these results, we found a few key insights into the process of implementing such a system.
In this paper, we highlight key techniques that are essential for good performance and replicating the results of \citet{mnih2015human}, including termination conditions and gradient descent optimization algorithms, as well as expected results of the algorithm, namely the fluctuating performance of the network.
\section{Related Work}
The Markov Decision Process (MDP) \citep{bellman1957markovian} is the typical formulation used for reinforcement learning problems.
An MDP is defined by a five-tuple $(\mathcal{S, A, T, R, E})$;
$\mathcal{S}$ is the agent's state-space;
$\mathcal{A}$ is the agent's action-space;
$\mathcal{T}(s, a, s')$ represents the transition dynamics, which returns the probability that taking action $a$ in state $s$ will result in the state $s'$;
$\mathcal{R}(s, a, s')$ is the reward function, which returns the reward received when transitioning to state $s'$ after taking action $a$ in state $s$;
and $\mathcal{E} \subset \mathcal{S}$ is the set of terminal states, which once reached prevent any future action or reward.
The goal of planning in an MDP is to find a policy $\pi : S \rightarrow A$, a mapping from states to actions, that maximizes the expected future discounted reward when the agent chooses actions according to $\pi$ in the environment. A policy that maximizes the expected future discounted reward is an optimal policy and is denoted by $\pi^*$.
A key concept related to MDPs is the Q-function, $Q^\pi : S \times A \rightarrow \mathbb{R}$, that defines the expected future discounted reward for taking action $a$ in state $s$ and then following policy $\pi$ thereafter. According to the Bellman equation, the Q-function for the optimal policy (denoted $Q^*$) can be recursively expressed as:
\begin{equation}
Q^*(s, a) = \sum_{s' \in S} T(s, a, s') \left [ R(s, a, s') + \gamma \max_{a'} Q^*(s', a') \right ]
\end{equation}
where $0 \leq \gamma \leq 1$ is the discount factor that defines how valuable near-term rewards are compared to long-term rewards.
Given $Q^*$, the optimal policy, $\pi^*$, can be trivially recovered by greedily selecting the action in the current state with the highest Q-value: $\pi^*(s) = \argmax_a Q^*(s, a)$. This property has led to a variety of learning algorithms that seek to directly estimate $Q^*$, and recover the optimal policy from it. Of particular note is Q-Learning~\citep{watkins1989learning}.
In Q-Learning, an agent begins with an arbitrary estimate ($Q_0$) of $Q^*$ and iteratively improves its estimate by taking arbitrary actions in the environment, observing the reward and next state, and updating its Q-function estimate according to
\begin{equation}
Q_{t+1}(s_t, a_t) \gets Q_t(s_t, a_t) + \alpha_t \left[ r_{t+1} + \gamma \max_{a'} Q_t(s_{t+1}, a') - Q_t(s_t, a_t) \right],
\end{equation}
where $s_t$, $a_t$, $r_t$ are the state, action, and reward at time step $t$, and $\alpha_t \in (0, 1]$ is a step size smoothing parameter.
Q-Learning is guaranteed to converge to $Q^*$ under the following conditions: the Q-function estimate is represented tabularly (that is, a value is associated with each unique state-action pair), the agent visits each state and action infinitely often, and $\alpha_t \rightarrow 0$ as $t \rightarrow \infty$.
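For reference, a minimal tabular Q-Learning loop implementing the update above can be written as follows. This is an illustrative sketch, not the DQN implementation discussed in this paper: the \texttt{env} interface (\texttt{reset()} returning a state, \texttt{step(a)} returning the next state, reward, and a terminal flag) is an assumed convention, not part of ALE or of any specific library, and states are assumed discrete and hashable.
\begin{verbatim}
# Illustrative tabular Q-Learning with an epsilon-greedy behavior policy.
import random
from collections import defaultdict

def q_learning(env, n_actions, episodes=500, alpha=0.1,
               gamma=0.99, epsilon=0.1):
    Q = defaultdict(lambda: [0.0] * n_actions)   # Q[s][a], zero-initialized
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s_next, r, done = env.step(a)        # assumed interface
            target = r + (0.0 if done else gamma * max(Q[s_next]))
            Q[s][a] += alpha * (target - Q[s][a])  # Q-Learning update above
            s = s_next
    return Q
\end{verbatim}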
When the state-space of a problem is large (or infinite), Q-learning's $Q^*$ estimate is often implemented with function approximation, rather than a tabular function, which allows generalization of experience.
However, estimation errors in the function approximation can cause Q-learning, and other ``off policy'' methods, to diverge~\citep{baird1995residual}, requiring careful use of function approximation.
\section{Deep Q-Learning}
\begin{algorithm}[t]
\begin{algorithmic}
\State Initialize replay memory $D$ to capacity $N$
\State Initialize action-value function $Q$ with random weights $\theta$
\State Initialize target action-value function $\hat Q$ with weights $\theta^{-} = \theta$
\For{episode 1, $M$}
\State Initialize sequence $s_1 = \{ x_1 \}$ and preprocessed sequence $\phi_1 = \phi(s_1)$
\For{$t = 1, T$}
\State With probability $\varepsilon$ select a random action $a_t$
\State otherwise select $a_t = \argmax_a Q(\phi(s_t), a; \theta)$
\State Execute action $a_t$ in the emulator and observe reward $r_t$ and image $x_{t+1}$
\State Set $s_{t+1} = s_t, a_t, x_{t+1}$ and preprocess $\phi_{t+1} = \phi(s_{t+1})$
\State Store experience $(\phi_t, a_t, r_t, \phi_{t+1})$ in $D$
\State Sample random minibatch of experiences $(\phi_j, a_j, r_j, \phi_{j+1})$ from $D$
\State Set $y_j = \begin{cases}
r_j & \text{if episode terminates at step $j+1$}\\
r_j + \gamma \max_{a'} \hat Q(\phi_{j+1}, a'; \theta^{-}) & \text{otherwise}
\end{cases}$
\State Perform a gradient descent step on $(y_j - Q(\phi_j, a_j ; \theta))^2$ with respect to the weights $\theta$
\State Every $C$ steps reset $\hat Q = Q$
\EndFor
\EndFor
\end{algorithmic}
\caption{Deep Q-learning with experience replay}
\label{alg:dqn}
\end{algorithm}
Deep Q-Learning (DQN)~\citep{mnih2015human} is a variation of the classic Q-Learning algorithm with 3 primary contributions: (1) a deep convolutional neural net architecture for Q-function approximation; (2) using mini-batches of random training data rather than single-step updates on the last experience; and (3) using older network parameters to estimate the Q-values of the next state.
Pseudocode for DQN, copied from \citet{mnih2015human}, is shown in Algorithm~\ref{alg:dqn}.
The deep convolutional architecture provides a general purpose mechanism to estimate Q-function values from a short history of image frames (in particular, the last 4 frames of experience). The latter two contributions concern how to keep the iterative Q-function estimation stable.
In supervised deep-learning work, performing gradient descent on mini-batches of data is often used as a means to efficiently train the network. In DQN, it plays an additional role.
Specifically, DQN keeps a large history of the most recent experiences, where each experience is a five-tuple $(s, a, s', r, T)$, corresponding to an agent taking action $a$ in state $s$, arriving in state $s'$ and receiving reward $r$; and $T$ is a boolean indicating if $s'$ is a terminal state.
After each step in the environment, the agent adds the experience to its memory.
After some small number of steps (the DQN paper used 4), the agent randomly samples a mini-batch from its memory on which to perform its Q-function updates.
Reusing previous experiences in updating a Q-function is known as {\em experience replay}~\citep{lin1992self}.
However, while experience replay in RL was typically used to accelerate the backup of rewards, DQN's approach of taking fully random samples from its memory to use in mini-batch updates helps decorrelate the samples from the environment that otherwise can cause bias in the function approximation estimate.
The final major contribution is using older, or ``stale,'' network parameters when estimating the Q-value for the next state in an experience and only updating the stale network parameters on discrete many-step intervals. This approach is useful to DQN, because it provides a stable training target for the network function to fit, and gives it reasonable time (in number of training samples) to do so. Consequently, the errors in the estimation are better controlled.
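To make the interaction between the replay memory and the stale (``target'') parameters concrete, the following sketch shows one training step in a PyTorch-like style; the network constructor \texttt{make\_network()}, the buffer size, and the storage of experiences as tensors are assumptions of the sketch rather than the exact settings of \citet{mnih2015human}.
\begin{verbatim}
import copy
import random
from collections import deque

import torch
import torch.nn.functional as F

replay = deque(maxlen=1_000_000)        # experience memory D
q_net = make_network()                  # assumed constructor for the CNN Q-function
target_net = copy.deepcopy(q_net)       # stale copy used for bootstrap targets
optimizer = torch.optim.RMSprop(q_net.parameters(), lr=0.00025)

def train_step(batch_size=32, gamma=0.99):
    # fully random minibatch sampling decorrelates consecutive experiences
    batch = random.sample(replay, batch_size)
    s, a, r, s_next, terminal = map(torch.stack, zip(*batch))
    with torch.no_grad():               # targets come from the stale parameters
        max_next_q = target_net(s_next).max(dim=1).values
        y = r + gamma * max_next_q * (1.0 - terminal.float())
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # actions stored as int64
    loss = F.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def sync_target():                      # called every C steps: reset Q_hat = Q
    target_net.load_state_dict(q_net.state_dict())
\end{verbatim}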
Although these contributions and the overall algorithm are conceptually straightforward, there are a number of important details to achieving the same level of performance reported by \citet{mnih2015human}, as well as important properties of the learning process that a designer should keep in mind. We describe these details next.
\subsection{Implementation Details}
Large systems, such as DQN, are often difficult to implement since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution.
Consequently, some important low-level details of the algorithm are not explicitly mentioned or fully clarified in the DQN paper.
Here we highlight some of these key additional implementation details, which are provided in the original DQN code.\footnote{\url{www.github.com/kuz/DeepMind-Atari-Deep-Q-Learner}}
Firstly, every episode is started with a random number, between $0$ and $30$, of ``No-op'' low-level Atari actions (in contrast to the agent's actions, which are repeated for $4$ frames) in order to offset which frames the agent sees, since the agent only sees every $4$th Atari frame.
Similarly, the $m$ frame history used as the input to the CNN is the last $m$ frames that the agent sees, not the last $m$ Atari frames.
Additionally, before any gradient descent steps, a random policy is run for $\num{50000}$ steps to fill in some experiences in order to avoid over-fitting to early experiences.
Another parameter worth noting is the network update frequency.
The original DQN implementation only chose to take a gradient descent step every $4$ environment steps of the algorithm as opposed to every step, as Algorithm \ref{alg:dqn} might suggest.
Not only does this greatly increase the training speed (since learning steps on the network are far more expensive than forward passes), it also causes the experience memory to more closely resemble the state distribution of the current policy (since 4 new frames are added to the memory between training steps as opposed to 1) and may prevent the network from over-fitting.
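The following sketch shows how these details fit into the outer training loop; the \texttt{env} and \texttt{agent} interfaces and the no-op action index are illustrative assumptions, with the constants set to the values discussed above.
\begin{verbatim}
import random

NO_OP_ACTION = 0             # assumed index of the Atari "No-op" action
NO_OP_MAX = 30               # random no-op actions at episode start
REPLAY_START_SIZE = 50_000   # random-policy steps before any gradient descent
UPDATE_FREQUENCY = 4         # environment steps between gradient descent steps

def run_episode(env, agent, total_steps):
    obs = env.reset()
    # offset which frames the agent sees, since it only sees every 4th Atari frame
    for _ in range(random.randint(0, NO_OP_MAX)):
        obs, _, done = env.step(NO_OP_ACTION)
        if done:
            obs = env.reset()
    done = False
    while not done:
        if total_steps < REPLAY_START_SIZE:
            action = env.random_action()    # fill the replay memory first
        else:
            action = agent.act(obs)         # epsilon-greedy over the last m agent frames
        obs, reward, done = env.step(action)  # env repeats the action for 4 Atari frames
        agent.remember(obs, action, reward, done)
        total_steps += 1
        if total_steps >= REPLAY_START_SIZE and total_steps % UPDATE_FREQUENCY == 0:
            agent.train_step()              # one minibatch update every 4 env steps
    return total_steps
\end{verbatim}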
\subsection{The Fluctuating Performance of DQN}
A common belief among new users of DQN is that performance should improve fairly steadily as more training time is given. Indeed, average Q-learning curves in tabular settings typically show fairly stable improvement, and supervised deep-learning problems also tend to have fairly steady average improvement as more data becomes available. However, it is not uncommon for DQN to exhibit ``catastrophic forgetting,'' in which the agent's performance drastically drops after a period of learning. For example, in Breakout, the DQN agent may reach a point of averaging a high score of over $400$, and then, after another large batch of learning, it might average a score of only around $200$. The solution \citet{mnih2015human} propose to this problem is simply to save the network parameters that resulted in the best test performance.
One of the reasons this forgetting occurs is the inherent instability of approximating the Q-function over a large state-space using these Bellman updates.
One of the main contributions of \citet{mnih2015human} was fighting this instability using experience replay and stale network parameters, as mentioned above.
Additionally, \citet{mnih2015human} found that clipping the gradient of the error term to be between $-1.0$ and $1.0$ further improved the stability of the algorithm by not allowing any single mini-batch update to change the parameters drastically.
These additions, and others, to the DQN algorithm improve its stability significantly, but the network still experiences catastrophic forgetting.
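As a sketch of the error clipping just described (the prediction and target tensors are assumed inputs), note that clipping the error term inside a squared loss has the same gradients as a Huber loss, which is quadratic for small errors and linear beyond $\pm 1$:
\begin{verbatim}
import torch.nn.functional as F

def clipped_td_loss(q_pred, q_target):
    # Equivalent, in terms of gradients, to clipping the TD error to [-1, 1]:
    # the loss is quadratic for |error| <= 1 and linear beyond that, so no
    # single minibatch update can change the parameters drastically.
    return F.smooth_l1_loss(q_pred, q_target)
\end{verbatim}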
Another reason this catastrophic forgetting occurs is that the algorithm is learning a proxy, the Q-values, for a policy instead of approximating the policy directly.
A side effect of this method of policy generation is a learning update could increase the accuracy of a Q-function approximator, while decreasing the performance of the resulting policy.
For example, say the true Q-value for some state, $s$, and actions, $a_1$ and $a_2$, are $Q^*(s, a_1) = 2$ and $Q^*(s, a_2) = 3$, so the optimal policy at state $s$ would be to choose action $a_2$.
Now say the Q-function approximator for these values using the current parameters, $\theta$, estimates $\hat Q(s, a_1; \theta) = 0$ and $\hat Q(s, a_2; \theta) = 1$, so the policy chosen by this approximator will also be $a_2$.
But, after some learning updates, we arrive at a set of parameters $\theta'$, where $\hat Q(s, a_1; \theta') = 2$ and $\hat Q(s, a_2; \theta') = 1$.
These learning updates clearly decreased the error of the Q-function approximator, but now the agent will not choose the optimal action at state $s$.
Furthermore, Q-values for different actions of the same state can be very similar when none of these actions has a significant effect on near-term reward.
These small differences are the result of longer-term rewards and are therefore critical to the optimal policy.
The consequence of trying to learn an approximator for this type of function is that very small errors in the Q-values can result in very different policies, making it difficult to learn long-term policies.
As an example of this, we will consider Breakout.
Breakout is an Atari game where the player controls a paddle with the goal of bouncing a ball to destroy all the bricks on the screen without dropping the ball.
There is an optimal strategy which is to destroy the bricks on the side of the screen so that the ball can be bounced above the bricks.
When the ball is above the bricks, the Q-values are much higher than they are when the ball is below the bricks, so we would expect a policy that follows the true Q-values to quickly exploit this strategy.
Every time the paddle bounces the ball, the direction of the bounce does not affect the short-term reward, since a brick will be broken regardless of the direction.
However, the direction of the bounce does affect the more distant reward of bouncing the ball above the bricks by breaking the bricks on the side of the screen.
Thus, it is difficult for a Q-function approximator to learn this long-term optimal policy.
Figure \ref{fig:q_values} shows Q-values approximated by the best network and a network that performed poorly very late into training on the same inputs near the beginning of a Breakout game.
The first frame illustrates a scenario where any action could be made and the agent could still prevent the ball from falling a few actions into the future.
But the actions made before bouncing the ball also allow the agent to aim the ball.
The Q-values in this case are very similar for both networks, but the chosen actions are different.
In the second scenario, if the agent does not take the left action, the ball will be dropped, which is a terminal state.
In this case, the Q-values are much more distinct.
Thus, this fluctuating performance is to be expected while running this algorithm.
\begin{figure}
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{breakout_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_best_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_worst_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{breakout_unambiguous}
\caption{Example Frames}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_best_unambiguous}
\caption{Best Network}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_worst_unambiguous}
\caption{Worst Network}
\end{subfigure}
\caption{
A comparison of the Q-values calculated by the best- and worst-performing networks during testing, among networks that had been trained for at least 30,000,000 steps.
The lighter cross-hatched bar indicates the action with the highest Q-value.
The top frame corresponds to a situation where the actions do not have a significant effect on the near-future reward, while the bottom one shows a situation where the left action must be taken to avoid losing a life.
The ``Release'' action releases the ball at the beginning of every round or does nothing (the same as ``No-op'') if the ball is already in play.
}
\label{fig:q_values}
\end{figure}
\section{Machine Learning Libraries}
Our implementation uses the Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java code library \citep{burlap}.
This library makes it easy to define a factored representation of an MDP and offers many well-known learning and planning algorithms as well as the machinery for creating new ones.
For running and interacting with the Atari video games, we used the Arcade Learning Environment (ALE) \citep{bellemare13arcade}.
ALE is a framework that provides a simple way to retrieve the screen and reward data from the Atari games as well as interact with the game through single actions.
We used ALE's FIFO Interface to interact with ALE through Java.
To run and train our convolutional neural net, we used Berkeley's Caffe (Convolutional Architecture for Fast Feature Embedding) library \citep{jia2014caffe}.
Caffe is a fast deep learning framework for constructing and training neural network architectures.
To interact with Caffe through our Java library, we used the JavaCPP library provided by Bytedeco.\footnote{\url{www.github.com/bytedeco/javacpp}}
\section{Results}
To measure our performance against that of \citet{mnih2015human}, we followed the same evaluation procedure as their paper on three games: Pong, Breakout, and Seaquest.
We trained the agent for $\num{50000000}$ steps (each step is 4 Atari frames) and tested performance every $\num{250000}$ steps.
We saved the network parameters that resulted in the best test performance.
We then evaluated the trained agent with the best-performing network parameters on 30 games with an $\varepsilon$-greedy policy where $\varepsilon = 0.05$.
Each game was also initialized with a random number of ``No-op'' low-level Atari actions between $0$ and $30$.
We then took the average score of those games.
The comparison of our results and those of the DQN paper on Pong, Breakout, and Seaquest are shown in Table \ref{results}.
Each training process took about 3 days for our implementation and about 10 and a half days for the original implementation on our setup.
The differences in performance stem from the differences in gradient descent optimization algorithm and learning rate.
These differences are covered in more detail in Section~\ref{RMS}.
\begin{table}[t]
\captionsetup{skip=8pt}
\caption{Comparison of average game scores obtained by our DQN implementation and the original DQN paper.}
\label{results}
\centering
\begin{tabular}{lll}
\toprule
Game & Our implementation & The original implementation \\
\midrule
Pong & $19.7 \ (\pm 1.1)$ & $18.9 \ (\pm 1.3)$ \\
Breakout & $339.3 \ (\pm 86.1)$ & $401.2 \ (\pm 26.9)$ \\
Seaquest & $6309 \ (\pm 1027)$ & $5286 \ (\pm 1310)$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Key Training Techniques}
While implementing our DQN, we found there were a couple of methods that were only mentioned briefly in the DQN paper but are critical to the overall performance of the algorithm.
Here we present these methods and explain why they have such a strong impact on training the network.
\subsection{Termination on the Loss of Lives}
In most of the Atari games, there is a notion of ``lives'' for the player, which correspond to the number of times the player can fail (such as dropping the ball in Breakout or running into a shark in Seaquest) before the game is over.
To increase performance, \citet{mnih2015human} chose to count the loss of a life (in the games involving lives) as a terminal state in the MDP during training.
This termination condition was not mentioned in much detail in the DQN paper, but is essential for achieving their performance.
Figure \ref{fig:lives-vs-no-lives} illustrates the difference between training with and without counting losing lives as terminal states in both Breakout and Seaquest.
In Breakout, the average score of the learner that uses end of lives as terminal states increases much faster than the other learner.
However, around halfway through training, the other learner is achieving similar performance, but with much higher variance.
Seaquest is a much more complex game with many more moving sprites and longer episode lengths.
In this game the learner that uses lives as terminal states performs significantly better than the other learner throughout training.
These figures illustrate that this additional prior information greatly benefits early training and stability and, in the more complex games, significantly improves the overall performance.
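A minimal sketch of this training-time termination signal, assuming an ALE-style environment that exposes a \texttt{lives()} count, is shown below; only the terminal flag seen by the learner changes, while the underlying game continues as usual.
\begin{verbatim}
class LifeLossTerminalWrapper:
    """Treat losing a life as a terminal state for training targets only."""

    def __init__(self, env):
        self.env = env               # assumed to expose lives(), reset(), step()
        self.prev_lives = None

    def reset(self):
        obs = self.env.reset()
        self.prev_lives = self.env.lives()
        return obs

    def step(self, action):
        obs, reward, done = self.env.step(action)
        lives = self.env.lives()
        # a lost life looks terminal to the learner, even though the game goes on
        training_done = done or lives < self.prev_lives
        self.prev_lives = lives
        return obs, reward, training_done
\end{verbatim}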
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{lives-vs-no-lives}
\caption{Breakout}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{seaquest_lives-vs-no-lives}
\caption{Seaquest}
\end{subfigure}
\caption{The average training test score received for Breakout and Seaquest at each test set when using lost lives as terminal states and when using the end of a game as terminal states (epoch = 250,000 steps).}
\label{fig:lives-vs-no-lives}
\end{figure}
A terminal state in an MDP, as mentioned above, signifies to the agent that no more reward can be obtained.
Almost all the Atari games give positive rewards (Pong is a notable exception where a reward of $-1$ is received when the enemy scores a point), and thus, this addition essentially informs the agent that losing a life should be avoided at all costs.
This additional information given to the agent does seem reasonable: many human players know that losing a life in an Atari game is bad the first time they play, and it is difficult to imagine situations where the optimal policy would be to lose a life.
There are, however, a few theoretical issues with enforcing this
constraint. The first is that the process is no longer Markovian,
as the initial state distribution depends on the current policy. An
example of this is in Breakout: if the agent performed well and broke many bricks before
losing a life, the new initial state for the next life will have many
fewer bricks remaining than if the agent performed poorly and broke very few bricks in the
previous life. The other issue is that this signal gives strong
additional information to the DQN, making it challenging to extend to
domains where such strong signals are not available (e.g., real-world
robotics or more open-ended video games).
Although ALE stores the number of lives remaining for each game, it does not provide this information to all the interfaces.
To work around this limitation, we modified ALE's FIFO Interface to provide the number of lives remaining along with the screen, reward, and terminal state boolean.
Our fork that provides this data to the FIFO interface is available freely online.\footnote{\url{www.github.com/h2r/arcade-learning-environment}}
\subsection{Gradient Descent Optimization} \label{RMS}
One potential issue in using the hyper-parameters provided by \citet{mnih2015human} is that they are not using the same RMSProp definition that many deep learning libraries (such as Caffe) provide.
The RMSProp gradient descent optimization algorithm was originally proposed by Geoffrey Hinton.\footnote{\url{www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf}}
Hinton's RMSProp keeps a running average of the gradient with respect to each parameter.
The update rule for this running average can be written as:
\begin{equation}
MeanSquare(w, t) = \gamma \cdot MeanSquare(w, t-1) + (1 - \gamma) \cdot (\frac{\partial E}{\partial w}(t))^2
\end{equation}
Here, $w$ corresponds to a single network parameter, $\gamma$ is the decay factor, and $E$ is the loss.
The parameters are then updated by:
\begin{equation}
w_t = w_{t-1} - \frac{\alpha}{\sqrt{MeanSquare(w, t) + \varepsilon}} \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
where $\alpha$ corresponds to the learning rate, and $\varepsilon$ is a small constant to avoid division by $0$.
Although \citet{mnih2015human} cite Hinton's RMSProp, they use a
slight variation on the algorithm. The implementation of this can be
seen in their GitHub
repository\footnote{\url{www.github.com/kuz/DeepMind-Atari-Deep-Q-Learner}}
in the NeuralQLearner.lua file on lines 266-273. This variation adds
a momentum factor to the RMSProp algorithm that is updated as follows:
\begin{equation}
Momentum(w, t) = \eta \cdot Momentum(w, t-1) + (1 - \eta) \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
Here, $\eta$ is the momentum decay factor.
The parameter update rule is then modified to:
\begin{equation}
w_t = w_{t-1} - \frac{\alpha}{\sqrt{MeanSquare(w, t) - (Momentum(w, t))^2 + \varepsilon}} \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
To account for this change in optimization algorithm, we had to set the learning rate to something much lower than that of the \citet{mnih2015human} implementation (we used $0.00005$ as opposed to their $0.00025$).
We chose not to implement this variant of RMSProp, as it was not trivial to implement with the Java-Caffe bindings and Hinton's version produced similar results.
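For concreteness, the two update rules can be sketched as follows; the NumPy arrays, the state initialization, and the hyperparameter defaults are assumptions for illustration and mirror the equations above rather than the exact Lua code.
\begin{verbatim}
import numpy as np

# state = {"mean_square": np.zeros_like(w), "momentum": np.zeros_like(w)}

def rmsprop_step(w, grad, state, lr=5e-5, gamma=0.95, eps=1e-8):
    """Hinton's RMSProp: divide the gradient by a running RMS of recent gradients."""
    state["mean_square"] = gamma * state["mean_square"] + (1 - gamma) * grad ** 2
    return w - lr * grad / np.sqrt(state["mean_square"] + eps)

def rmsprop_momentum_step(w, grad, state, lr=2.5e-4, gamma=0.95, eta=0.95, eps=1e-8):
    """DeepMind's variant: also track a running mean of the gradient and subtract
    its square inside the square root, normalizing by an estimate of the
    gradient's variance rather than its raw second moment."""
    state["mean_square"] = gamma * state["mean_square"] + (1 - gamma) * grad ** 2
    state["momentum"] = eta * state["momentum"] + (1 - eta) * grad
    denom = np.sqrt(state["mean_square"] - state["momentum"] ** 2 + eps)
    return w - lr * grad / denom
\end{verbatim}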
\section{Speed Performance}
Our implementation trains a bit less than 4x faster than the original implementation, which is written in Lua using Torch.
The setup on which we tested these implementations uses two NVIDIA GTX 980 Ti graphics cards along with an Intel i7 processor.
Our implementation runs at around $985$ Atari frames per second (fps) during training and $1584$fps during testing, while the Lua implementation runs at $271$fps during training and $586$fps during testing on our hardware (note that the algorithm only looks at every 4th frame, so only one fourth of this number of frames is processed by the algorithm per second).
We attribute a large portion of this performance increase to cuDNN.
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a proprietary NVIDIA library for running forward and backward passes on common neural network layers optimized specifically for the NVIDIA GPUs.
For both Torch and Caffe, cuDNN is available, but not used by default.
We compiled Caffe using cuDNN for our experiments, while the Lua implementation did not use this feature in Torch.
For comparison, when using Caffe without cuDNN, our implementation runs at around $268$fps during training and $485$fps during testing, which is a bit slower than the Lua implementation.
Another area that significantly increased the speed performance of our implementation was preallocating memory before training, which was also done in the original DQN implementation.
Allocating large amounts of memory is an expensive operation, so preallocating memory for large vectors, such as the experience memory and mini-batch data, and reusing it at each iteration significantly decreases this overhead.
\section{Conclusion}
In this paper we have presented a few key areas in the implementation of the DQN algorithm proposed by \citet{mnih2015human} that are essential to its overall performance but were not covered in great detail in the original paper, with the goal of making it easier for researchers to implement their own versions of this algorithm.
We also highlighted some of the difficulties in approximating a Q-function with a CNN in such large state-spaces, namely catastrophic forgetting.
Our implementation is freely available online\footnote{\url{www.github.com/h2r/burlap_caffe}}, and we encourage researchers to use it as a tool for implementing novel algorithms as well as for comparing performance with that of \citet{mnih2015human}.
\subsubsection*{Acknowledgments}
This material is based upon work supported by the National Science Foundation under grant numbers IIS-1637614 and IIS-1426452, and DARPA under grant numbers W911NF-10-2-0016 and D15AP00102.
\medskip
{
\bibliographystyle{plainnat}
\section{Experiments} \label{sec:exp}
We conduct experiments on various kinds of tasks to demonstrate the effectiveness of RGP. We first examine the utility of models trained by RGP. To this end, we apply RGP on the wide ResNet \citep{zagoruyko2016wide} and the BERT \citep{devlin2018bert} models, which are representative models for computer vision and natural language modeling. The results are presented in Section~\ref{subsec:exp_resnet} and~\ref{subsec:exp_bert}. The source code of our implementation is publicly available\footnote{\url{https://github.com/dayu11/Differentially-Private-Deep-Learning}}.
\begin{table}
\small
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Validation accuracy (in \%) of WRN28-4 on vision tasks.} \label{tbl:tbl_resnet}
\begin{tabular}{l|l|l}
\hline
\hline
Method & SVHN & CIFAR10 \\
\hline
Full (N.P.) & 97.2 & 93.3 \\\cline{1-3}
Linear (N.P.) & 41.1 & 39.8 \\\cline{1-3}
RGP (N.P.) & 97.1 & 91.2 \\\cline{1-3}
PowerSGD (N.P.) & 97.1 & 91.9 \\\cline{1-3}
DP-SGD ($\epsilon=8$) & 91.6 & 55.9 \\\cline{1-3}
DP-PowerSGD ($\epsilon=8$) & 91.9 & 57.1 \\\cline{1-3}
RGP-random ($\epsilon=8$) & 91.7 & 51.0 \\\cline{1-3}
RGP ($\epsilon=8$)& 94.2 & 63.4 \\\hline \hline
\end{tabular}
\end{table}
\begin{table}
\small
\centering
\caption{Validation accuracy (in \%) of RGP on vision tasks with varying $\epsilon$. The model architecture is WRN28-4. Numbers in brackets denote the improvements compared to DP-SGD. } \label{tbl:vision_vary_eps}
\begin{tabular}{l|l|l|l}
\hline
\hline
Dataset & $\epsilon=2$ & $\epsilon=4$ & $\epsilon=6$ \\\hline
SVHN & 87.3 (+4.1) & 89.7 (+3.4) & 92.3 (+3.9) \\\hline
CIFAR10 & 44.0 (+6.6) & 53.3 (+6.4) & 59.6 (+7.9) \\\hline \hline
\end{tabular}
\end{table}
Moreover, we empirically evaluate the privacy risk of the models via the success rate of the \emph{membership inference (MI) attack} \citep{shokri2017membership,sablayrolles2019white,yu2021how}. The results are presented in Section~\ref{subsec:exp_mi}.
\textbf{Implementation.} The number of iterations for the power method is $1$. We use an open-source tool implementing the moments accountant to compute the privacy loss\footnote{\url{https://github.com/tensorflow/privacy}}. For a given setting of hyperparameters, we set $\sigma$ to be the smallest value such that the privacy budget allows running the desired number of epochs. All experiments are run on a node with four Tesla V100 GPUs.
\textbf{Baselines.} We implement several baseline algorithms for comparison. For differentially private learning, the first baseline is \emph{DP-SGD} in \citet{abadi2016deep} and the second one is RGP with gradient carriers consisting of random orthonormal vectors, referred to as \emph{RGP-random}. We also include several non-private baselines, i.e., \textbf{(\romannumeral 1)} \emph{Full (N.P.)}: training the full model, \textbf{(\romannumeral 2)} \emph{Linear (N.P.)}: training only the linear classification layer, \textbf{(\romannumeral 3)} \emph{RGP (N.P.)}: training the model with reparametrization but without gradient clipping or adding noise.
We consider differentially private \emph{PowerSGD} \citep{vogels2019powersgd} as another baseline for vision tasks. PowerSGD approximates full gradients with low-rank matrices to reduce the communication cost. It first aggregates the individual gradients and then runs power iterations to find approximations of the principal components of the averaged gradient. Hence, for DP-PowerSGD, it is necessary to first perturb the aggregated gradient and then project it into a low-rank subspace; otherwise the sensitivity is hard to track after projection. As a consequence, DP-PowerSGD needs to compute the individual gradients explicitly, which incurs a huge memory cost, as DP-SGD does. In Section~\ref{subsec:exp_resnet}, we add a DP-PowerSGD baseline with the same setting as that of RGP.
Additionally, some ablation experiments are conducted to study the influence of the residual weight and reparametrization ranks, which are relegated to the Appendix~\ref{app:sec:add-exp}.
\subsection{Experiments on Vision Tasks}\label{subsec:exp_resnet}
\textbf{Model.} We use wide ResNet models \citep{zagoruyko2016wide} for the vision tasks. The architecture is WRN28-4 with $\sim$1.5M parameters. All batch normalization layers are replaced with group normalization layers to accommodate private learning.
\textbf{Tasks.} We use two vision datasets: SVHN \citep{netzer2011reading} and CIFAR10 \citep{cifar}. SVHN contains images of $10$ digits and CIFAR10 contains images of 10 classes of real-world objects.
\textbf{Hyperparameters.} We follow the hyperparameters in \citet{zagoruyko2016wide} except for using a mini-batch size of 1000. This mini-batch size is larger than the default because the averaging effect of a large mini-batch reduces the noise variance. The reparametrization rank $r$ is chosen from $\{1, 2, 4, 8, 16\}$. We choose the privacy parameter $\delta<\frac{1}{n}$, and set $\delta=10^{-6}$ for SVHN and $\delta=10^{-5}$ for CIFAR10. We repeat each experiment 3 times and report the average.
\textbf{Results.} The prediction accuracy with $\epsilon=8$ is presented in Table~\ref{tbl:tbl_resnet}. We can see that RGP (N.P.) achieves performance comparable to training the full model (N.P.). When trained with DP, RGP outperforms DP-SGD by a considerable margin while enjoying a much lower memory cost. We also compare RGP with DP-SGD using different privacy budgets ($\epsilon=2/4/6$) and report the results in Table~\ref{tbl:vision_vary_eps}.
\subsection{Experiments on the Downstream Tasks of BERT}\label{subsec:exp_bert}
\textbf{Model.} We use the BERT\textsubscript{BASE} model in \citet{devlin2018bert}, which is pre-trained on a massive corpus collected from the Web. The BERT\textsubscript{BASE} model has $\sim$110M parameters.
\textbf{Tasks.} We use four tasks from the General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue}, including MNLI, QQP, QNLI, and SST-2. The other tasks from GLUE are excluded because their datasets are of small size (<10K) while differentially private learning requires a large amount of data \citep{tramer2021differentially}.
\textbf{Hyperparameters.} We follow the hyperparameters in \citet{devlin2018bert}
except for the mini-batch size and training epochs. The reparametrization rank $r$ is chosen from $\{1, 2, 4, 8\}$. The mini-batch size is 500 for SST-2/QNLI and 1000 for QQP/MNLI. To construct an update with desired mini-batch size, we accumulate the gradients of multiple micro-batches. We choose $\delta = 10^{-5}$ for QNLI/SST-2 and $\delta =10^{-6}$ for QQP/MNLI. The privacy parameter $\epsilon$ is chosen from $\{1, 2, 4, 6, 8\}$. The number of training epochs is 50 for $\epsilon>2$ and $20$ for $\epsilon\leq 2$. We run all experiments 5 times with different random seeds and report the average.
\textbf{Results.} The prediction accuracy of RGP and other baselines is presented in Table~\ref{tbl:tbl_bert}. The results with varying DP parameter $\epsilon$ are plotted in Figure~\ref{fig:fig_bert}. When trained without a privacy guarantee, RGP (N.P.) achieves test accuracy comparable with fine-tuning the full model. When trained with differential privacy, RGP achieves the best performance. Its accuracy loss compared to non-private baselines is within $5\%$. The performance of RGP-random is worse than that of RGP because the random subspace does not capture gradient information as effectively as the subspace of historical updates. DP-SGD achieves the worst performance because high-dimensional noise overwhelms the useful signal in gradients. We note that DP-SGD also runs the slowest because it needs to compute and store 110M floating-point numbers for each individual gradient.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{imgs/bert_varying_eps.pdf}
\caption{Prediction accuracy of BERT on downstream tasks with varying $\epsilon$. For MNLI, we plot the average score of two test datasets. }
\label{fig:fig_bert}
\end{figure*}
\begin{table}
\small
\centering
\caption{Prediction accuracy of BERT on downstream tasks (in \%). For DP-SGD, RGP, and RGP-random, the same $\epsilon=8$ is used.}
\begin{tabular}{l|l|l|l|l|l}
\hline
\hline
Method & MNLI & QQP & QNLI & SST-2 & Avg. \\
\hline
Full (N.P.) & 84.8/83.7 & 90.2 & 91.6 & 93.4 & 88.7 \\\cline{1-6}
Linear (N.P.) & 51.9/50.8 & 73.2 & 63.0 & 82.1 & 64.2 \\\cline{1-6}
RGP (N.P.) & 83.6/83.2 & 89.3 & 91.3 & 92.9 & 88.1 \\\cline{1-6}
DP-SGD\tablefootnote{As shown in \citet{li2021large}, DP-SGD performs better when large batchsizes and full precision are used.} & 54.6/53.4 & 74.5 & 63.6 & 82.3 & 65.7 \\\cline{1-6}
RGP-random & 74.6/73.3 & 81.7 & 82.1 & 87.8 & 79.9 \\\cline{1-6}
RGP\tablefootnote{The performance of RGP is also better in the above setup. More details are in \url{https://github.com/dayu11/Differentially-Private-Deep-Learning}. } & 79.1/78.0 & 84.8 & 86.2 & 91.5 & 83.9
\\\hline \hline
\end{tabular}
\label{tbl:tbl_bert}
\end{table}
\subsection{Defense Against Membership Inference Attack}\label{subsec:exp_mi}
\textbf{Setup.} We use the membership inference (MI) attack to empirically evaluate the privacy risk of models trained with/without RGP. Following the membership decision in \citet{sablayrolles2019white}, we predict that a sample is from the training data if its loss value is smaller than a chosen threshold. To evaluate the MI success rate, we construct an \emph{MI dataset}, which consists of the same number of training and test samples. Specifically, the MI dataset contains the whole test set and a random subset of the training set. We further divide the MI dataset evenly into two subsets. One is used to find the optimal loss threshold and the other one is used to evaluate the final attack success rate.
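A minimal sketch of this loss-threshold membership decision is given below; the per-sample loss arrays and the random split are assumptions of the sketch that mirror the setup just described.
\begin{verbatim}
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses):
    """Predict 'member' when the per-sample loss is below a threshold chosen on
    one half of the MI dataset; report the success rate on the other half."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))]).astype(bool)
    idx = np.random.permutation(len(losses))
    half = len(losses) // 2
    sel, ev = idx[:half], idx[half:]
    # pick the threshold that maximizes accuracy on the selection half
    candidates = np.unique(losses[sel])
    accs = [np.mean((losses[sel] < t) == labels[sel]) for t in candidates]
    best_t = candidates[int(np.argmax(accs))]
    # attack success rate on the held-out evaluation half
    return np.mean((losses[ev] < best_t) == labels[ev])
\end{verbatim}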
\textbf{Results.}
The MI success rates are presented in Table~\ref{tbl:mi_bert}. For the MNLI, QQP, QNLI, and SST-2 datasets, we conduct MI attacks on fine-tuned BERT\textsubscript{BASE} models. For the SVHN and CIFAR10 datasets, we conduct MI attacks on trained WRN28-4 models. The MI attack on the models trained with RGP ($\epsilon=8$) is no better than random guessing ($50\%$ success rate), which empirically demonstrates the effectiveness of RGP in protecting privacy. Moreover, interestingly, the models trained with low-rank reparametrization alone also achieve a much lower MI success rate than the fully trained models, which indicates the benefit of low-rank reparametrization in terms of privacy protection.
\section*{Acknowledgements}
Jian Yin is supported by NSFC (U1711262, U1711261, U1811264, U1811261, U1911203, U2001211), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R\&D Program of Guangdong Province (2018B010107005).
\newpage
\section{Introduction}
A recent line of work \citep{shokri2017membership,carlini2019secret,carlini2020extracting} has exposed the potential privacy risks of trained models, e.g., data extraction from language models. Theoretically, learning with \emph{differential privacy} \citep{dwork2006calibrating} is guaranteed to prevent such information leakage because differential privacy imposes an upper bound on the influence of any individual sample. Empirically, differential privacy also makes learning more resistant to attacks \citep{rahman2018membership,bernau2019assessing, zhu2019deep, carlini2019secret, ma2019data,lecuyer2019certified}.
To learn with differential privacy, many algorithms have been proposed under different settings over the past decade, e.g., \citet{chaudhuri2009privacy,song2013stochastic,agarwal2018cpsgd,wang02019differentially,wang2019differentially,yu2020gradient,phan2020scalable,vietri2020private}, to name a few. Among them, \emph{gradient perturbation} is a popular choice because of its simplicity and wide applicability \cite{abadi2016deep}. In terms of simplicity, gradient perturbation only makes two simple modifications to the standard learning process. It first clips the gradients of individual samples, referred to as individual gradients, to bound the sensitivity and then perturbs the aggregated gradient with random noise. In terms of wide applicability, it does not assume the objective to be convex and hence applies to deep neural networks.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{imgs/repara.pdf}
\caption{The proposed reparametrization scheme. The residual weight makes the reparametrized output the same as the normal output and keeps $\partial{\bm{L}}$ and $\partial{\bm{R}}$ naturally connected with the normal gradient. }
\label{fig:repara}
\end{figure}
Despite its advantages, there are two challenges when applying gradient perturbation to cutting-edge deep models. First, one needs to compute and store individual gradients. Recent works \citep{dangel2019backpack,Opacus} have developed toolkits to compute individual gradients for a mini-batch of data through a single forward/backward pass, but storing individual gradients consumes a huge amount of memory as each individual gradient requires the same amount of memory as the model itself. Second, both theoretical and empirical utilities of gradient perturbation suffer from bad dependence on the model size \citep{bassily2014differentially, papernot2020tempered,tramer2021differentially} because the intensity of the added noise scales proportionally with the model size.
To tackle these challenges, we reparameterize each weight matrix ${\bm{W}}$ of a deep neural network with a pair of low-rank \emph{gradient carriers} $\{{\bm{L}},{\bm{R}}\}$ and a \emph{residual weight} $\tilde{{\bm{W}}}$, as illustrated in Figure~\ref{fig:repara}. With this reparametrization, the forward signal and the backward signal propagate the same as before. We show that the gradients on ${\bm{L}}$ and ${\bm{R}}$ are naturally connected with the gradient on ${\bm{W}}$. Especially if the gradient carriers consist of orthonormal vectors, we can construct a projection of the gradient of ${\bm{W}}$ from the gradients of ${\bm{L}}$ and ${\bm{R}}$ that are of low dimension. In other words, we can compute the projection of the gradient without computing the gradient itself. This property could save a huge amount of memory in DP-SGD where a large batch of individual gradients are computed and stored. We note that this could be also useful in other problems involving statistics of individual gradients, e.g. computing the gradient variance \citep{zhao2015stochastic,balles2016coupling,mahsereci2017probabilistic,balles2018dissecting}, which is out of our scope.
Based on the above framework, we propose \emph{reparametrized gradient perturbation (RGP)} for differentially private learning. Specifically, after the backward process, RGP clips and perturbs the gradients of ${\bm{L}}$ and ${\bm{R}}$, which gives a certain level of privacy guarantee. Then RGP uses the noisy gradients to construct an update for the original weight. We note that because the gradient-carrier matrices are of much smaller dimension than the original weight matrix, the total intensity of the added noises is significantly smaller, which helps us break the notorious dimensional dependence of the utility of differentially private learning.
The key to the reparametrization scheme is how well the gradient projection approximates the original gradient. We argue that the approximation is good if 1) the original gradient of ${\bm{W}}$ itself is indeed low-rank and 2) its principal subspace aligns with ${\bm{L}}$ and ${\bm{R}}$. The first condition is empirically verified by showing that the gradient of each layer is of low stable rank when training deep neural networks, which has also been exploited for gradient compression in distributed optimization \citep{vogels2019powersgd}. The second condition is guaranteed if ${\bm{L}}$ and ${\bm{R}}$ consist of the principal singular vectors of the original gradient, which, however, would violate differential privacy. Instead, in RGP, we approximately compute a few of the principal vectors of the historical updates, which are already published and free to use because of the post-processing property of differential privacy, and use them as gradient carriers. We theoretically prove the optimality of this historical-update substitution for linear regression and empirically verify its efficacy for deep neural networks.
With RGP, we can easily train large models with differential privacy and achieve good utility on both vision and language modeling tasks. For example, we use RGP to train the BERT model \citep{devlin2018bert} on downstream language understanding tasks. We establish a rigorous differential privacy guarantee for such a large model with a modest drop in accuracy. With a privacy budget $\epsilon=8$, we achieve an average accuracy of $83.9\%$ on downstream tasks, which is within $5\%$ loss compared to the non-private baseline. We also use the \emph{membership inference attack} \citep{shokri2017membership,sablayrolles2019white} to evaluate the empirical privacy risks and demonstrate that the models trained with RGP are significantly more robust to membership inference attacks than the non-private ones.
Overall, our contribution can be summarized as follows.
\begin{enumerate}[itemsep=0mm]
\item We propose reparametrized gradient perturbation (RGP) that reduces the memory cost and improves the utility when applying DP on large models.
\item We give a detailed analysis of the properties of RGP. We propose using the historical updates to find the principal subspace and give theoretical arguments.
\item Empirically we are able to efficiently train BERT with differential privacy on downstream tasks, and achieve both good accuracy and privacy protection.
\end{enumerate}
\subsection{Notations}
We introduce some basic notations. Vectors and matrices are denoted with bold lowercase letters, e.g., ${\bm{v}}$, and bold capital letters, e.g., ${\bm{M}}$, respectively. Sets are denoted with double-struck capital letters, e.g., ${\mathbb{S}}$. We use $[n]$ to denote the set of positive numbers $\{1,...,n\}$. Some preliminaries on differential privacy are presented in Appendix \ref{app:sec:preliminary}.
\section{Two Properties of the Gradient Matrix} \label{sec:grad_property}
We show two properties of the gradients of modern deep neural networks to justify the design choices of Algorithm~\ref{alg:dp_lrk_repara}. The first property is that the gradient of each weight matrix is naturally low-rank, which motivates us to use low-rank reparameterization. The second property is that the gradient of a weight matrix along the optimization path could stay in the same subspace, which motivates us to use the historical updates to generate the gradient-carrier matrices.
\subsection{Gradient Matrix Is of Low Stable Rank}
\label{subsec:grad_is_lrk}
Recent works have used the low-rank approximation to compress the gradients and reduce the communication cost in distributed optimization \citep{yurtsever2017sketchy, wang2018atomo, karimireddy2019error, vogels2019powersgd}. These existing works set up a good motivation to exploit the low stable rank property of the gradients of weight matrices.
We further verify this low-rank property which may give a hint about how to set the reparameterization rank $r$ in practice. We empirically compute the stable rank ($\|\cdot\|_F^2/\|\cdot\|^2_{2}$) of the gradient of the weight matrices in a BERT model and a wide ResNet model. The dataset for the BERT model is SST-2 from the GLUE benchmark \citep{wang2018glue}. The dataset for the wide ResNet model is CIFAR-10 \cite{cifar}. The experimental setup can be found in Section~\ref{sec:exp}. We plot the gradient stable rank in Figure~\ref{fig:stbl_rank}.
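The stable rank reported in Figure~\ref{fig:stbl_rank} can be computed directly from a gradient matrix; a short sketch (the gradient tensor is an assumed input) is:
\begin{verbatim}
import torch

def stable_rank(grad_matrix: torch.Tensor) -> float:
    """Stable rank ||G||_F^2 / ||G||_2^2 of a gradient matrix."""
    fro_sq = grad_matrix.pow(2).sum()
    spectral = torch.linalg.svdvals(grad_matrix)[0]   # largest singular value
    return (fro_sq / spectral.pow(2)).item()
\end{verbatim}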
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{imgs/grad_stable_rank.pdf}
\caption{Gradient stable rank ($\|\cdot\|_F^2/\|\cdot\|^2_{2}$). For ResNet, we plot the gradient rank of the classification layer and the first residual block. For BERT, we plot the gradient rank of the first fully-connected block and the first attention block.}
\label{fig:stbl_rank}
\end{figure}
As shown in Figure~\ref{fig:stbl_rank}, both the gradients of BERT and ResNet models are naturally of low stable rank over the training process. Hence, low-rank gradient-carrier matrices would have a small approximation error if we find the right gradient subspace. In Section \ref{subsec:historical_grad}, we argue that historical update is a good choice to identify the gradient subspace.
\subsection{Historical Gradients Are Correlated}
\label{subsec:historical_grad}
Suppose that ${\bm{W}}_t$ is a weight matrix at step $t$, and $\partial {\bm{W}}_t$ is the gradient with a batch of data ${\mathbb{D}}$, with an $r$-SVD $\partial {\bm{W}}_t = {\bm{U}}_t \Sigma_t {\bm{V}}_t^T$. For another step ${t'}$ with $t'>t$ and the same data ${\mathbb{D}}$, we have ${\bm{W}}_{t'}, \partial {\bm{W}}_{t'}$ and an $r$-SVD $\partial {\bm{W}}_{t'} = {\bm{U}}_{t'} \Sigma_{t'} {\bm{V}}_{t'}^T$. We can project $\partial {\bm{W}}_{t'}$ onto the principal subspace of $\partial {\bm{W}}_t$ or $\partial {\bm{W}}_{t'}$ and measure the projection residual
\begin{flalign}
&\|({\bm{I}} - {\bm{U}}_t{\bm{U}}_t^T)\partial {\bm{W}}_{t'}({\bm{I}}-{\bm{V}}_t{\bm{V}}_t^T)\|_F/\|\partial {\bm{W}}_{t'}\|_F,\label{eq:proj_res}
\\
&\|({\bm{I}} - {\bm{U}}_{t'}{\bm{U}}_{t'}^T)\partial {\bm{W}}_{t'}({\bm{I}}-{\bm{V}}_{t'}{\bm{V}}_{t'}^T)\|_F/\|\partial {\bm{W}}_{t'}\|_F,\label{eq:self_proj_res}
\end{flalign}
where Eq~(\ref{eq:proj_res}) is the projection residual using historical gradient, referred to as \emph{historical projection residual}, and Eq~(\ref{eq:self_proj_res}) is the projection residual using current gradient, referred to as \emph{self projection residual}. A small difference between Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}) indicates that the principal subspace of the current gradient aligns with that of the historical gradient.
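Both residuals can be computed from a rank-$r$ SVD of the reference gradient; a short sketch (tensor inputs assumed, with \texttt{grad\_ref = grad\_cur} giving the self projection residual) is:
\begin{verbatim}
import torch

def projection_residual(grad_ref, grad_cur, r):
    """Project grad_cur off the rank-r row/column subspaces of grad_ref and
    return the relative Frobenius norm of what remains."""
    U, _, Vh = torch.linalg.svd(grad_ref, full_matrices=False)
    U_r, V_r = U[:, :r], Vh[:r, :].T
    residual = grad_cur - U_r @ (U_r.T @ grad_cur)     # remove column-space component
    residual = residual - (residual @ V_r) @ V_r.T     # remove row-space component
    return (residual.norm() / grad_cur.norm()).item()
\end{verbatim}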
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{imgs/proj_residual.pdf}
\caption{Projection residual with reparametrization rank $8$. We use a fixed mini-batch with $500$ samples. For ResNet, we use the input convolution layer. For BERT, we use the second matrix of the FC layer in the first encoder block. The definition of historical/self projection residual is in Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}). }
\label{fig:proj_res}
\end{figure}
We empirically examine the projection residual of a BERT model and a wide ResNet model. The tasks are the same as in Section~\ref{subsec:grad_is_lrk}. At the beginning of each epoch, we evaluate the projection residual between the current gradient and the gradient of the previous epoch. The results are plotted in Figure~\ref{fig:proj_res}. We can see that the difference between Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}) is small for both models.
To understand why historical gradients are correlated, we next use a linear regression problem to rigorously show that the gradients over time could live in the same subspace. Suppose we have a set of observations $\{({\bm{x}}_i, {\bm{y}}_i)\}_{i=1}^n$, where ${\bm{x}}_i \in {\mathbb{R}}^d$ is the feature vector and ${\bm{y}}_i\in {\mathbb{R}}^{p}$ is the target vector for all $i \in [n]$. The least-squares problem is given by
\begin{flalign}
\argmin_{\bm{W}} \frac{1}{n}\sum_{i=1}^n \|{\bm{y}}_i - {\bm{W}} {\bm{x}}_i\|^2. \label{eq:least-squares}
\end{flalign}
\begin{restatable}{proposition}{gradalign}\label{prop:grad-align}
For the least squares problem (\ref{eq:least-squares}), if the model is updated by gradient descent with step size $\eta$
\begin{flalign}
{\bm{W}}_{t+1} \leftarrow {\bm{W}}_t - \eta \cdot \partial{\bm{W}}_t, \label{eq:gd}
\end{flalign}
then the gradients $\{\partial {\bm{W}}_t\}_{t\ge 1}$ share the same range and null space. That is to say, if $\partial {\bm{W}}_1$ is rank $r$ and has $r$-SVD $\partial {\bm{W}}_1 = {\bm{U}}_1 \Sigma_1 {\bm{V}}_1^T$, then for all $t\ge 1$, we have
\begin{flalign}
({\bm{I}} - {\bm{U}}_1{\bm{U}}_1^T) \partial {\bm{W}}_t = 0,\; \partial {\bm{W}}_t ({\bm{I}} - {\bm{V}}_1{\bm{V}}_1^T)= 0.
\end{flalign}
\end{restatable}
\begin{proof}
The proof is relegated to Appendix \ref{apd:subsec:proof_sec4}.
\end{proof}
Hence we can use the historical updates ${\bm{W}}_t-{\bm{W}}_0$ to identify gradient row/column subspaces as in Algorithm \ref{alg:dp_lrk_repara}.
This indicates that for a weight matrix ${\bm{W}}\in {\mathbb{R}}^{p \times d}$, if the gradient turns out to have low rank $r$ due to the data $\{{\bm{x}}_i, {\bm{y}}_i\}$, we can first identify the intrinsic subspace, which has dimension $r(p+d)$ instead of the original $p\cdot d$ parameters. Then we can work within this subspace for differentially private empirical risk minimization. This both reduces the effect of noise and saves the memory cost of gradient perturbation due to the small intrinsic dimension.
We note that identifying the low-rank subspace can be done approximately as in the algorithm, or by using some auxiliary public data as in \citet{zhou2021bypassing, yu2021do}.
\begin{remark}\label{rem:lst-sqr}
Suppose that the least-squares objective $L({\bm{W}}):=\frac{1}{n}\sum_{i=1}^n \|{\bm{y}}_i - {\bm{W}} {\bm{x}}_i\|^2$ is $\beta$-smooth and the gradient subspace is rank $r$ and can be exactly identified. Let the optimizer of RGP be gradient descent and $\sigma$ be set as in Proposition~\ref{prop:privacy}. If $\eta=\frac{1}{\beta}$, $T=\frac{n\beta\epsilon}{\sqrt{p}}$, and $\bar{{\bm{W}}}=\frac{1}{T}\sum_{t=1}^{T}{\bm{W}}_t$, then
\[\mathbb{E}[L(\bar{{\bm{W}}})]-L({\bm{W}}_*)\leq \mathcal{O}\left(\frac{\sqrt{(p+d)r\log(1/\delta)}}{n\epsilon}\right),\]
where ${\bm{W}}_*$ is the optimal point, ${\bm{W}}_{t}$ is the output of Algorithm~\ref{alg:dp_lrk_repara} at step $t$.
\end{remark}
The proof of Remark \ref{rem:lst-sqr} can be adapted from \cite{yu2020gradient}. Although the exact low-rank property of the gradient cannot be rigorously proved for deep neural networks because of the co-adaptation across layers, we have empirically verified that the gradient matrices are still of low stable rank and stay in roughly the same subspace over iterations (see Figure \ref{fig:stbl_rank} \& \ref{fig:proj_res}). Our algorithm exploits this fact to reparameterize weight matrices, which achieves better utility and reduces the memory cost compared with DP-SGD.
\section{Related Work}
\begin{table}
\renewcommand{\arraystretch}{1.3}
\centering
\caption{Success rates of membership inference attack against fine-tuned BERT models (in \%). The closer to 50, the better.} \label{tbl:mi_bert}
\begin{adjustbox}{max width=0.45\textwidth}
\begin{tabular}{l|l|l|l|l|l|l}
\hline
\hline
Method & MNLI & QQP & QNLI & SST-2 & SVHN & CIFAR10 \\
\hline
Full (N.P.) & 60.3 & 56.1 & 55.8 & 57.7 & 56.4 & 58.1 \\\cline{1-7}
RGP (N.P.) & 52.3 & 51.5 & 51.8 & 52.6 & 52.8 & 53.3 \\\cline{1-7}
RGP ($\epsilon=8$) & 49.9 & 50.0 & 50.4 & 50.1 & 50.1 & 50.3 \\
\hline \hline
\end{tabular}
\end{adjustbox}
\end{table}
Differentially private learning has a poor dimensional dependency, i.e., the utility degrades dramatically when the model dimension gets large. In the high-dimensional setting, related works usually assume a sparse structure \citep{thakurta2013differentially, talwar2015nearly, wang2019sparse, wang2019differentially, cai2019cost} or a specific problem structure \cite{chen2020locally,zheng2020locally}. However, these assumptions or specific structures do not hold for the gradients of deep neural networks. Here we emphasize the difference from our low-rank assumption. For the sparsity assumption, the bases are canonical and not private, while for the low-rank assumption, the gradient is ``sparse'' under certain bases but the bases are unknown and private. Hence the previous algorithms for sparsity cannot apply here.
Very recently, several works \citep{zhou2020bypassing,kairouz2020dimension, yu2021do} exploit the redundancy of the gradients of samples and suggest projecting the gradients into a low-dimensional subspace that is identified by some public data points or historical gradients, in order to reduce the noise effect when training large models. However, they all require storing and clipping whole individual gradients and hence can hardly be used to train extremely large models. Our work is orthogonal to theirs, i.e., we exploit the low-rank property of the gradient of each weight matrix, which truly breaks the barrier of applying DP to large models.
Another recent approach to training non-convex models with differential privacy is based on the knowledge transfer of machine learning models, namely \emph{Private Aggregation of Teacher Ensembles (PATE)} \citep{papernot2016semi, papernot2018scalable, jordon2019pate}. PATE first trains independent teacher models on disjoint shards of private data and then tunes a student model with privacy by distilling noisy predictions of the teacher models on some public samples; its performance suffers from the data splitting \cite{yu2021do}. It is not clear how to apply PATE to train large language models like BERT. In contrast, our algorithms do not require public data and can be used in different settings with little change.
The phenomenon that the gradients of deep models live on a very low dimensional manifold has been widely observed \citep{gur2018gradient, vogels2019powersgd, gooneratne2020low, li2020hessian, martin2018implicit, li2018algorithmic}. People have also used this fact to compress the gradient with low-rank approximation in the distributed optimization scenario \citep{yurtsever2017sketchy, wang2018atomo, karimireddy2019error, vogels2019powersgd}.
\section{Conclusion}
In this paper, we present reparametrized gradient perturbation (RGP) for applying DP to large models. The key design of RGP exploits two properties of the gradients in deep neural networks: 1) the gradient of each weight matrix is of low stable rank, and 2) the principal components of historical gradients align well with those of the current gradient. We justify these design choices with both theoretical and empirical evidence. Thanks to RGP, we are able to train BERT on several downstream tasks with a DP guarantee and achieve a small accuracy loss.
\vspace{-1mm}
\section{A Reparametrization Scheme}\label{sec:lrk}
In this section, we introduce a reparametrization scheme for the neural network weight matrices so that computing and storing individual gradients are affordable for large models. Specifically, during each forward/backward process, for a layer with weight matrix ${\bm{W}}\in {\mathbb{R}}^{p\times d}$, we reparametrize it as follows (see Figure~\ref{fig:repara} for an illustration),
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}} + \tilde{{\bm{W}}}.{stop\_gradient()}, \label{eq:repara}
\end{flalign}
where ${\bm{L}}\in{\mathbb{R}}^{p\times r}, {\bm{R}}\in{\mathbb{R}}^{r\times d}$ are two low-rank gradient carriers with $r\ll p \text{ or } d$, $\tilde{{\bm{W}}} = {\bm{W}}-{\bm{L}}{\bm{R}}$ represents the residual weight and $.{stop\_gradient()}$ means that we do not collect the gradient on $\tilde{{\bm{W}}}$.
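In a PyTorch-style module, this reparametrization can be sketched as follows; the layer shapes, the random initialization of the carriers, and the use of \texttt{detach()} as the stop-gradient operation are assumptions of the sketch, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ReparametrizedLinear(nn.Module):
    """Forward computes (L R + W_residual) x, where only L and R receive gradients."""

    def __init__(self, weight: torch.Tensor, rank: int):
        super().__init__()
        p, d = weight.shape
        # low-rank gradient carriers; how to choose them is discussed in the text
        self.L = nn.Parameter(torch.randn(p, rank) / rank ** 0.5)
        self.R = nn.Parameter(torch.randn(rank, d) / rank ** 0.5)
        # residual weight keeps the forward signal identical to the original layer;
        # detach() plays the role of .stop_gradient()
        self.register_buffer("W_residual", (weight - self.L @ self.R).detach())

    def forward(self, x):
        effective_weight = self.L @ self.R + self.W_residual
        return x @ effective_weight.T
\end{verbatim}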
Hence, such reparametrization does not change the forward signal and the backward signal, but only changes the gradient computation. Now we obtain the gradients on ${\bm{L}}$ and ${\bm{R}}$. We then unveil the connection between the gradient on ${\bm{W}}$ and the gradients on ${\bm{L}}$ and ${\bm{R}}$.
\begin{restatable}{theorem}{gradlr}\label{thm:grad_lr}
For a layer with weight matrix ${\bm{W}}$, suppose that $\partial{\bm{W}}$ is the gradient computed by back-propagation with a mini-batch data ${\mathbb{D}}$. Given two matrices ${\bm{L}}, {\bm{R}}$, we reparametrize ${\bm{W}}$ as in Eq~(\ref{eq:repara}) and compute the gradients $\partial{\bm{L}}$ and $\partial{\bm{R}}$ by running the forward and backward process with the same mini-batch ${\mathbb{D}}$, then
\begin{flalign}
\partial {\bm{L}} = (\partial{\bm{W}}){\bm{R}}^{T},\;\; \partial {\bm{R}} = {\bm{L}}^{T}(\partial{\bm{W}}).
\end{flalign}
\end{restatable}
Based on the above understanding, we can construct an update for ${\bm{W}}$ by using $\partial{\bm{L}}$ and $\partial{\bm{R}}$.
\begin{restatable}{corollary}{corogradlr}\label{corollary:grad_lr}
If the columns of ${\bm{L}}$ and the rows of ${\bm{R}}$ are orthonormal, respectively, and we use
\begin{flalign}
\label{eq:grad_lrk}
(\partial {\bm{L}}) {\bm{R}} + {\bm{L}}(\partial{\bm{R}}) - {\bm{L}}\mL^{T}(\partial{\bm{L}}){\bm{R}},
\end{flalign}
as the update for ${\bm{W}}$, then the update is equivalent to projecting $\partial{\bm{W}}$ into the subspace of matrices whose row/column spaces are spanned by ${\bm{L}}$ and ${\bm{R}}$.
\end{restatable}
\begin{proof}
The proofs of Theorem~\ref{thm:grad_lr} and Corollary~\ref{corollary:grad_lr} are relegated to Appendix~\ref{apd:subsec:proof_sec2}.
\end{proof}
We note that if ${\bm{L}}$ and ${\bm{R}}$ consist of orthonormal bases, Corollary~\ref{corollary:grad_lr} states that we can obtain the projection of $\partial{\bm{W}}$ without explicitly computing and storing $\partial{\bm{W}}$! The size of the gradient on ${\bm{L}}$ or ${\bm{R}}$ is much smaller than the size of $\partial{\bm{W}}$ if the gradient carriers are chosen to be low-rank. Therefore, this reparametrization provides a convenient way to compute and store projected gradients of a large matrix. This is extremely beneficial for scenarios where individual gradients $\{\partial_i {\bm{W}}\}_{i=1}^{m}$ are required, e.g., when approximating the variance of gradients or controlling the gradient sensitivity.
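As a quick numerical sanity check (our own illustrative code, not part of the proposed method), the update of Corollary~\ref{corollary:grad_lr} can be compared against the explicit projection of $\partial{\bm{W}}$:
\begin{verbatim}
import torch

p, d, r = 8, 6, 2
dW = torch.randn(p, d)                         # stands in for the true gradient
L, _ = torch.linalg.qr(torch.randn(p, r))      # orthonormal columns
R = torch.linalg.qr(torch.randn(d, r))[0].t()  # orthonormal rows

dL = dW @ R.t()                                # Theorem: dL = (dW) R^T
dR = L.t() @ dW                                # Theorem: dR = L^T (dW)
update = dL @ R + L @ dR - L @ L.t() @ dL @ R  # update of the corollary
projection = (L @ L.t() @ dW + dW @ R.t() @ R
              - L @ L.t() @ dW @ R.t() @ R)    # explicit projection of dW
print(torch.allclose(update, projection, atol=1e-6))   # True
\end{verbatim}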
It is natural to ask how to choose ${\bm{L}}$ and ${\bm{R}}$ so that the update in Corollary~\ref{corollary:grad_lr} contains the most information of $\partial{\bm{W}}$. Ideally, we can first compute the aggregated gradient $\partial{\bm{W}}$ and run \emph{singular value decomposition} (SVD) $\partial{\bm{W}}={\bm{U}}{\bm{\Sigma}}{\bm{V}}^{T}$. Then we can choose the top few columns of ${\bm{U}}$ and ${\bm{V}}$ to serve as the gradient carriers. In this case, the update in Corollary~\ref{corollary:grad_lr} is equivalent to approximating $\partial{\bm{W}}$ with its top-$r$ principal components.
However, in the context of differential privacy, we cannot directly decompose $\partial {\bm{W}}$ because it is private. In the sequel, we give a practical reparametrization scheme for differentially private learning, where we use the historical update to find ${\bm{L}}$ and ${\bm{R}}$ and argue its optimality under certain conditions.
One may wonder why we do not simply replace ${\bm{W}}$ with ${\bm{L}}$ and ${\bm{R}}$ instead of reparametrizing. With the reparametrization, the forward and backward processes remain the same as before and only the gradient computation of ${\bm{W}}$ changes. In contrast, replacing the weight ${\bm{W}}$ with ${\bm{L}}$ and ${\bm{R}}$ would not only reduce the expressive power but also hurt optimization, because the width would vary dramatically across layers and the forward/backward signals could not propagate well under common initialization strategies \cite{glorot2010understanding, he2016deep}.
\subsection{Reparametrization for Convolutional Layers}
\label{sec:lrk_conv}
In the above, we have described how to reparametrize a weight matrix, which covers the usual fully-connected layer and the attention layer in language models. In this subsection, we show the reparametrization of convolutional layers. Let ${\bm{x}}\in\mathbb{R}^{d\times w' \times h'}$ be the input feature maps of one sample and ${\bm{h}}\in\mathbb{R}^{p\times w \times h}$ be the output feature maps. We describe how to compute the elements at one spatial position ${\bm{h}}_{:,i,j}\in\mathbb{R}^{p}$ where $i\in [0,w]$ and $j\in [0,h]$.
Let ${\bm{W}}\in \mathbb{R}^{p\times d\times k\times k}$ be the convolution kernels and ${\bm{x}}^{(i,j)}\in\mathbb{R}^{d\times k\times k}$ be the features that we need to compute ${\bm{h}}_{:,i,j}$. The output feature ${\bm{h}}_{:,i,j}$ can be computed as
${\bm{h}}_{:,i,j}=\bar{\bm{W}} {\bm{x}}^{(i,j)}$,
where $\bar{\bm{W}}\in\mathbb{R}^{p\times dk^{2}}$ is obtained by flattening the channel and kernel dimensions. Hence, we can use the same way as in Eq~(\ref{eq:repara}) to reparametrize $\bar{\bm{W}}$:
\begin{flalign}
{\bm{h}}_{:,i,j} = {\bm{L}}{\bm{R}}{\bm{x}}^{(i,j)} + (\bar{\bm{W}}-{\bm{L}}{\bm{R}}){\bm{x}}^{(i,j)}.
\end{flalign}
Specifically, the operations of ${\bm{R}}$ and ${\bm{L}}$ are implemented by two consecutive convolutional layers with kernel sizes $r\times d\times k\times k$ and $p\times r\times 1\times 1$, respectively, where $r$ is the reparametrization rank. The residual weight is implemented by a convolutional layer of the original kernel size.
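A minimal PyTorch-style sketch of this convolutional reparametrization (our own illustration with assumed shapes; a $k\times k$ convolution plays the role of ${\bm{R}}$ and a $1\times 1$ convolution the role of ${\bm{L}}$) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

d, p, k, r = 16, 32, 3, 4                        # in/out channels, kernel, rank
W_bar = torch.randn(p, d, k, k)                  # original (frozen) kernel
L = torch.randn(p, r, requires_grad=True)        # carrier L
R = torch.randn(r, d, k, k, requires_grad=True)  # carrier R as r x d x k x k kernel

def repara_conv(x):
    LR = (L @ R.reshape(r, -1)).reshape(p, d, k, k)
    residual = (W_bar - LR).detach()             # residual kernel, no gradient
    h = F.conv2d(x, R, padding=k // 2)           # r x d x k x k convolution
    h = F.conv2d(h, L.reshape(p, r, 1, 1))       # p x r x 1 x 1 convolution
    return h + F.conv2d(x, residual, padding=k // 2)  # forward output unchanged

out = repara_conv(torch.randn(2, d, 8, 8))
out.sum().backward()                             # gradients only on L and R
\end{verbatim}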
\section{Private Deep Learning with Reparametrized Gradient Perturbation}
\label{sec:dp_learning_lrk}
The above reparametrization strategy can significantly reduce the gradient dimension, which could help us circumvent the difficulties of applying differential privacy on large machine learning models. In this section, we propose a procedure ``reparametrized gradient perturbation (RGP)'' to train large neural network models with differential privacy. Specifically, Section \ref{subsec:dp_learning_lrk_algo} introduces the whole procedure of RGP, Section \ref{subsec:privacy_rgp} gives the privacy guarantee of RGP, and Section \ref{subsec:complexity} presents the complexity analysis.
\begin{algorithm}[tb]
\caption{Reparametrized Gradient Perturbation (RGP)}
\label{alg:dp_lrk_repara}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} NN with weight matrices $\{{\bm{W}}^{(l)}\}_{l=1}^{H}$, steps $T$, probability $q$, variance $\sigma^2$, clipping threshold $C$, warm-up steps $T_{\text{warm-up}}$, Algorithm \ref{alg:decompose_pi} input $\{r, K\}$.
\STATE Randomly initialize the weights and obtain $\{{\bm{W}}^{(l)}_{0}\}_{l=1}^H$;
\FOR{$t=1$ {\bfseries to} $T$}
\medskip
\STATE Sample a minibatch $\{{\bm{x}}_{i}\}_{i\in S_t}$ with probability $q$;
\medskip
\STATE For all $l\in [H]$, compute historical updates
$$\Delta_t^{(l)} \leftarrow {\bm{W}}^{(l)}_t - {\bm{W}}^{(l)}_0 \cdot 1_{\{t>T_{\text{warm-up}}\}};$$
and run Alg.~\ref{alg:decompose_pi} with $\{\Delta_t^{(l)}, r, K\}$ to get ${\bm{L}}_t^{(l)},{\bm{R}}_t^{(l)}$;
\medskip
\STATE \textsl{//Forward/backward process with reparametrization.}
\STATE Run reparametrized forward process with Eq~(\ref{eq:repara});
\STATE Run backward process and compute individual gradients $\{\partial_i{\bm{L}}_t^{(l)},\partial_i{\bm{R}}_t^{(l)}\}_{l\in[H], i\in S_t}$;
\medskip
\STATE \textsl{//Bound gradient sensitivity and add noise.}
\STATE Clip individual gradients with $L_{2}$ norm threshold $C$;
\FOR{$l=1$ {\bfseries to} $H$}
\STATE Sum individual gradients and get $\{\partial{\bm{L}}_t^{(l)},\partial{\bm{R}}_t^{(l)}\}$;
\STATE Perturb with Gaussian noise ${\bm{z}}_{L,t}^{(l)},{\bm{z}}_{R,t}^{(l)}$ whose elements are independently drawn from $\mathcal{N}(0,\sigma^{2}C^{2})$:
$$\tilde{\partial}{\bm{L}}_t^{(l)} \leftarrow \partial{\bm{L}}_t^{(l)} + {\bm{z}}_{L,t}^{{(l)}}, \quad \tilde{\partial}{\bm{R}}_t^{(l)} \leftarrow \partial{\bm{R}}_t^{(l)}+{\bm{z}}_{R,t}^{(l)};$$
\STATE Use $\tilde{\partial}{\bm{L}}_t^{(l)}$, $\tilde{\partial}{\bm{R}}_t^{(l)}$, and Eq~(\ref{eq:grad_lrk}) to construct $\tilde{\partial} {\bm{W}}_t^{(l)}$;
\STATE Use off-the-shelf optimizer to get ${\bm{W}}_{t+1}^{(l)}$;
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Reparametrized Gradient Perturbation Algorithm}
\label{subsec:dp_learning_lrk_algo}
The pseudocode of RGP is presented in Algorithm~\ref{alg:dp_lrk_repara}. RGP proceeds layer by layer, and we omit the layer index for simplicity in the following discussion. At each update, for a layer with weight matrix ${\bm{W}}$, RGP consists of four steps: 1) generate the gradient-carrier matrices ${\bm{L}}$ and ${\bm{R}}$, 2) run the reparametrized forward/backward process and obtain the individual gradients $\{\partial_i {\bm{L}}\}_{i=1}^{m}$ and $\{\partial_i {\bm{R}}\}_{i=1}^{m}$, 3) clip and perturb the gradients, and 4) reconstruct an approximate gradient on the original weight matrix.
In the RGP procedure, \textbf{step 1)}, which is also the core challenge, is to choose ``good'' gradient-carrier matrices so that the reconstructed gradient approximates the original gradient as well as possible. First, this requires that, for a given rank $r$, the generated gradient-carrier matrices align well with the principal components of the original gradient. Moreover, reconstructing the gradient in step 4) requires the gradient carriers to have orthonormal columns/rows.
For the first requirement, we use historical updates to find the gradient carriers. The historical update is not sensitive because of the post-processing property of differential privacy. In Section~\ref{subsec:historical_grad}, we give both empirical and theoretical arguments to demonstrate that the principal subspace of the current gradient aligns with that of the historical update. In our implementation, we use a warm-up phase in which the decomposition is directly done on the weight. We approximate the principal components via the power method (Algorithm~\ref{alg:decompose_pi}) instead of the time-consuming full SVD. For the second requirement, we apply the Gram-Schmidt process to orthonormalize ${\bm{L}}$ and ${\bm{R}}$.
\begin{algorithm}[tb]
\caption{Decomposition via Power Method.}
\label{alg:decompose_pi}
\begin{algorithmic}
\STATE {\bfseries Input:} Historical update $\Delta$, reparametrization rank $r$, number of iterations $K$.
\STATE {\bfseries Output:} Gradient carriers ${\bm{L}}\in\mathbb{R}^{p\times r}$, ${\bm{R}}\in\mathbb{R}^{r\times d}$.
\medskip
\STATE Initialize ${\bm{R}}$ from standard Gaussian distribution.
\FOR{$k=1$ {\bfseries to} $K$}
\STATE ${\bm{L}} \leftarrow \Delta {\bm{R}}^{T}$
\STATE Orthonormalize the columns of ${\bm{L}}$.
\STATE ${\bm{R}} \leftarrow {\bm{L}}^{T}\Delta$
\ENDFOR
\STATE Orthonormalize the rows of ${\bm{R}}$.
\STATE Return ${\bm{L}}$, ${\bm{R}}$
\end{algorithmic}
\end{algorithm}
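For illustration, a small PyTorch-style sketch of Algorithm~\ref{alg:decompose_pi} is given below (our own code; a QR decomposition is used in place of an explicit Gram-Schmidt orthonormalization):
\begin{verbatim}
import torch

def power_method_decompose(delta, r, K=1):
    # Approximate rank-r carriers L (p x r) and R (r x d) of the
    # historical update `delta` via K power iterations.
    p, d = delta.shape
    R = torch.randn(r, d)
    for _ in range(K):
        L, _ = torch.linalg.qr(delta @ R.t())   # L <- Delta R^T, orthonormal columns
        R = L.t() @ delta                       # R <- L^T Delta
    Q, _ = torch.linalg.qr(R.t())               # orthonormalize the rows of R
    return L, Q.t()

delta = torch.randn(64, 4) @ torch.randn(4, 32)     # synthetic low-rank update
L, R = power_method_decompose(delta, r=4, K=1)
approx = L @ (L.t() @ delta @ R.t()) @ R            # projection onto the carriers
print(torch.norm(delta - approx) / torch.norm(delta))  # close to zero here
\end{verbatim}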
\textbf{Step 2)} of RGP is the reparametrization and a round of forward/backward propagations, as presented in Section \ref{sec:lrk}.
\textbf{Step 3)} establishes the differential privacy guarantee. The individual gradients $\{\partial_i {\bm{L}}, \partial_i {\bm{R}}\}_{i=1}^{m}$ are first clipped by a pre-defined threshold so that the sensitivity is bounded. Then, Gaussian noise is added to the aggregated gradient to establish a differential privacy bound. The energy of the added noise is proportional to the dimension, i.e., the rank $r$ of the carrier matrices. Hence, keeping the noise energy small encourages us to use a smaller rank $r$, whereas a smaller rank increases the approximation error in \textbf{step 1)}. In practice, we trade off these two factors to choose a proper $r$.
In \textbf{step 4)}, we use the noisy aggregated gradients of gradient-carrier matrices to reconstruct the gradients of original weights, as depicted in Corollary~\ref{corollary:grad_lr}. The reconstructed gradients can then be used by any off-the-shelf optimizer.
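Steps 3) and 4) admit a compact sketch (our own illustrative code; \texttt{grads\_L} and \texttt{grads\_R} stand for the per-example carrier gradients of a single layer, and the per-example $L_2$ clipping follows the usual DP-SGD recipe):
\begin{verbatim}
import torch

def clip_perturb_reconstruct(grads_L, grads_R, L, R, C=1.0, sigma=1.0):
    # grads_L: (m, p, r), grads_R: (m, r, d) individual carrier gradients.
    m = grads_L.shape[0]
    flat = torch.cat([grads_L.reshape(m, -1), grads_R.reshape(m, -1)], dim=1)
    scale = (C / flat.norm(dim=1).clamp(min=C)).view(m, 1, 1)  # per-example clipping
    sum_L = (grads_L * scale).sum(dim=0)
    sum_R = (grads_R * scale).sum(dim=0)
    noisy_L = sum_L + sigma * C * torch.randn_like(sum_L)      # Gaussian perturbation
    noisy_R = sum_R + sigma * C * torch.randn_like(sum_R)
    # reconstruct an approximate gradient on W via the corollary's update
    return noisy_L @ R + L @ noisy_R - L @ L.t() @ noisy_L @ R

m, p, d, r = 4, 8, 6, 2
L = torch.linalg.qr(torch.randn(p, r))[0]
R = torch.linalg.qr(torch.randn(d, r))[0].t()
g_W = clip_perturb_reconstruct(torch.randn(m, p, r), torch.randn(m, r, d), L, R)
\end{verbatim}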
\subsection{Privacy Analysis of RGP}
\label{subsec:privacy_rgp}
The privacy bound of Algorithm~\ref{alg:dp_lrk_repara} is given by Proposition~\ref{prop:privacy}. The derivation of Proposition~\ref{prop:privacy} is based on the \emph{moments accountant} proposed in \citet{abadi2016deep}. The moments accountant has a tighter composition bound than the strong composition theorem in \citet{algofound}. It first tracks the privacy budget spent at each update, then composes the spent budgets of all updates and casts the final privacy cost into the classic $(\epsilon,\delta)$-differential privacy.
\begin{restatable}[\citet{abadi2016deep}]{proposition}{privacy}\label{prop:privacy}
There exist constants $c_1$ and $c_2$ so that given running steps $T$, for any $\epsilon<c_{1}q^{2}T$, Algorithm~\ref{alg:dp_lrk_repara} is $\left(\epsilon,\delta\right)$-differentially private for any $\delta>0$ if we choose \[\sigma\geq c_2\frac{q\sqrt{T\log\left(1/\delta\right)}}{\epsilon}.\]
\end{restatable}
\begin{proof}
The proof outline is relegated to Appendix~\ref{apd:subsec:proof_sec3}.
\end{proof}
The value of $\sigma$ in Proposition~\ref{prop:privacy} is based on an asymptotic bound on the moments of the privacy loss random variable. In practice, one can use the numerical tools \citep{wang2019subsampled,mironov2019renyi} to compute a tighter bound. So far we have depicted the overall picture of RGP. We next analyze the computational and memory costs of RGP and compare them with that of DP-SGD.
\subsection{Complexity Analysis of RGP}
\label{subsec:complexity}
For simplicity of notation, we only give the costs of one fully connected layer at one update (including forward and backward) and assume that the weight matrix is square. The shape of the weight matrix, the size of the minibatch, the number of power iterations, and the rank of the reparametrization are denoted by $(d\times d)$, $m$, $K$, and $r$, respectively.
The computational overhead of RGP consists of three parts. The first part is induced by the matrix multiplications of the power iteration, whose complexity is $\mathcal{O}(Krd^{2})$. The second part is induced by the Gram–Schmidt process, whose complexity is $\mathcal{O}(Kr^{2}d)$. The third part is the computational cost induced by the gradient carriers during the forward/backward process, which is on the order of $\mathcal{O}(mrd)$.
RGP uses much less memory than DP-SGD in practice. Although RGP needs some extra memory to store the activations produced by the gradient carriers, it has a significant advantage over DP-SGD in the memory cost of storing individual gradients, which is one of the main challenges of learning with differential privacy. For RGP, the memory cost of individual gradients scales only linearly with the model width $d$, in contrast with $d^2$ for DP-SGD. We summarize the computational cost of one update and the memory cost of storing individual gradients in Table~\ref{tbl:complexity}.
\begin{table}
\caption{Computation and memory costs of RGP (Algorithm~\ref{alg:dp_lrk_repara}) and DP-SGD \citep{abadi2016deep}, where $m$ is the size of mini-batch, $d$ is the model width, $r$ is the reparametrization rank, and $K$ is the number of power iterations.}
\label{tbl:complexity}
\centering
\small
\renewcommand{\arraystretch}{1.85}
\begin{tabular}{ P{2.45cm}|P{1.15cm}|P{3.4cm} }
\hline \hline
\backslashbox{Cost}{Method} & DP-SGD & RGP \\
\hline
Computational cost & $\mathcal{O}(md^{2})$ & $\mathcal{O}(md^{2}+Krd^2+Kr^{2}d)$ \\\hline
Memory cost & $\mathcal{O}(md^{2})$ & $\mathcal{O}(mrd)$ \\
\hline
\hline
\end{tabular}
\end{table}
The low-rank nature of the gradient permits us to choose a small $r$ without destroying utility (see Section~\ref{subsec:grad_is_lrk}). In practice, we typically choose the rank $r$ to be smaller than $10$. For the number of power iterations in Algorithm~\ref{alg:decompose_pi}, we find that setting $K=1$ is sufficient to get good performance. Hence, in practice, we always choose small $r$ and $K$ for efficiency without hurting the performance.
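As a rough, purely illustrative back-of-the-envelope comparison (the numbers below are our own hypothetical choices, not measurements), the individual-gradient memory of the two methods for a single square layer can be contrasted as follows:
\begin{verbatim}
m, d, r = 32, 768, 8                  # hypothetical batch size, width, rank
dp_sgd_entries = m * d * d            # O(m d^2) individual-gradient entries
rgp_entries = m * (d * r + r * d)     # O(m r d) entries for the two carriers
print(dp_sgd_entries / rgp_entries)   # = d / (2 r) = 48x fewer entries
\end{verbatim}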
\section{Preliminary on Differential Privacy} \label{app:sec:preliminary}
Differential privacy (DP) \cite{dwork2006calibrating,dwork2014algorithmic} is widely recognized as a gold standard of privacy protection due to its mathematical rigor. It controls the maximum influence that any individual sample can produce. The definition of $(\epsilon,\delta)$-DP is given in Definition~\ref{def:dp}.
\begin{definition}[$(\epsilon,\delta)$-DP]
\label{def:dp}
A randomized mechanism $\mathcal{M}$ guarantees $(\epsilon,\delta)$-differential privacy if for any two neighboring input datasets ${\mathbb{D}}\sim {\mathbb{D}}^{'}$ and for any subset of outputs ${\mathbb{S}}$ it holds that $\text{Pr}[\mathcal{M}({\mathbb{D}})\in {\mathbb{S}}]\leq e^{\epsilon}\text{Pr}[\mathcal{M}({\mathbb{D}}^{'})\in {\mathbb{S}}]+\delta$.
\end{definition}
Two datasets are said to be neighboring if they differ in only a single sample. When applied to learning problems, DP requires that the models learned on neighboring datasets have approximately indistinguishable distributions.
\section{Missing Proofs} \label{app:sec:proof}
\subsection{Missing Proofs in Section \ref{sec:lrk}}
\label{apd:subsec:proof_sec2}
\gradlr*
\begin{proof}
The proof is based on the chain rule of back-propagation. Since the reparametrization does not change the forward and backward signals, we assume the layer inputs are ${\mathbb{D}}=\{{\bm{x}}_{i}\}_{i=1}^{m}$, the corresponding outputs are $\{{\bm{h}}_i\}_{i=1}^{m}$ with ${\bm{h}}_i = {\bm{W}} {\bm{x}}_i$ and the backward signals on the layer output are $\{\partial {\bm{h}}_i\}_{i=1}^{m}$. By back-propagation, we have
\begin{flalign*}
&\partial {\bm{W}} = \sum_{{\bm{x}}_{i}\in {\mathbb{D}}} (\partial {\bm{h}}_i) {\bm{x}}_i^T, \\
&\partial {\bm{L}} =\sum_{{\bm{x}}_{i}\in {\mathbb{D}}}\partial {\bm{h}}_i ({\bm{R}} {\bm{x}}_i)^T,\;\; \partial {\bm{R}} =\sum_{{\bm{x}}_{i}\in {\mathbb{D}}} ({\bm{L}}^{T}\partial {\bm{h}}_i) {\bm{x}}_i^T.
\end{flalign*}
The proof is completed by the associativity of matrix multiplication.
\end{proof}
\corogradlr*
\begin{proof}
If the columns of ${\bm{L}}$ and the rows of ${\bm{R}}$ are orthonormal, the projection of $\partial {\bm{W}}$ onto ${\bm{L}}$ and ${\bm{R}}$ is defined as,
\begin{flalign}
{\bm{L}}\mL^T (\partial {\bm{W}}) + (\partial {\bm{W}}) {\bm{R}}^T{\bm{R}} - {\bm{L}}\mL^T (\partial {\bm{W}}){\bm{R}}^T{\bm{R}}.
\end{flalign}
Substituting $\partial {\bm{L}} = (\partial {\bm{W}}) {\bm{R}}^T$ and $\partial {\bm{R}} = {\bm{L}}^T (\partial {\bm{W}})$ from Theorem \ref{thm:grad_lr} into the above formula completes the proof.
\end{proof}
\subsection{Missing Proofs in Section \ref{sec:dp_learning_lrk}}
\label{apd:subsec:proof_sec3}
\privacy*
\begin{proof}
Although RGP releases the projected gradient instead of the whole gradient as in \citet{abadi2016deep}, the moments accountant is still applicable because it applies to vector-valued function outputs.
The moments accountant tracks a bound on the moments of the privacy loss random variable, which is built on the ratio of the probability density functions of the output distributions on two neighboring datasets. \citet{abadi2016deep} show that the log moments of the privacy loss random variable compose linearly. Therefore, one can compute the overall privacy cost by adding up the log moments over all updates. When the training is done, the moments accountant casts the accumulated log moments into $(\epsilon,\delta)$-DP via a tail bound. A detailed proof can be found in Appendix B of \citet{abadi2016deep}.
\end{proof}
\subsection{Missing Proofs in Section \ref{sec:grad_property}}
\label{apd:subsec:proof_sec4}
\gradalign*
\begin{proof}
We can compute the gradient at step $t$
\begin{flalign*}
\partial {\bm{W}}_t &= \frac{1}{n}\sum_{i=1}^n ({\bm{W}}_t {\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T.
\end{flalign*}
Given the gradient descent update \eqref{eq:gd}, we can compute the gradient at ${\bm{W}}_{t+1}$ as follows
\begin{flalign*}
\partial {\bm{W}}_{t+1}
&= \frac{1}{n}\sum_{i=1}^n (({\bm{W}}_t - \eta \cdot\partial{\bm{W}}_t){\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T\\
&= \frac{1}{n}\sum_{i=1}^n ({\bm{W}}_t{\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T - \eta\cdot \partial{\bm{W}}_t \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T \\
& = \partial {\bm{W}}_t \left ({\bm{I}} - \eta \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T\right ).
\end{flalign*}
Hence we have $\partial {\bm{W}}_t = \partial {\bm{W}}_0 \left ({\bm{I}} - \eta \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T\right )^t$. The gradients $\partial {\bm{W}}_t$ live in the same subspace for all $t\ge 1$, as they share the same row/column spaces.
\end{proof}
\section{Additional Experiments}\label{app:sec:add-exp}
\begin{figure*} [t]
\centering
\includegraphics[width=0.9\linewidth]{imgs/show_influ_rank.pdf}
\caption{Prediction accuracy of BERT on four downstream tasks (in \%) with different choices of the reparametrization rank. We plot the average score of the two test datasets for MNLI. }
\label{fig:fig_bert_rank}
\end{figure*}
We present some ablation studies in this section to verify the effect of the residual weight and the reparametrization rank. In Section~\ref{subsec:apd_rank}, we try RGP with different rank choices. In Section~\ref{subsec:apd_residual}, we give a variant of RGP that simply discards the residual weight.
\subsection{On the Influence of Different Rank Choices} \label{subsec:apd_rank}
We present the results with different choices of the reparametrization rank in Figure~\ref{fig:fig_bert_rank}. We consider four algorithms. The first one is fine-tuning the full model, which serves as the baseline. The second one is RGP (N.P.), which trains the model with reparametrization but without gradient clipping or adding noise. The third one is RGP (Algorithm~\ref{alg:dp_lrk_repara}) and the last one is RGP-random, which uses random orthogonal vectors as gradient-carrier matrices. The privacy parameter $\epsilon$ is $8$ and the other settings are the same as those in Section~\ref{sec:exp}. When the models are trained without noise, increasing the reparametrization rank makes the performance of RGP (N.P.) approach that of the baseline. When the models are trained with a privacy guarantee, increasing the rank sometimes decreases the performance because a larger rank induces more trainable parameters and hence a higher noise dimension.
\subsection{On the Importance of Residual Weight}\label{subsec:apd_residual}
Recall that our reparametrization scheme reparametrizes the weight matrix as follows:
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}} + \tilde{{\bm{W}}}.{stop\_gradient()}. \label{eq:apd_repara}
\end{flalign}
We have shown that the residual weight $\tilde{{\bm{W}}}$ keeps the forward/backward signals unchanged and makes the gradients of ${\bm{L}}$ and ${\bm{R}}$ naturally connected with the original gradient. To empirically examine the effect of $\tilde{{\bm{W}}}$, we test the following scheme:
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}}. \label{eq:apd_repara_nores}
\end{flalign}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{imgs/show_influ_residual.pdf}
\caption{Prediction accuracy of BERT on two downstream tasks. All methods are trained without privacy guarantee. }
\label{fig:fig_bert_residual}
\end{figure}
We still use the historical update to generate ${\bm{L}}$ and ${\bm{R}}$. The other settings are the same as those in Section~\ref{sec:exp}. The results on two downstream tasks of the BERT model are presented in Figure~\ref{fig:fig_bert_residual}. Without the residual weight, the model learns almost nothing from the reconstructed update and the final accuracy is close to the accuracy at initialization.
\end{appendix}
\section{Introduction}
The evolution of mobile and wireless communication networks into the fifth generation (5G) will play a significant role in improving the global economy. With the internet of things (IoT) dictating the way in which people communicate through information sharing and knowledge dissemination, internet coverage needs to be improved. The capacity to provide radio coverage over a wide geographic area is a \mbox{pre-requisite} towards meeting the \mbox{ultra-low} latency requirements demanded by mobile subscribers~\cite{ericssonreport}\cite{energymanagershow}. Through the installation of \acp{BS} and the development of mobile and wireless communications, continuous communications can be achieved. However, in rural/remote areas electricity might be unreliable and it is very costly to extend grid connections to such areas. Therefore, the provisioning of communication services in remote areas entails the use of renewable energy. Using renewable energy, coupled with sustainable energy storage solutions, is a promising way of resolving the remote-area energy predicament. \\
\indent Despite the use of green energy as a potential solution, many rural and remote areas in developed and developing countries around the world still face the challenge of unreliable \mbox{high-quality} Internet connectivity~\cite{remote6g}. This is because \ac{MN} operators are still skeptical about making information \& communications technology (ICT) infrastructure investments in remote areas, hence the digital divide. One of the essential reasons is the low expected revenue, measured as \ac{ARPU}, which reduces companies' willingness to invest in these areas. However, with battery and solar module costs showing a decreasing trend, MN operators might be motivated to make investments in remote and rural areas and deploy connectivity networks. Moreover, the advent of open, programmable, and virtualized $5$G networks will enable MN operators to overcome the limitations of current \acp{MN}~\cite{energymanagershow}\cite{open5G} and make the deployment of open and programmable \acp{MN} a practical possibility. \\
\indent To extend network coverage to remote/rural areas, the use of terrestrial or \mbox{non-terrestrial} networks is proposed in~\cite{antenna_for}. In parallel, Sparse Terrestrial Networks (STNs) using high towers and large antenna arrays are being developed to deliver very long transmission ranges. Here, the systems are equipped with the latest emerging antenna technologies and designs, such as reconfigurable phased/inflatable/fractal antennas realized with metasurface materials. Towards this, the work of~\cite{antenna_for} studies the feasibility of providing connectivity to sparse areas utilizing \mbox{massive-MIMO}, where the existing infrastructure of TV towers was used. In that work, it is observed that higher frequencies provide larger area coverage, provided that the antenna array area is the same. Another strategy for achieving good coverage as well as high capacity in remote/rural areas is to utilize two frequency bands, one low band and one high band, in an aggregated configuration. Following this strategy, the authors of~\cite{5g_rural_nr} combine the New Radio (NR) $3.5$ GHz and LTE $800$ MHz bands on a GSM grid. In addition, along the lines of long-range systems, the NR is expected to support high data rates with low average network energy consumption through its lean design and massive MIMO utilization. Also, the authors of~\cite{deep_rural} extend rural coverage with STNs. Here, large cells are created by using \mbox{long-range} links between \acp{BS} and \ac{UE}, where the long range is achieved by high towers combined with large antenna arrays and efficient antenna techniques creating narrow beams with high gain, with a \mbox{line-of-sight} (LoS) or \mbox{near-LoS} connection to the UE.\\
\indent In order to end this digital divide, MNs have to \mbox{re-think} the way in which they operate and make the necessary adjustments. One workable solution is to make use of softwarization technologies such as \ac{SDN}, \ac{NFV}, and \ac{MEC} as enablers for \textit{resource sharing} and \textit{edgefication}~\cite{open5G}\cite{online_pimrc}. Furthermore, the emergence of network slicing avails new market opportunities~\cite{interdigital} for \acp{MN} to explore. In network slicing, the BS site infrastructure (\textit{resource blocks, bandwidth, computing resources}) can be shared {\it fairly} by two or more mobile operators in \mbox{real-time}. This effectively maximizes the use of existing network resources while simultaneously minimizing the operational costs of remote sites. Also, an open and accessible shared infrastructure can enable more MN operators and Internet service providers to expand their footprint into \mbox{low-income} areas, increasing the availability of connectivity in these areas and contributing to bridging the digital divide. For continuous operation of rural/remote communication sites, the BS empowered with computing capabilities can be \mbox{co-located} with \ac{EH} systems for harvesting energy from the environment, storing it in \acp{EB} (storage devices), and then powering the site.\\
\indent Several forms of infrastructure sharing already exist~\cite{mobilesharing}, such as \mbox{roaming-based} sharing, where the MN operators share cell coverage for a prenegotiated time period. For example, using \mbox{roaming-based} sharing, a \ac{UE} can employ the roaming procedure in order to connect to a foreign network. In these \say{classical} forms of sharing, one MN operator generally still retains ownership of the mobile network.
Under shared infrastructure, new entrants no longer need to incur the \mbox{often-significant} upfront cost of building their own infrastructure and can save time and resources that would otherwise be dedicated to administrative authorization and licensing. However, potential risks to
competition, governance, and implementation need to be managed to achieve the greatest benefit from infrastructure sharing.
In this article, the sharing of the BS infrastructure and its \mbox{co-located} computing platform (\ac{MEC} server) is considered only for handling \mbox{delay-sensitive} workloads in remote/rural areas. Here, MN operators still retain control of delay-tolerant workloads, which are routed to their remote clouds. This brings the notion of \mbox{\it co-ownership} of the communication sites in remote/rural areas, within the \ac{MEC} paradigm, in which \textit{two} MN operators pool together their capital expenditure in order to share the deployed infrastructure, thus saving precious (already limited) economic resources for other types of expenses. Then, in order to effectively manage the BS sites deployed in remote/rural areas, procedures for dynamic network control (\textit{managing network resources when MN operators fairly share their network infrastructure}) and agile management are required. This will assist in efficiently delivering a \ac{QoS} in remote/rural areas comparable to that of urban areas.\\%, taking into account the future workloads and green energy to be harvested.
\indent The work done in this article is an extension of~\cite{online_pimrc}, where \ac{BS} sleep modes and \ac{VM} \mbox{soft-scaling} procedures were employed towards energy saving in remote sites. In \cite{online_pimrc}, energy savings were obtained through \mbox{short-term} traffic load and harvested energy predictions, along with energy management procedures. However, the considered energy cost model does not take the caching process, tuning of transmission drivers, and the use of \mbox{container-based} virtualization into account. In addition, the considered communication site belongs to \textit{one} MN operator, i.e., the site infrastructure was not shared between multiple operators. Therefore, the \mbox{computing-plus-communication} energy cost model is the main motivation for this article, where the BS site is shared among multiple operators in order to handle \mbox{delay-sensitive} workloads only.
One application of our model (strategy) corresponds to the current situation that has been caused by the new coronavirus (COVID-19) pandemic. The pandemic has reshaped our living preferences such that rural (remote) areas are now becoming more and more attractive. This can motivate MN operators to deploy networks in such areas and then share their communication infrastructure and the computing resources that are \mbox{co-located}. The contributions of this article are summarized as follows:
\begin{itemize}
\item [1)] A \ac{BS} empowered with computing capabilities and \mbox{co-located} with an \ac{EH} system is considered, whereby the MN operators share the BS site infrastructure (i.e., \textit{bandwidth, computing resources}) for handling \mbox{delay-sensitive} workloads within a remote/rural area.
\item [2)] In order to enable foresighted optimization, the \mbox{short-term} future communication site workload and harvested energy are forecast using an \ac{LSTM} neural network~\cite{lstmlearn}.
\item [3)] An online \mbox{controller-based} algorithm {\it called} \ac{DRC-RS} is developed for handling infrastructure sharing and managing communication sites located in remote/rural areas. The proposed algorithm is based on the \ac{LLC} approach and resource allocation procedures, with the objective of enabling infrastructure sharing (the BS and its \mbox{co-located} computing platform) and resource management within remote and rural communication sites.
\item [4)] \mbox{Real-world} harvested energy and traffic load traces are used to evaluate the performance of the proposed optimization strategy. The numerical results obtained through simulation show that the proposed optimization strategy is able to efficiently manage the remote/rural site and also allows the sharing of the network infrastructure.
\end{itemize}
\begin{table*} [h!]
\caption{Comparison with existing works.}
\label{tab_opt1}
\begin{threeparttable}
\center
\begin{tabular} {|l|l|l|l|l|}
\hline
{\bf Feature} & {\bf Edge computing} & {\bf Method Used} & {\bf Forecasting} & {\bf Objective}\\
\hline
RAN sharing~\cite{sharingRAN} & No & Linear programming & No & Max. QoS\\ \hline
Traffic load exploitation~\cite{gamebasedsharing} & No & Game theory & No & Min. spending cost\\ \hline
Contractual backup~\cite{strategicsharing} & No & Contract design under & No & Max. resource utilization\\
& & symmetric information & & and profits\\ \hline
Multiple-seller single-buyer~\cite{sanguanpuak} & No & Stochastic geometry & No & Cost minimization\\
& & & & Guarantee of QoS \\ \hline
Communication and & Yes & \ac{LSTM} & Yes & Min. energy consumption\\
Computation [\textbf{Proposed}] & & \ac{LLC} & & Guarantee of QoS\\
\hline
\end{tabular}
\begin{tablenotes}
\small
\item Yes: considered; No: not considered
\end{tablenotes}
\end{threeparttable}
\end{table*}
The remainder of this article is organized as follows: Section~\ref{sec:rel} discusses previous research works related to the one undertaken in this article. Section~\ref{sec:sys} describes the proposed system model, with a detailed explanation of the operation of each network element. The mathematical problem formulation is given in Section~\ref{sec:prob}, together with the details of the optimization problem and the proposed \ac{DRC-RS} online algorithm. In Section~\ref{sec:eval}, a performance evaluation of the proposed online algorithm is presented using simulation results and statistical discussions. The conclusions of this article are then given in Section~\ref{sec:concl}.
\section{Related Work}\label{sec:rel}
\noindent MN operators generally have complete ownership and control of their networks, which are characterized by an inflexible and monolithic infrastructure. Such a rigid status quo deprives networks of the required versatility, hence they cannot cope with dynamically changing requirements. As a result, in their current state, meeting the heterogeneity and variability of future MNs is an impossible task. As mobile and wireless networks evolve, MN operators are faced with the daunting task of keeping up with the accelerated \mbox{roll-out} of new technologies. Due to these fast-paced technological advancements, large and frequent investments are made in order to cope with the new services and network management phases. This proactive network operation and management consequently increases the network operating costs, which reduces the intended profits. Thus, in order to reduce the \mbox{per-MN} operator investment cost, the sharing of network infrastructure between mobile operators is an attractive solution. To this effect, the authors in~\cite{sharingRAN} proposed a \ac{RAN} sharing scheme where MN operators share a single radio infrastructure while maintaining separation and full control over the backhauling and their respective core networks. In that paper, a mixed integer linear programming (MILP) formulation is proposed for determining the sharing configurations that maximize the \ac{QoS}, and a cooperative game theory concept is used to determine stable configurations as envisioned by the MN operator. The regulatory enforcement towards offering the best service level for the users and the greedy approach considered in that paper reduce the effectiveness of infrastructure sharing, as both approaches do not promote fairness among \ac{MN} operators.
In addition, the work of~\cite{gamebasedsharing} employs an infrastructure sharing algorithm towards energy savings by exploiting the \mbox{under-utilization} of the network during \mbox{low-traffic} periods. In that work, a \mbox{game-theoretic} framework was proposed in order to enable the MN operators to individually estimate the \mbox{switching-off} probabilities that reduce their expected financial cost. Apart from the energy efficiency benefits, the proposed scheme allows the participating MN operators to minimize their spending costs independently of the strategies of the coexisting MN operators. Despite the presented benefits, it is worth noting that infrastructure sharing should be considered for both low- and high-traffic periods, which is the focus of this paper. However, due to the existence of competition between the different MNs, collaboration is a primary requisite for such infrastructure sharing. In order to enforce such a collaboration between competitors, the authors in~\cite{strategicsharing} proposed a strategic network infrastructure sharing framework for contractual backup reservation between a small/local network operator with limited resources and uncertain demands, and one resourceful operator with potentially redundant capacity. Here, one MN operator pays for network resources reserved for use by its subscribers in another MN operator's network, while in turn the payee guarantees the availability of the resources. Then, in~\cite{sanguanpuak}, the problem of infrastructure sharing among MN operators is presented as a \mbox{multiple-seller} \mbox{single-buyer} business. In that contribution, each \ac{BS} is utilized by subscribers from other operators; the owner of the BS is considered as a seller of the BS infrastructure, while the operators of the subscribers utilizing the BS are considered as buyers. In the presence of multiple seller MN operators, it is assumed that they compete with each other to sell their network infrastructure resources to potential buyers. \\
\indent The aforementioned works consider BS infrastructure sharing towards lowering operational costs, e.g., by switching BSs on/off, while maintaining network control. In addition, infrastructure sharing is treated as a business case instead of a cooperative effort towards boosting connectivity in remote/rural areas. If one MN operator is treated as a seller and the other as a buyer of its network resources, sharing becomes a business venture. For example, one MN operator might use the resource reservation technique, whereby it reserves resources for other small operators; again, the other party has to pay in order to use those facilities. However, it is worth mentioning that the works done in~\cite{sharingRAN}\cite{gamebasedsharing}\cite{strategicsharing}\cite{sanguanpuak} do not consider infrastructure sharing within the \ac{MEC} paradigm, and the consideration of green energy has been overlooked. Those works that are within the \ac{MEC} paradigm share their \textit{own} network resources among themselves in order to handle spatially uneven computation workloads in the network, with the objective of avoiding large computation latency at overloaded small BSs and providing a high quality of service (QoS) to end users. The details of how internal infrastructure sharing is conducted cannot be covered in this article; interested readers are referred to~\cite{chen2018computation}. Table~\ref{tab_opt1} summarizes the differences between the proposed infrastructure sharing strategy and existing works.
\section{System Model}\label{sec:sys}
\begin{figure}[h!]
\centering
\includegraphics[width = \columnwidth]{remotesite.eps}
\caption{The remote/rural BS site infrastructure consisting of the BS co-located with the MEC server, both powered by green energy harvested from solar radiation and a wind turbine.}
\label{fig:remotesite}
\end{figure}
In this paper, we consider a remote/rural site network scenario as illustrated in Fig.~\ref{fig:remotesite}. Each network apparatus (BS, MEC server) in the figure is mainly powered by renewable energy harvested from wind and solar radiation, and it is equipped with an \ac{EB} for energy storage. The stored energy is shared by the edge server and the BS system. The \ac{EM} is an entity responsible for selecting the appropriate energy source to replenish the \ac{EB}, and also for monitoring the energy level of the \ac{EB}. Then, the intelligent \mbox{electro-mechanical} switch (I-SW) aggregates the energy sources to replenish the \ac{EB}.
\noindent The proposed model in Fig.~\ref{fig:remotesite} is \mbox{cache-enabled} and TCP/IP offload capable (i.e., it enables {\it partial} offloading in the server's \ac{NIC}, such as checksum computation~\cite{sohan2010characterizing}). The virtualized MEC server, which is \mbox{co-located} with the \ac{BS}, is assumed to host $C$ containers (see C1, C2 in Fig.~\ref{fig:remotesite}). Also, it has an input and an output buffer for holding the workloads. It is assumed that some of the BS functions are virtualized, as pointed out in~\cite{BS_virtualization}, and the \ac{MEC} node comprises a virtualized access control router (ACR) which acts as an access gateway for admission control. The virtualized ACR is responsible for local and remote routing, and it is locally hosted as an application. Here, it is assumed that the remote/rural site infrastructure is shared between {\it two} MN operators through a \mbox{pre-existing} agreement, where a common microwave backhaul or a \mbox{multi-hop} wireless backhaul relaying is used for accessing remote clouds or the Internet. Moreover, a \mbox{discrete-time} model is considered, whereby the time is discretized as \mbox{$t = 1,2,\dots$} time slots of a fixed duration $\tau$.
\subsection{Input Traffic and Queue Model}
\noindent In the communication site, the BS is the connection anchor point and the computing platform processes the currently assigned \mbox{delay-sensitive} tasks by \mbox{self-managing} its own local virtualized storage/computing resources. Also shown in Fig.~\ref{fig:remotesite} are an input buffer of size $L_{\rm in}$, a reconfigurable computing platform and the related switched virtual LAN, an output queue of size $L_{\rm out}$, and a controller that \mbox{re-configures} the \mbox{computing-plus-communication} resources and also controls the input/output traffic flows. Since the workload demand exhibits a diurnal behavior in remote/rural areas, forecasting the mobile operators' workload can help towards network infrastructure sharing. Thus, in order to emulate the remote site traffic load $L(t)$ (from $|\nu(t)|$ users), real MN traffic load traces from~\cite{bigdata2015tim} are used. It is assumed that \textit{only} operators A and B share the remote/rural BS site, and their traffic load profiles are denoted by $L_{\rm A} (t)$ and $L_{\rm B}(t)$ ([bits]), respectively. It is also assumed that a fraction of $0.8$ of $L_{\rm A}(t)$ (or $L_{\rm B}(t)$) consists of \mbox{delay-sensitive} workloads $\gamma_{\rm A}(t)$ (or $\gamma_{\rm B}(t)$) and the remainder is delay-tolerant. The total admitted workload is denoted by $\gamma^*(t) = \gamma_{\rm A}(t) + \gamma_{\rm B}(t)$, with $\gamma^*(t) \leq L_{\rm in}$. The input/output (I/O) queues of the system are assumed to be \mbox{loss-free}, such that the time evolution of the backlog queues follows Lindley's equations. The normalized BS traffic load behavior of the two mobile operators is illustrated in Fig.~\ref{fig:trace_load}.
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{traffic_profiles.eps}
\caption{Normalized BS traffic load profiles of the two MN operators, A and B.}
\label{fig:trace_load}
\end{figure}
\subsection{Communication and Computing Energy Cost Model}
\noindent For the BS system deployed in the remote/rural area, the total energy consumption $\theta_{\rm SITE}(t)$ (measured in $\SI{} {\joule}$) at time slot $t$ consists of the BS communication energy, denoted by $\theta_{\rm COMM}(t)$, and the computing platform energy, related to computing, caching, and communication processes, denoted by $\theta_{\rm COMP}(t)$. Thus, the energy consumption model at time slot $t$ is formulated as follows, inspired by~\cite{steering}:
\begin{equation}
\theta_{\rm SITE}(t) = \theta_{\rm COMM}(t) + \theta_{\rm COMP}(t).
\label{eq:siteconsupt}
\end{equation}
\noindent The \ac{BS} energy consumption $\theta_{\rm COMM}(t)$ is the sum of the following terms:
\begin{equation}
\theta_{\rm COMM}(t) = \sigma(t)\theta_0 + \theta_{\rm load}(t) + \theta_{\rm bk} + \theta_{\rm data}(t)\gamma^*(t)\,,
\end{equation}
\noindent where $\sigma (t)\in \{0,1\}$ is the BS switching status indicator, with $1$ representing the active mode and $0$ the power saving mode. $\theta_0$ is a load-independent constant representing the operation energy, and \mbox{$\theta_{\rm load} (t) = L(t) (2^{\frac{r_0}{\zeta(t) W}}-1)N_0 (K)^\alpha \beta^{-1}$} is the load-dependent transmission power towards the served subscribers that guarantees low-latency services at a target rate $r_0$. The term $W$ is the channel bandwidth, $\zeta(t)$ is the fraction of the bandwidth used by the mobile users of operators A and B, while $\alpha$ and $\beta$ are the path loss exponent and the path loss constant, respectively. The term $K$ denotes the average distance between two BSs within the same region, and $N_0$ is the noise power. The parameter $\theta_{\rm bk}$ represents the constant microwave backhaul transmission energy cost, and $\theta_{\rm data}(t)$ (a fixed value in J/byte) is the \mbox{inter-communication} cost incurred by exchanging data between the BS and MEC interfaces.\\
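For illustration, $\theta_{\rm COMM}(t)$ can be evaluated with a short sketch (our own code; all parameter values are hypothetical placeholders, not values used in the evaluation):
\begin{verbatim}
def theta_comm(L_t, gamma_star, sigma=1, zeta=0.5, W=20e6, r0=1e6,
               N0=1e-13, K=1000.0, alpha=3.5, beta=1e-3,
               theta_0=100.0, theta_bk=10.0, theta_data=1e-7):
    # Per-slot BS energy term; parameter values are illustrative only.
    theta_load = L_t * (2 ** (r0 / (zeta * W)) - 1) * N0 * K ** alpha / beta
    return sigma * theta_0 + theta_load + theta_bk + theta_data * gamma_star

print(theta_comm(L_t=5e6, gamma_star=4e6))
\end{verbatim}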
\indent Next, we discuss the MEC server processes that make up $\theta_{\rm COMP}(t)$. With $\gamma^*(t)$ being the currently admitted workload to be processed, let $\gamma_c(t) \leq \gamma_{\rm max}$, $c = 1, \dots, C(t)$, denote the size of the task that the scheduler allocates to container $c$, bounded by the maximum amount $\gamma_{\rm max}$. The constraint $\sum_{c=1}^{C(t)} \gamma_c(t) = \gamma^*(t)$ guarantees that the overall workload is partitioned into $C(t)$ parallel tasks.
This load distribution is motivated by the shares feature~\cite{migrationpower} that is inherent in virtualization technologies. This enables the resource scheduler to efficiently distribute resources amongst contending containers, thus guaranteeing the completion of the computation process within the expected time.
Thus, the set of attributes which characterize each container is $\{\psi_c(t), \theta_{{\rm idle},c}(t), \theta_{{\rm max},c}(t), \Delta, f_c(t) \}$, where $\psi_c(t) = (f_c(t)/f_{\rm max})^2$ is the container utilization function, and $f_{\rm max}$ is the maximum available processing rate of a container. Here, $f_c(t) \in [f_0, f_{\rm max}]$ denotes the processing rate of container $c$, where $f_0$ is the zero speed of the container, e.g., deep sleep or shutdown. The term $\theta_{{\rm idle},c}(t)$ represents the static energy drained by container $c$ in its idle state, $\theta_{{\rm max},c}(t)$ is the maximum energy that container $c$ can consume, and $\Delta$ is the maximum \mbox{per-slot} and \mbox{per-container} processing time ([s]).\\
\indent Within the computing platform, the energy drained due to the active containers, denoted by $\theta_{\rm CP}(t)$, is induced by the \ac{CPU} share that is allocated for the workload, and it is given by:
\begin{equation}
\theta_{\rm CP}(t) = \sum_{c=1}^{C(t)}\theta_{{\rm idle}, c}(t) + \psi_{c}(t) (\theta_{{\rm max},c}(t)-\theta_{{\rm idle}, c}(t)).
\label{eq:cp}
\end{equation}
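A small sketch of this container energy term (our own illustration with hypothetical values) is:
\begin{verbatim}
import numpy as np

def theta_cp(f, f_max=3e9, theta_idle=2.0, theta_max=10.0):
    # Energy drained by the active containers; f holds per-container rates.
    psi = (np.asarray(f) / f_max) ** 2          # utilization of each container
    return np.sum(theta_idle + psi * (theta_max - theta_idle))

print(theta_cp([1.5e9, 2.0e9, 0.8e9]))          # three active containers
\end{verbatim}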
It should be noted that within the edge server there is the virtualization layer with switching capabilities (see Fig.~\ref{fig:remotesite}). Thus, the processing rates are switched from the processing rates of the previous time instance ($t-1$), denoted by $f_c(t-1)$, to the present instance ($t$), denoted by $f_c(t)$. This entails an energy cost, denoted by $\theta_{\rm SW}(t)$, which is defined as:
\begin{equation}
\theta_{\rm SW}(t) = \sum_{c =1}^{C(t)} k_e (f_c(t)-f_c(t-1))^2,
\label{eq:sw}
\end{equation}
where $k_e$ represents the \mbox{per-container} reconfiguration cost caused by a unit-size frequency switching which is limited to a few hundreds of $\SI{}{\metre\joule}$ per $(\rm MHz)^2$. \\
\indent The MEC server can perform TCP/IP computation processing in the network adapter in order to minimize the CPU utilization. Such a process drains energy, denoted by $\theta_{\rm OF}(t)$, which is obtained as:
\begin{equation}
\theta_{\rm OF}(t) = \delta(t) \theta_{\rm idle}^{\rm nic}(t) + \theta_{\rm max}^{\rm nic}(t),
\label{eq:of}
\end{equation}
where $\theta_{\rm idle}^{\rm nic}(t)$ (a non-zero value) is the energy drained by the adapter when it is powered but with no data transfer processes, which leaves an opportunity to reduce this non-zero value to zero. For this, $\delta(t) \in \{0, 1\}$ is the switching status indicator, with $1$ indicating the active state and $0$ representing the idle state. Then, $\theta_{\rm max}^{\rm nic}(t)$ is the maximum energy drained by the network adapter process, and it is obtained in a similar way as in~\cite{steering}. \\
\indent In order to keep the \mbox{intra-communication} delays at a minimum, it is assumed that each container $c$ communicates with the resource scheduler through a dedicated reliable link that operates at the transmission rate of $r_c(t)$ [(bits/s)]. Thus, the power drained by the $c^{\rm th}$ \mbox{end-to-end} connection is given by:
\begin{equation}
P_c^{\rm net}(t) = \Psi_c (\overline{rtt_c} \, r_c(t))^2,
\end{equation}
where, for $c = 1, \dots, C(t)$, $\overline{rtt_c}$ is the average \mbox{round-trip-time} of the $c^{\rm th}$ \mbox{intra-connection}, and $\Psi_c$ (measured in $\SI{}{\watt}$) is the power consumption of the connection when the product of the \mbox{round-trip-time} and the communication rate is unit-valued. Therefore, after $\gamma_c(t)$ has been allocated to container $c$, the corresponding communication energy consumed by the $c^{\rm th}$ link, denoted by $\theta_{\rm LK}(t)$, is obtained as:
\begin{equation}
\theta_{\rm LK}(t) = P_c^{\rm net}(t) (\gamma_c(t)/r_c(t)) \equiv (2\Psi_c/(\tau - \Delta)) (\overline{rtt}_c \gamma_c(t))^2.
\label{eq:lk}
\end{equation}
\noindent In practical application scenarios, the maximum \mbox{per-slot} communication rate of the \mbox{intra-communications} is generally limited by a \mbox{pre-assigned} value $r_{\rm max}$, thus the following hard constraint must hold: $\sum_{c=1}^{C(t)} r_c(t) = \sum_{c=1}^{C(t)} (2\gamma_c(t)/ (\tau - \Delta)) \leq r_{\rm max}$. We also note that each task incurs a \mbox{two-way} execution delay, where each link delay is denoted by $\varrho_c(t) = \gamma_c(t)/r_c(t)$. In this work, we assume that the overall delay equates to $2\,\varrho_{c}(t) + \Delta$.\\
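The per-link energy and the rate constraint can be sketched as follows (our own illustrative code and values):
\begin{verbatim}
import numpy as np

tau, delta_t, r_max = 1800.0, 600.0, 1e6        # slot length, proc. time, rate cap
gammas = np.array([2e5, 3e5, 1e5])              # per-container task sizes (bits)
rates = 2 * gammas / (tau - delta_t)            # r_c = 2 gamma_c / (tau - Delta)
assert rates.sum() <= r_max                     # hard per-slot rate constraint

def theta_lk(gammas, rtt=1e-3, psi=0.5):
    # Intra-communication energy theta_LK summed over the links.
    return np.sum(2 * psi / (tau - delta_t) * (rtt * gammas) ** 2)

print(theta_lk(gammas))
\end{verbatim}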
\indent To dequeue the computational results from the output buffer, denoted by $L_{\rm out}$, optical tunable drivers are used for the data transfer processes. A \mbox{trade-off} between the transmission speed and the number of active drivers per time instance is required to reduce the energy consumption. For data transfers, $|D(t)| \leq D$ drivers are required for transferring $l_d(t) \in L_{\rm out}$. The energy drained by the data transfer process, denoted by $\theta_{\rm LS}(t)$, depends on the energy for utilizing each fast tunable driver, denoted by $m_d(t)$ [(J/s)] (a constant value), the target transmission rate $r_0$, and $L_{\rm out}$. Thus, the energy is obtained as follows:
\begin{equation}
\theta_{\rm LS}(t) = \sum_{d=1}^{D(t)} (m_d(t) l_d(t))/{r_0},
\label{eq:ls}
\end{equation}
where the parameters are obtained similar to~\cite{steering}. \\
\indent To minimize the network traffic from the remote/rural site to the remote clouds, some of the frequently requested internet contents, especially viral contents, are cached locally. The caching process contributes to the energy consumption within the site, denoted by $\theta_{\rm CH}(t)$, and it is obtained as~\cite{steering}:
\begin{equation}
\theta_{\rm CH}(t) = \overline{\lambda} (t)\,(\theta_{\rm TR} (t) + \theta_{\rm CACHE}(t)),
\label{eq:cache}
\end{equation}
where $\theta_{\rm TR} (t)$ represents the power consumption due to transmission processes, $\theta_{\rm CACHE}(t)$ is the power consumption contributed by the caching process with its \mbox{intra-communication}, and $\overline{\lambda} (t)$ is the response time function for viral content~\cite{large_youtube}. \\
\indent Overall, the resulting \mbox{communication-plus-computing} processes incur an energy cost $\theta_{\rm COMP}(t)$, per slot $t$, which is given by Eqs.~(\ref{eq:cp}), (\ref{eq:sw}), (\ref{eq:of}), (\ref{eq:lk}), (\ref{eq:ls}), and (\ref{eq:cache}) as follows:
\begin{equation}
\begin{aligned}
\theta_{\rm COMP}(t) & = \theta_{\rm CP}(t) + \theta_{\rm SW}(t) + \theta_{\rm OF}(t) \\
& + \theta_{\rm LK}(t) + \theta_{\rm LS}(t) + \theta_{\rm CH}(t).
\end{aligned}
\label{eq:mec_cost}
\end{equation}
\subsection{Energy Harvesting and Demand Profiles}
\noindent The rechargeable energy storage device is characterized by its finite energy storage capacity $E_{\rm max}$, and the energy level reports are periodically pushed to the \mbox{DRC-RS} application in the \ac{MEC} server. In this case, the \ac{EB} level $E(t)$ is known, which enables the provisioning of the required communication and computing resources in the form of the required containers, transmission drivers, and the transmission power in the BS. To emulate the profiles, the amount of harvested energy $H(t)$ in time slot $t$ is obtained from \mbox{open-source} solar and wind traces from a farm located in Belgium~\cite{belgium}, as shown in Fig.~\ref{fig:energy_trace}.
\noindent The data in the dataset matches the time slot duration of $\SI{30} {\minute}$ used in this work and is the result of daily environmental records.
In this work, wind energy is selected as a power source during the solar energy \mbox{off-peak} periods. The available \ac{EB} level $E(t + 1)$ at the \mbox{off-grid} site evolves according to the following dynamics:
\begin{equation}
\mbox{$E(t + 1) = \min\{E(t) + H(t) - \theta_{\rm SITE}(t)- a(t), E_{\rm max}\}$},
\label{eq:offgrid}
\end{equation}
where $E (t)$ is the energy level in the battery at the beginning of time slot $t$, $\theta_{\rm SITE}(t)$ represents the site energy consumption, {\it see} Eq.~\eq{eq:siteconsupt} above, and $a(t)$ is the leakage energy. However, it is worth noting that the energy level $E(t)$ is updated at the beginning of time slot $t$, whereas $H(t)$ and $\theta_{\rm SITE}(t)$ are only known at the end of $t$. Thus, the energy constraint at the \mbox{off-grid} site must be satisfied for every time slot: $\theta_{\rm SITE}(t) \leq E(t)$. Therefore, for decision making, the online controller simply compares the received EB level reports with two \mbox{set-points} ($0 < E_{\rm low} < E_{\rm up} < E_{\rm max}$), the lower $E_{\rm low}$ and upper $E_{\rm up}$ energy thresholds. Here, $E_{\rm low}$ is the lowest EB level that the \mbox{off-grid} site should reach and $E_{\rm up}$ corresponds to the desired energy buffer level at the site. If $E(t) < E_{\rm low}$, then the site is said to be energy deficient, and a suitable energy source for each time slot $t$ is selected based on the forecast expectations, i.e., the expected harvested energy $\hat{H}(t)$.
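A compact sketch of the EB dynamics and of the threshold check used for decision making (our own illustrative code; the capacity and threshold values are hypothetical) is:
\begin{verbatim}
E_MAX = 490e3                                   # hypothetical EB capacity (J)
E_LOW, E_UP = 0.3 * E_MAX, 0.7 * E_MAX          # lower/upper set-points

def update_battery(E, H, theta_site, leak=0.0, E_max=E_MAX):
    # One-slot update: E(t+1) = min{E(t) + H(t) - theta_SITE(t) - a(t), E_max}
    return min(E + H - theta_site - leak, E_max)

E = 0.5 * E_MAX
E = update_battery(E, H=40e3, theta_site=25e3)
if E < E_LOW:
    print("energy deficient: pick the energy source using the forecast H_hat(t)")
\end{verbatim}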
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{energy_profiles.eps}
\caption{Example traces of harvested solar and wind energy from~\cite{belgium}.}
\label{fig:energy_trace}
\end{figure}
\section{Problem Formulation}
\label{sec:prob}
\noindent In this section, an optimization problem is formulated to obtain energy-efficient infrastructure sharing and resource management procedures through \mbox{short-term} traffic load and harvested energy forecasting. The overall goal is to enable energy-efficient infrastructure sharing and resource management within remote and rural communication sites, in turn guaranteeing a \ac{QoS} comparable to that of urban areas while reducing the energy consumption of remote/rural sites.
\subsection{Optimization Problem}
\label{opt}
\noindent Within the BS, the allocated bandwidth $W$ is shared between the mobile subscribers of operators A and B, and within the computing platform, the containers (i.e., the computing resources) and the underlying physical resources (e.g., the \ac{CPU}) are shared among the users who offloaded their \mbox{delay-sensitive} workloads.
To address the aforementioned problem, two cost functions are defined. The first, F1 $= \theta_{\rm SITE}(t)$, weighs the energy drained in the BS site due to transmission and computing processes; the second, F2 $= (\gamma^*(t) - L_{\rm in})^2$, accounts for the comparable \ac{QoS}. Regarding this formulation, it is worth noting that F1 tends to push the system towards \mbox{self-sustainability} solutions, while F2 favors solutions where the delay-sensitive load is entirely admitted into the computing platform by the router application, taking into account the expected energy to be harvested. The corresponding (weighted) cost function is defined as:
\begin{equation}
\label{eq:Jfunc_2}
\begin{aligned}
J(\zeta, \sigma,C,D, t) & \stackrel{\Delta}{=} \Upsilon \, \theta_{\rm SITE}(\zeta(t), \sigma(t), C(t), D(t), t)\\
& + \overline{\Upsilon}(\gamma^*(t) - L_{\rm in}(t))^2 \, ,
\end{aligned}
\end{equation}
where $\Upsilon \in [0,1]$ is the weight used to balance the two functions, and $\overline{\Upsilon} \stackrel{\Delta}{=} 1 - \Upsilon$. Hence, starting from the current time slot $t = 1$ up to the finite horizon $T$, time is discretized as $t = 1,2, \dots, T$, and the optimization problem is formulated as follows:
\begin{eqnarray}
\label{eq:objt_2}
\textbf{P1} & : & \min_{\mathcal{N}} \sum_{t=1}^T J(\zeta, \sigma, C,D, t) \\
&& \hspace{-1.25cm}\mbox{subject to:} \nonumber \\
{\rm A1} & : & \sigma(t) \in \{0,1\}, \nonumber \\
{\rm A2} & : & \beta \leq C(t) \leq C, \nonumber \\
{\rm A3} & : & E(t) \geq E_{\rm low} , \nonumber \\
{\rm A4} & : & 0 \leq \gamma_{c}(t) \leq \gamma_{\rm max}, \nonumber \\
{\rm A5} & : & 0 \leq f_{c}(t) \leq f_{\rm max}, \nonumber \\
{\rm A6} & : & \mbox{$r_{\rm min} \leq r_c(t) \leq r_{\rm max}$}, \nonumber\\
{\rm A7} & : & \mbox{$\theta_{\rm SITE}(t) \leq E(t)$}, \nonumber\\
{\rm A8} & : & \max \{2\,\varrho_{c}(t)\} + \Delta = \tau_{\rm max}, \quad t=1,\dots, T \, , \nonumber
\end{eqnarray}
where the set of objective variables to be configured at slot $t$ in the BS system and MEC server is defined as \mbox{$\mathcal{N} \stackrel{\Delta}{=} \{\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t)\}$}. These settings handle the transmission and computing activities subject to the following constraints. Constraint A1 specifies the BS operation status (either {\it power saving} or {\it active}); A2 forces the required number of containers, $C(t)$, to be always greater than or equal to a minimum number \mbox{$\beta \geq 1$}, so that mission-critical communications can always be handled. Constraint A3 ensures that the \ac{EB} level is always above or equal to the preset threshold $E_{\rm low}$, to guarantee {\it energy \mbox{self-sustainability}} over time. Furthermore, A4 bounds the maximum workload of each running container $c$, with $c = 1,\dots, C(t)$, and A5 represents a \mbox{hard-limit} on the corresponding \mbox{per-slot} and \mbox{per-VM} processing time. A6 forces $r_c(t)$ to fall in the desired range [$r_{\rm min}, r_{\rm max}$] of transmission rates, A7 ensures that the energy consumption at the site is bounded by the available energy in the EB, and A8 offers hard \ac{QoS} guarantees within the computing platform. From \textbf{P1}, it is noted that there exists a \mbox{non-convex} component $P_c^{\rm net}(t)$, from $\theta_{\rm LK}(t)$. In this case, geometric programming (GP) can be used to convert $\theta_{\rm LK}(t)$ into a convex function, similarly to~\cite{steering}. Thus, in order to solve {\bf P1} in~\eq{eq:objt_2}, the \ac{LLC} approach~\cite{hayes_2004}, the GP technique, and heuristics are used to obtain the feasible system control inputs $\eta (t) = (\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t))$ for $t=1,\dots,T$. It should also be noted that~\eq{eq:objt_2} can be solved iteratively at any time slot $t \geq 1$, by simply redefining the time horizon as $t^\prime = t, t+1, \dots, t+T-1$.
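To make the per-slot objective of Eq.~\eq{eq:Jfunc_2} concrete, the short Python sketch below evaluates the weighted cost over a prediction horizon; the function and argument names are illustrative assumptions, not the actual solver interface.
\begin{verbatim}
def per_slot_cost(theta_site, gamma_star, L_in, upsilon=0.5):
    """J(t) = upsilon*theta_SITE(t) + (1 - upsilon)*(gamma*(t) - L_in(t))^2."""
    return upsilon * theta_site + (1.0 - upsilon) * (gamma_star - L_in) ** 2

def horizon_cost(theta_sites, gamma_stars, L_ins, upsilon=0.5):
    """Objective of P1: sum of the per-slot costs over t = 1, ..., T."""
    return sum(per_slot_cost(th, g, l, upsilon)
               for th, g, l in zip(theta_sites, gamma_stars, L_ins))
\end{verbatim}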
\subsubsection{Feasibility and QoS guarantees}
Regarding the feasibility of the problem, the following formal result holds.\\
\noindent\textbf{Proposition 1.} Feasibility conditions\\
\indent \textit{The following two inequalities:}
\begin{equation}
(r_{\rm max}/2)(\tau - \Delta) \geq L_{\rm in}
\end{equation}
\begin{equation}
\sum_{c=1}^{C(t)} f_c(t) \Delta \geq r_{\rm min}
\end{equation}
\textit{guarantees that the infrastructure sharing and resource reconfiguration problem is feasible}. \qquad\qquad\qquad\qquad\qquad\qquad $\square$
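As a minimal illustration (the helper name and argument list are assumptions), the two conditions of Proposition 1 can be checked per slot as follows.
\begin{verbatim}
def p1_is_feasible(r_max, r_min, tau, delta, L_in, f_rates):
    """Check the two feasibility conditions of Proposition 1."""
    cond_admission  = (r_max / 2.0) * (tau - delta) >= L_in
    cond_processing = sum(f_rates) * delta >= r_min
    return cond_admission and cond_processing
\end{verbatim}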
Since the reported conditions assure that P1 admits a solution, we then consider the corresponding QoS properties. In this regard, A6 and A8 lead to the following hard bounds on the resulting \mbox{communication-plus-computing} delay.\\
\noindent\textbf{Proposition 2.} Hard \ac{QoS} guarantees\\
\indent\textit{Firstly, the feasibility conditions of Proposition 1 must be met. Next, we let random variables measure the following: the random queue delay of the input queue $\tau_{IQ}$, the service time of the input queue $\tau_{SI}$, the queue delay of the output queue $\tau_{OQ}$, and the service time of the output queue $\tau_{SO}$. Thus, the following QoS guarantees hold:}
\textit{the random total delay ($\tau_{\rm tot} \stackrel{\Delta}{=} \tau_{IQ} + \tau_{SI} + \tau_{OQ} + \tau_{SO}$) induced by the computing platform is limited (in a hard way) up to:}
\begin{equation}
\tau_{\rm tot} \leq ((L_{\rm in} + L_{\rm out})/ r_{\rm min}) + 2.
\end{equation}
Thus, the reported QoS guarantee leads to the conclusion that the remote/rural site can handle \mbox{delay-sensitive} workloads while meeting the bound in A8.
\subsection{Infrastructure Sharing and Resource Allocation}
\label{infra}
\noindent In this subsection, the predictions for the BS traffic load and energy consumption, the description of the remote/rural site system dynamics, and the proposed online \mbox{controller-based} algorithm are presented.
\subsubsection{Prediction of exogenous processes}
\label{predic}
\noindent Two exogenous processes are considered in this work: the harvested energy $H(t)$ and the BS traffic loads $L(t)$. In order to generate the predictions ($\hat{H}(t), \hat{L}(t)$), \ac{LSTM} neural networks~\cite{lstmlearn} were adopted. The \mbox{LSTM-based} predictor has been trained to output the forecasts for the required number of future time slots $T$. The trained LSTM network consists of an input layer, a single hidden layer of $40$ neurons, and an output layer; training was performed for $80$ epochs with a batch size of $4$. For training and testing purposes, the dataset was split into $70\%$ for training and $30\%$ for testing. As the performance measure of the model, the \ac{RMSE} is used.
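A minimal Keras sketch of this predictor is given below; the univariate sliding-window preparation and the mean-squared-error loss are assumptions made to keep the example self-contained, while the stated hyper-parameters ($40$ hidden units, $80$ epochs, batch size $4$, $70/30$ split) follow the text.
\begin{verbatim}
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=8):
    """Build (samples, lookback, 1) windows and one-step-ahead targets."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    return X[..., np.newaxis], series[lookback:]

series = np.random.rand(1000)   # placeholder for a normalized L(t) or H(t) trace
X, y = make_windows(series)
split = int(0.7 * len(X))       # 70% training / 30% testing

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(40, input_shape=(X.shape[1], 1)),  # single hidden layer
    tf.keras.layers.Dense(1),                               # one-step forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=80, batch_size=4, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
\end{verbatim}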
\subsubsection{Remote/Rural site system dynamics}
\label{rurdynamics}
\noindent In order to effectively manage the remote/rural site, an adaptive implementation of the controller is developed. Its purpose is to compute the infrastructure sharing and resource configuration solutions \mbox{on-the-fly}. For this purpose, an online \mbox{controller-based} algorithm is proposed and is outlined in {\bf Algorithm \ref{tab:genm}} below.
\begin{small}
\begin{algorithm}[h!]
\begin{tabular}{l l}
{\bf Input:} & $s(t)$ (current state) \\
{\bf Output:} & $\eta^{*}(t)$ (control input vector)\\
01: & \hspace{-1cm} Parameter initialization\\
& \hspace{-1cm} ${\mathcal G}(t) = \{s(t)\}$ \\
02: & \hspace{-1cm} {\bf for} ($k$ within the prediction horizon of depth $T$) {\bf do}\\
& \hspace{-1cm}\quad - $\hat{L}(t+k)$:= forecast the workload \\
&\hspace{-1cm}\quad - $\hat{H}(t+k)$:= forecast the energy\\
& \hspace{-1cm}\quad - ${\mathcal G}(t+k) = \emptyset$ \\
03: & \hspace{-1cm}\quad {\bf for} (each $s(t)$ in ${\mathcal G}(t+k)$) {\bf do}\\
& \hspace{-1cm}\qquad - generate all reachable states $\hat{s}(t+k)$\\
& \hspace{-1cm}\qquad - ${\mathcal G}(t+k) = {\mathcal G}(t+k) \cup \{\hat{s}(t+k)\}$\\
04: & \hspace{-1.1cm} \quad\quad {\bf for} (each $\hat{s}(t+k)$ in $\mathcal G(t+k)$) {\bf do}\\
& \hspace{-1.1cm}\qquad\quad - calculate the corresponding $\theta_{\rm SITE}(\hat{s}(t+k))$\\
& \hspace{-1.1cm}\qquad\quad taking into account of $\zeta(t)$, and $l_d(t)$ from $L_{\rm out}(t)$\\
& \hspace{-1.1cm} \quad\quad {\bf end for}\\
& \hspace{-1.1cm}\quad\quad {\bf end for}\\
& \hspace{-1cm} \quad {\bf end for}\\
05: & \hspace{-1cm} - obtain a sequence of reachable states yielding\\
& \hspace{-1cm}\quad the best system input\\
06: & \hspace{-1cm} {$\eta^{*}(t):=$ control leading from $s(t)$ to $\hat{s}_{\min}$}\\
07: & \hspace{-1cm} {\bf Return $\eta^{*}(t)$}
\end{tabular}
\caption{DRC-RS Algorithm Pseudocode}
\label{tab:genm}
\end{algorithm}
\end{small}
\noindent At this point, it should be noted that at time slot $t$ the system state vector is $s(t) = (\zeta(t), \sigma(t), C(t), D(t), E(t))$, and the applied input vector that drives the system towards the desired behaviour is denoted by $\eta^*(t) = \{\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t)\}$. These inputs perform bandwidth sharing, adaptive BS power transmission, autoscaling and reconfiguration of containers, and tuning of the optical drivers. The system behavior is described by the \mbox{discrete-time} \mbox{state-space} equation, adopting the \ac{LLC} principles~\cite{hayes_2004}:
\begin{equation}
s(t + 1) = \Phi(s(t), \eta(t)) \, ,
\end{equation}
where $\Phi(\cdot)$ is a behavioral model that captures the relationship between $(s(t),\eta(t))$ and the next state $s(t + 1)$. This relationship accounts for the amount of energy drained, $\theta_{\rm SITE}(t)$, and harvested, $H(t)$, which together lead to the next buffer level $E(t+1)$ through Eq.~\eq{eq:offgrid}. The \ac{DRC-RS} algorithm finds the best control action vector $\eta^*(t)$ that yields the desired system behaviour within the remote/rural site. Note that $P_c^{\rm net}(t)$ is obtained using the CVXOPT toolbox, while $\gamma_c(t)$ and $C(t)$ are obtained following the procedure outlined in Remark 1 of~\cite{steering}. The entire process is repeated at every time slot $t$, when the controller can adjust the behavior given the new state information. The state values of $s(t)$ and $\eta(t)$ are measured and applied at the beginning of time slot $t$, whereas the offered load $L(t)$ and the harvested energy $H(t)$ are accumulated during the time slot and their values become known only at the end of it. This means that, at the beginning of time slot $t$, the system state at the next time slot $t+1$ can only be estimated, which is formally written as:
\begin{equation}
\hat{s}(t + 1) = \Phi(s(t),\eta(t)) \,.
\label{eq:state_forecast}
\end{equation}
In this regard, it is worth noting that the control actions are taken after exploring only a limited prediction horizon, yielding a limited number of possible operating states. In order to ensure system stability, we rely on the notion that a system is said to be stable under control if, for any state, it is always possible to find a control input that forces it closer to the desired state or within a specified neighborhood of it~\cite{llcprediction}.
\subsubsection{Dynamic Resource Controller for Remote/Rural Sites}
\label{alg}
\noindent The edge network management algorithm pseudocode is outlined in Algorithm~\ref{tab:genm} above and is based on the LLC principles, where the controller obtains the best control action $\eta^*(t)$. Starting from the {\it initial state}, the controller constructs, in a \mbox{breadth-first} fashion, a tree comprising all possible future states up to the prediction depth $T$. The algorithm proceeds as follows: \\
\begin{table} [t]
\caption{System Parameters.}
\center
\begin{tabular} {|l|l|}
\hline
{\bf Parameter} & {\bf Value} \\
\hline
Microwave backhaul power, $\theta_{\rm bk}$ & $\SI{50}{\watt}$\\
BS operating power $\theta_0$, & $\SI{10.6}{\watt}$\\
Max. number of containers, $C$ & $20$\\
Min. number of containers, $\beta$ & $1$ \\
Time slot duration, $\tau$ & $\SI{30} {\minute}$\\
Container $c$ (idle state), $\theta_{{\rm idle}_c}(t)$ & $\SI{4} {\joule}$\\
Container $c$ (max), $\theta_{{\rm max},c}(t)$ & $\SI{10} {\joule}$\\
Reconfiguration cost, $k_e$ & $ 0.005 \rm J/(\rm MHz)^2$\\
NIC in idle state, $\theta_{\rm idle}^{\rm nic}(t)$ & $13.1 \rm J$\\
Max. allowed processing time, $\Delta$ & $\SI{0.8} {\second}$\\
Processing rate set, $\{f_c(t)\}$ & $\{0,50,70,90,105\}$\\
Bandwidth, $W$ & $1 {\rm MHz}$\\
Max. allocated $c$ workload $\gamma_{\rm max}$ & 10 MB\\
Max. number of drivers, $D$ & $6$\\
Noise spectral density, $N_0$ & $-174 \, {\rm dBm/Hz}$\\
Driver energy, $m_d(t)$ & $1 \, \rm J/s$\\
Target transmission rate, $r_0$ & $1 \, \rm Mbps$\\
Leakage energy, $a (t)$ & $2\, \mu \rm J$\\
Energy storage capacity, $E_{\rm max}$ & $\SI{490} {\kilo\joule}$\\
Lower energy threshold, $E_{\rm low}$ & $30$\% of $E_{\rm max}$\\
Upper energy threshold, $E_{\rm up}$ & $70$\% of $E_{\rm max}$
\\
\hline
\end{tabular}
\label{tab_opt}
\end{table}
\indent A search set $\mathcal G$ consisting of the current system state is initialized (line 01), and it is accumulated as the algorithm traverses the tree (line 03), accounting for predictions, accumulated workloads at the output buffer, past outputs and controls, and operating intervals. The set of states reached at prediction depth $t+k$ is referred to as $\mathcal G(t+k)$ (line 02). Given $s(t)$, the traffic load $\hat{L}(t+k)$ and harvested energy $\hat{H}(t+k)$ are estimated first (line 02), and the next set of reachable control actions is generated by applying the accepted workload $\gamma^{*}(t+k)$, the energy harvested, and the shared bandwidth fraction $\zeta (t+k)$. The cost function corresponding to each generated state $\hat{s}(t+k)$ is then computed (line 04), where $\hat{s}(t+k)$ takes into account $l_d$ as observed from $L_{\rm out}(t)$. Once the prediction horizon has been explored, a sequence of reachable states yielding the minimum energy consumption is obtained (line 05). The control action $\eta^{*}(t)$ corresponding to $\hat{s}(t+k)$ (the first state in this sequence) is provided as input to the system while the rest are discarded (line 06). The process is repeated at the beginning of each time slot $t$.
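The breadth-first exploration described above can be summarized by the following Python sketch; state generation, forecasting, and the cost model are passed in as callables, and all names are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
def drc_rs_step(s_t, forecast_L, forecast_H, reachable_states, cost, T=3):
    """One DRC-RS decision: explore reachable states up to depth T and
    return the first control action of the cheapest trajectory."""
    frontier = [(0.0, s_t, None)]   # (accumulated cost, state, first action)
    for k in range(1, T + 1):
        L_hat, H_hat = forecast_L(k), forecast_H(k)
        next_frontier = []
        for acc, state, first in frontier:
            for action, nxt in reachable_states(state, L_hat, H_hat):
                next_frontier.append(
                    (acc + cost(nxt), nxt,
                     first if first is not None else action))
        frontier = next_frontier
    return min(frontier, key=lambda item: item[0])[2]   # eta*(t)
\end{verbatim}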
\section{Performance Evaluation}
\label{sec:eval}
\noindent In this section, some selected numerical results for the scenario of Section~\ref{sec:sys} are shown. The parameters that were used in the simulations are listed in Table~\ref{tab_opt} above.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width = \columnwidth]{trafficpredictions.eps}
\caption{One-step ahead predictive mean value for $L(t)$.}
\label{fig:bs_load}
\end{subfigure}
\quad
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width = \columnwidth]{prediction_profiles.eps}
\caption{One-step ahead predictive mean value for $H(t)$.}
\label{fig:energy_load}
\end{subfigure}
\centering
\caption{One-step online forecasting for both $L(t)$ and $H(t)$ patterns.}
\label{fig:patterns}
\end{figure}
\subsection{Simulation setup}
A BS empowered with computation capabilities deployed in a rural/remote area is considered in this setup. Our time slot duration $\tau$ is set to $\SI{30} {\minute}$ and the time horizon is set to $T = 3$ time slots. For simulation, Python is used as the programming language.
\subsection{Numerical results}
\textit{Data preparation:} The information from the mobile and energy traces is aggregated to the chosen time slot duration. The mobile traces are aggregated from a $\SI{10}{\minute}$ observation interval to $\tau$, and the wind and solar traces from a $\SI{15}{\minute}$ observation interval to $\tau$. The datasets are readily available in a public repository (\textit{see}~\cite{traces}).\\
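A minimal pandas sketch of this aggregation step is shown below; the file and column names are placeholders, and summation per slot is an assumption (an average could be used instead, depending on the trace semantics).
\begin{verbatim}
import pandas as pd

def aggregate_to_slots(csv_path, value_col, slot="30min"):
    """Resample a timestamped trace (10- or 15-minute samples) to 30-minute slots."""
    df = pd.read_csv(csv_path, parse_dates=["timestamp"], index_col="timestamp")
    return df[value_col].resample(slot).sum()

# e.g. traffic = aggregate_to_slots("telecom_traffic.csv", "load")
#      solar   = aggregate_to_slots("solar_trace.csv", "harvested_energy")
\end{verbatim}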
In Fig.~\ref{fig:patterns}, the real and predicted values for the traffic load of operators A and B and for the harvested energy are shown. Here, the forecasting routine tracks each value and predicts it one step ahead. The selected prediction results shown are for operators A and B, solar, and wind. Table~\ref{tab:pred} below shows the average \ac{RMSE} of the normalized harvested energy and traffic load processes ($L_A, L_B$), for different time horizon values $T \in \{1,2,3\}$. In the table, $H_{\rm wind} (t)$ represents the forecasted values of the energy harvested from wind turbines and $H_{\rm solar} (t)$ those of the energy harvested from solar panels. From the obtained results, prediction variations are observed between $H(t)$ and $L(t)$ when comparing the average RMSE. The measured accuracy is deemed good enough for the proposed optimization.
\begin{table}[H]
\footnotesize
\centering
\caption{Average prediction error (RMSE) for harvested energy and
traffic load processes, both normalized in [0,1].}
\begin{tabulary}{1.0\textwidth}{|L|L|L|L|}
\hline
& {$T = 1$} & {$T = 2$} & {$T = 3$} \\
\hline
$L_A (t)$ & 0.070 & 0.090 & 0.011\\ \hline
$L_B (t)$ & 0.050 & 0.070 & 0.010\\ \hline
$H_{\rm wind}(t)$ & 0.011 & 0.013 & 0.016\\ \hline
$H_{\rm solar}(t)$ & 0.050 & 0.070 & 0.090\\
\hline
\end{tabulary}
\label{tab:pred}
\end{table}
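For reference, the entries of Table~\ref{tab:pred} are average RMSE values of the normalized series; a minimal sketch of that computation is given below, where min-max normalization over the true series is an assumption.
\begin{verbatim}
import numpy as np

def normalized_rmse(y_true, y_pred):
    """RMSE after min-max normalizing both series with the true-series range."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    lo, hi = y_true.min(), y_true.max()
    yt, yp = (y_true - lo) / (hi - lo), (y_pred - lo) / (hi - lo)
    return float(np.sqrt(np.mean((yt - yp) ** 2)))
\end{verbatim}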
The \mbox{DRC-RS} algorithm is benchmarked against another one, named Resource Reservation Manager (RRM), which is inspired by the backup reservation agreement from~\cite{strategicsharing}. In the RRM, the network resources are reserved per time slot based on a \mbox{set-point} threshold percentage. Both algorithms make use of the learned information.\\
\begin{figure}[h]
\centering
\includegraphics[width = \columnwidth]{bsenergy.eps}
\caption{Energy savings versus number of users connected to the BS.}
\label{fig:bsusers}
\end{figure}
Figure~\ref{fig:bsusers} shows the average energy savings obtained within the off-grid system. Here, the number of users connected to the remote site is increased from $|\nu(t)| = 5$ to $50$, with an incremental step size of $5$. The obtained energy savings are with respect to the case where the BS site is dimensioned for maximum expected capacity (maximum value of $\theta_{\rm COMM}(t), \theta_{\rm COMP}(t)$). From the results, as expected, it is observed that the energy savings decrease as the number of mobile users connected to the remote site increases. The \mbox{DRC-RS} outperforms the RRM algorithm. In this regard, we note that the communication site will accept users as long as the energy harvesting projections are positive.\\
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{joint.eps}
\caption{Mean energy savings for the remote/rural site system.}
\label{fig:rmsite}
\end{figure}
Then, Fig.~\ref{fig:rmsite} shows the average energy savings for the edge system. Here, the BS group size is set to $|\nu(t)| = 20$ and the obtained energy savings are with respect to the case where no energy management procedures are applied, i.e., the BS is dimensioned for maximum expected capacity (maximum value of $\theta_{\rm SITE} (t)$, $\forall t$) and the MEC server provisions the computing resources for the maximum expected computation workload (maximum value of $\theta_{\rm MEC} (t)$, with $C = 20\, \text{containers}, \forall t$). The average results of \mbox{DRC-RS} ($k_e = 0.05, \gamma_{\rm max} = 10$ MB) show energy savings of $51 \%$, while RRM achieves $43 \%$ on average. The obtained numerical results show the effectiveness of the BS management procedure, the autoscaling and reconfiguration of the computing resources, and the on/off switching of the fast tunable laser drivers, coupled with foresighted optimization.
\section{Conclusions}
\label{sec:concl}
The challenge of providing connectivity to remote/rural areas will be one of the pillars of future mobile networks. To address this issue, in this paper we present an infrastructure sharing and resource management mechanism for handling \mbox{delay-sensitive} workloads within a remote/rural site.
Numerical results, obtained with \mbox{real-world} energy and traffic load traces, demonstrate that the proposed algorithm achieves mean energy savings of $51 \%$, compared with the $43 \%$ obtained by our benchmark algorithm. Moreover, the energy that can be saved decreases as the number of users connected to the BS increases, while more users can be served as long as green energy is available.
The energy saving results are obtained with respect to the case where no energy management techniques are applied in the remote site.
\section*{Data Availability}
In this paper, open-source datasets for the mobile network (MN) traffic load, solar, and wind energy have been used. The details are as follows: (1) the real MN traffic load traces used to support the findings of this study were obtained from the Big Data Challenge organized by Telecom Italia Mobile (TIM) and the data repository has been cited in this article. (2) The real solar and wind traces used to support the findings of this study have also been cited in this article.
\bibliographystyle{IEEEtran}
\scriptsize
\section{Introduction}
Extreme solar storms can be defined as energetic solar events related to
large-scale disturbances in the Earth's magnetosphere, called geomagnetic events \citep{Cliver04,Koskinen06, Echer11a,Echer11b,Echer13,Gonzalez11b}.
{ Before the launch of satellites, the activity of the Sun was recorded by ground-based instruments observing in visible light
(e.g. see the Meudon database ``BASS2000'' with spectroheliograms registered from 1909 until today; see examples in Figure \ref{spot}). Surveys in white light, in H$\alpha$, and in the Ca II H and K lines allow the solar cycle activity to be studied by tracking the sunspots and studying their size and complexity \citep{Waldmeier1955,McIntosh1990,Eren17}. The enhancement of emission was used as a good proxy for detecting flares \citep{Carrington1859}. However, the detection of flares was limited by the spatial and temporal resolution of the observations.}
Recently, different approaches have succeeded in quantifying the intensity of some historical events using different magnetometer stations around the world.
The analysis of magnetic recordings made as early as the middle of the nineteenth century by
ground stations allowed us to clarify the importance of several extreme events { \citep{Tsurutani2003,Cliver04,Lakhina2008,Cid13,Cliver2013}.}
During the XX$^{th}$ century, several important events with $Dst < -700$ nT were observed after intense flares and connected to aurora.
Exploring historical extreme events shows all the problems encountered when one aims at understanding the phenomena from one end to the other.
{ It is difficult to identify the solar source of extreme geoeffective events without continuous observations of the Sun and without quantitative estimates of the energy released during the solar events.
The {\it Geostationary Operational Environmental Satellites} (GOES) have registered the global soft X-ray emission (1--8 \AA) of the Sun since the 1980s. The intensity of flares is classified by the letters X, M, and C, which correspond to energy releases of 10$^{-4}$, 10$^{-5}$, and 10$^{-6}$ W m$^{-2}$ respectively. The extreme historical solar events, for which only the size of the sunspots and ``the magnetic crochet'' recorded on the Greenwich magnetogram (for example for the Carrington event) or ionospheric disturbances are known, were associated with extreme geomagnetic events by comparison with recent events.
It is interesting to read the papers of \citet{Tsurutani2003,Cliver2013}, where several historical events, e.g. Sept. 1859, Oct. 1847, Sept. 1849, and May 1921, have been discussed and classified.}
With the {\it SOlar and Heliospheric Observatory }\citep[SOHO;][]{Fleck1995}, launched in 1995, and its on-board spectro-imagers and coronagraphs, and more recently with the {\it Solar TErrestrial RElations Observatory} { \citep[STEREO A and B 2006;][]{Wuelser2004,Russel2008} } and its { COR} and { HI} coronagraphs, whose field of view can reach the Earth in particular conjunctions (see the HI HELCATS website),
the solar sources of geoeffective events could be identified with more accuracy. A new era was opened for { forecasting geomagnetic disturbances by being able to follow the solar events in multiple wavelengths, and particularly the coronal mass ejections from the Sun to the Earth. This is the new science called ``Space Weather''.}
Intense flares responsible for geoeffective events are commonly associated with Solar Energetic Particle (SEP) events and/or coronal mass ejections (CMEs). Several minutes after the flares, very energetic particles (SEPs) may enter the Earth's atmosphere, affecting astronauts or electronic parts in satellites.
However, concerning geomagnetic disturbances, CMEs { can be as geoeffective as the energetic particles when their arrival trajectory is oriented towards the Earth and when their speed is large enough \citep{Gopalswamy10a,Gopalswamy10b,Wimmer14}. SEP ejections produce particle radiation }with large fluence; however, only a few SEPs occur during each solar cycle, while CMEs have an occurrence rate between { 2 and 3 per week at solar minimum and between 5 and 6 per day at solar maximum, these numbers also depending on the coronagraphs used \citep{StCyr2000,Webb2012,Lugaz2017}. They originate from highly-sheared magnetic field regions, which can be regarded as large magnetic flux ropes carrying strong electric currents.} They are statistically more likely to lead to geomagnetic disturbances when their solar sources are facing the Earth \citep{Bothmer07,Bein11,Wimmer14}. According to their speed, their interplanetary signatures (ICMEs) may reach the Earth one to five days after the flare \citep{Yashiro06,Gopalswamy09,Bein11}.
{ Halo CMEs observed with the white-light SMM coronagraph were first named ``global CMEs'' \citep{Dere2000} and already suspected to be responsible for geoeffective events \citep{Zhang1988}. }Recent studies confirmed the geoeffectivity of halo CMEs, which generally form magnetic clouds (MC) (e.g. Bocchialini et al 2017, Solar Physics in press). The MCs are associated with extreme storms ($Dst < -200$ nT) and intense storms ($-200 < Dst < -100$ nT) \citep{Gonzalez07,Zhang07}, while the moderate storms ($-100 < Dst < -50$ nT) studied in solar cycle 23 were found to be associated with co-rotating regions ($47.9 \%$), with ICMEs or magnetic clouds (MC) ($20.6 \%$), with sheath fields ($10.8 \%$), or with combinations of sheath and ICME ($10\%$) \citep{Echer13}.
{ However, magnetic clouds may not be so effective if they are directed away from the Earth, like the fast ICME of July 2012 \citep{Baker2013}, or if the magnetic field of the cloud arrives at the magnetosphere with an orientation towards the North, as in the case of August 1972 \citep{Tsurutani1992}. In August 1972 a huge sunspot group, McMath region 11976 (see Figure \ref{spot}), crossed the disk and was the site of energetic flares, and consequently shocks were detected at 2.2 AU by Pioneer 10 \citep{Smith1976}. The estimated velocity of the ejecta was around 1700 km/s, which is nearly the highest transit speed on record. \citet{Tsurutani2003} estimated its magnetic field to be around 73 nT, which is also a huge number. But the Dst index indicated only a relatively weak recovery phase, like that of a moderate storm \citep{Tsurutani1992}.
Nowadays, the {\it in situ} parameters of the solar wind, including the interplanetary magnetic field (IMF),
are monitored at L1 by the ACE spacecraft \citep{Chiu1998} with its magnetic field (MAG) experiment or similar instruments. They clearly indicate the passage of the satellite through an ICME or magnetic cloud by the changes in the solar wind speed and the reversed sign of the magnetic components Bx and By. The ICME is more geoeffective if the IMF-Bz component is negative, indicating a strong coupling with the magnetosphere. }
{ We can conclude that even if extreme solar storms do not necessarily initiate extreme geomagnetic events, extreme geomagnetic events are nearly always produced by extreme solar storms. And extreme solar storms are most of the time issued from the biggest sunspot groups, which produce the most energetic events \citep{Sammis2000}.}
The paper is organized as follows. After a historical review of large sunspot groups observed on the Sun and related geomagnetic storms (Section 2), we present statistical results on stellar and solar flares according to the characteristics of the spots (flux, size) (Section 3). Section 4 is focused on an MHD model (OHM) predicting the capability of the Sun to produce extreme events. { Finally, the conclusion is given in Section 5.}
\begin{figure}
\centering
\mbox{
\includegraphics[width=12cm]{present_1947_2003.png}
}
\hspace{0.5cm}
\mbox{
\includegraphics[width=12cm]{present_2003_Nov17_1972_3.png}
}
\hspace{-5 cm}
\caption{Full disk spectroheliograms from the Observatoire de Paris in Meudon. ({\it top panels}) The largest sunspot groups ever reported: ({\it left}) on April 4, 1947, with no geoeffective effect; ({\it right}) on October 28, 2003. The AR 10486 in the southern hemisphere { led} to an X17 flare and consequently a geomagnetic disturbance with Dst=-350 nT. ({\it bottom panels}):
({\it left}) AR 10501 on November 17, 2003, observed in Ca II K1v with an inserted H$\alpha$ image of the active region. The huge eruptive filament surrounding the AR initiated
the largest Dst of the 23$^{rd}$ solar cycle (Dst=-427 nT). ({\it right}) McMath region 11976, a large sunspot group, source of flares and ejected energetic particles in August 1972
(spectroheliograms from the Meudon database ``BASS2000'').}
\label{spot}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=10cm]{Kilcik_figure.png}
}
\caption{{ CME number and speed per solar Carrington rotation related to the sunspot number and to indexes of geoeffectivity (Dst and Ap).}
The dashed line shows the sunspot number, the bold solid line the CME speed index, the dotted line the CME number, the double line the Dst index, and the thin solid line the Ap index (adapted from \citet{Kilcik11}).}
\label{CME}
\end{figure}
\begin{figure*}
\centering
\mbox{
\includegraphics[width=14cm]{Aulanier_graph.png}
}
\caption{
Magnetic flux in the dominant polarity of the bipole, and magnetic energy released during the flare, calculated as a function of the maximum
magnetic field and the size of the photospheric bipole. The x and + signs correspond to extreme solar values. The former is unrealistic and the
latter must be very rare (from \citet{Aulanier13}).}
\label{OHM}
\end{figure*}
\section{Historical view of solar sources of geoeffectivity}
The Carrington event of September 1, 1859, well known to be associated with one of the largest solar sunspot groups and one of the strongest
flares \citep{Carrington1859,Hodgson1859}, had the largest magnetic signature ever
observed at European latitudes, with the consequent aurora visible at low { geographic} latitudes ($\pm18^\circ$) 17.5 hours later. Using the transit time, \citet{Tsurutani2003} proposed that the $Dst$ value decreased down to $-1\ 760$ nT during this event. { The Colaba (Bombay) record allowed a more precise determination, around -1600 nT \citep{Cliver2013,Cid13}. This value is more than twice the value of the next most extreme geomagnetic events.}
Revisiting this event by analysing ice core nitrates and $^{10}Be$ data,
\citet{Cliver2013} claimed that it reached only $-900$ nT. Nevertheless, it { seems} to be the strongest geoeffective event registered up to now. A correlation between solar energetic proton fluence (above $30$ MeV) and flare size based on modern data shows that this event can be classified as an extreme solar event, with an X-ray flare having an estimated magnitude larger than $X10$.
All these extreme registered events, 12 episodes since the Carrington event, are solar activity dependent \citep{Gonzalez11a} (rough association). They occurred mainly during the maximum of activity of the solar cycle, with its two bumps, and during the secondary peak of the declining phase of the solar cycle.
Between 1876 and 2007, the largest sunspot area overlaid by large bright flare ribbons was observed in the Meudon spectroheliograms in Ca II K1v and H$\alpha$ between July 20-26 1946 \citep{Dodson1949}.
A well observed flare event occurred on July 25 1946 at 17:32 UT and caused a huge geomagnetic storm 26.5 hours later.
The size of the sunspot was equivalent to 4200 millionths of the solar hemisphere (MSH) and the ribbon surface was around 3570 MSH \citep{Toriumi16}.
The Carrington AR sunspot group seemed to be smaller than that one according to the sunspot drawings.
The next year an even larger sunspot was visible in { the spectroheliogram of } April 5, 1947, with a size reaching 6000 MSH, but it had no geoeffective effect (Figure \ref{spot}). The flare looked extended and powerful but was
not accompanied by coronal mass ejections. It could be a case similar to the more recent event observed in October 2014. The AR 12192 presented a sunspot area of 2800 MSH and was the site of several flares (6 X- and 24 M-class) \citep{Sun15,Thalmann15}. These two active regions are really exceptional. The AR 12192 did not launch any CMEs. Different interpretations have been proposed: the region would possess not enough stress or not enough free energy, or the CME eruptive flux rope would not have reached the threshold height of the torus instability \citep{Zuccarello15}.
Although there are on average two CMEs per day, only some of them are geoeffective. In October and November 2003, the largest sunspot groups (AR 10486 with an area of 3700 MSH) crossed the disk and were the sites of extreme events (Figure \ref{spot}). X17, X10 and X35 flares were reported on October 28, October 29 and November 4, respectively. However, the most extreme geomagnetic storm of the whole Solar Cycle 23, with $Dst =-422$ nT, was linked to an M9.6-class flare on November 20, 2003 \citep{Gopalswamy05,Moestl08,Marubashi12}. The origin of the solar event was in the region AR 10501 and has been associated with the eruption of a large filament \citep{Chandra10} (Figure \ref{spot}).
The AR 10501 did not have the largest sunspot area; the flare and CME were instead due to the injection of opposite magnetic helicity by newly emerging flux, which destabilized the large filament and led to a full halo CME (speed = 690 km/s) and a magnetic cloud in the heliosphere. The size of the sunspot is an important parameter, but it is not sufficient to produce an extreme solar storm.
Since the geoeffectivity is not straightforward, in order to forecast major storms it is important to understand the nature (magnetic strength and helicity) and the location of the solar sources, the propagation of the CMEs through the interplanetary medium, and their impacts on the magnetosphere/ionosphere system. Statistical studies of solar and magnetic activities during solar cycle 23 have made it possible to associate CMEs and geomagnetic disturbances, providing long lists of CMEs with their characteristics, i.e. their width, velocity, and solar sources \citep{Zhang07,Gopalswamy10a, Gopalswamy10b}. They showed that a CME is more likely to give rise to a geoeffective event if it is a fast halo CME (with an apparent width around $360^\circ$) with a solar source close to the solar central meridian.
In some cases, the proposed sources came from active regions close to the limb. \citet{Cid12} proposed to revisit this subset of events: in order to associate every link in the Sun-Earth chain, they not only considered the time window of each CME-ICME, but also carefully revised every candidate at the solar surface.
The result was that a CME coming from a solar source close to the limb cannot really be geoeffective (i.e., associated with an at least moderate and a fortiori intense storm) if it does not belong to a complex series of other events. {
Possible deflection of a CME in the corona as well as in interplanetary space may change the geoeffectiveness of a CME \citep{Webb2012}.
Deflections of up to a few tens of degrees have been reported, even during the SMM mission \citep{Mein1982,Bosman2012,Kilpua2009,Zuccarello2012,Isavnin2013,Mostl2015}.}
In the statistical analysis of Bocchialini et al 2017, it has been shown that a CME deflected from its radial direction by more than 20 degrees produced an exceptional geoeffective event. { Moreover, the orientation of the magnetic field of the magnetic cloud ($Bz <0$) is also an important parameter for an extreme geoeffective event (see the Introduction).}
\section{Characteristics of super flares}
Free magnetic energy stored in the atmosphere is released through global solar activity including CMEs (kinetic energy), flares and SEPs (thermal and non-thermal energy).
There is not really a physical reason to expect a relationship between the different categories of released energy.
\citet{Emslie12} estimated all energy components for 38 solar eruptive flares observed between 2002 and 2006. The maximum non-potential energy in an active region reached 3$\times 10^{33}$ erg and could therefore power all flare activity in the region. Only 0.5 percent of CMEs have a kinetic energy reaching 3 $\times 10^{32}$ erg, while the mean kinetic energy of 4133 CMEs is around 5 $\times 10^{29}$ erg. They found a weak relationship between the estimations of the different energies, due to large uncertainties. However, the relationship appears more reliable for extreme events (the big flare syndrome).
However, the systematic study of geoeffective events occurring through the solar maximum activity year (2002), already mentioned in Section 1, showed that only 2 of the 12 X-class flares were related to Sudden Storm Commencement (SSC) led events in the magnetosphere; the other SSCs were related to M- and even C-class flares (Bocchialini et al 2017). The solar cycle variation of the {\it Dst} does not follow the general trend of the sunspot number during the declining phases of solar cycles, but is comparable to the trend of CME speeds and CME numbers with the secondary peak \citep{Kilcik11} (Figure \ref{CME}). This behaviour confirmed the importance of CMEs in the geoeffectivity.\\
However, a statistical analysis of flare intensity showed a relationship with some categories of active regions. Flares were related to large sunspot active regions (categories A, B, F) in the Zurich classification \citep{Eren17}. Class F consists of large ARs with sunspot fragmentation, commonly indicating the existence of strong shear.
This study confirmed the finding, based on the historical events, that large geoeffective effects are linked to the existence of large sunspot groups \citep{Carrington1859,Dodson1949}. { The extreme events should be related to large sunspots, as for the ``Halloween'' events of October-November 2003 in AR 10486 (Figure \ref{spot} top right). The flare of November 4, 2003, is generally considered to be
the most intense SXR event of the space age, with an estimated
peak SXR classification ranging from X25 to X45 \citep{Gopalswamy05,Cliver2013}. However, the most geoeffective event occurred on November 20, 2003. The AR 10501 did not have a large sunspot, and the solar extreme event was a coronal mass ejection with large kinetic energy. This event is an example of large geoeffectivity related not to the sunspot size (Figure \ref{spot} bottom row) but to the magnetic shear and magnetic helicity injection \citep{Chandra10}.}
Recently, super flares (energy $10^{34}$ to $10^{36}$ erg) have been discovered in Sun-like stars (slowly rotating stars) by the new Kepler space satellite \citep{Maehara12}. A debate started about the possibility of observing such super flares on the Sun. \citet{Shibata13} forecasted that one such super flare could occur every 800 years. Stars are suspected to have large spots, and a large sunspot on the Sun with a flux of 2 $\times$ 10$^{23}$ Mx would not be impossible and would correspond to an energy of 10$^{34}$ erg \citep{Shibata13}.
\citet{Toriumi16} made a statistical analysis of the new solar Cycle 24 flares between May 2010 and April 2016. Considering 51 flares exhibiting two flare ribbons (20 X- and 31 M-class), they determined an empirical relationship, in logarithmic scale, between the size of sunspots (S$_{spot}$) in flaring active regions and the magnetic flux $\Phi_{spot}$:\\
$\log \Phi_{spot} = 0.74 \times \log S_{spot} + 20$, with some uncertainties. \\
Considering the largest spots ever observed on the Sun (July 1946 and October 2014), they extrapolated this relationship and estimated a maximum flux of 1.5$\times 10^{23}$ Mx. They did not take into account the fact that all the energy of the spots can be transformed into thermal and non-thermal energy rather than into kinetic energy (no CME was launched in October 2014, for example).
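As a rough, purely illustrative check (the relation above is quoted with rounded coefficients and explicit uncertainties, so only the order of magnitude is meaningful), the extrapolation can be evaluated directly:
\begin{verbatim}
import math

def spot_flux_Mx(S_spot_MSH):
    """Empirical relation quoted above: log10(Phi) = 0.74*log10(S_spot) + 20."""
    return 10 ** (0.74 * math.log10(S_spot_MSH) + 20)

# Largest spot areas discussed in the text (MSH); with the rounded coefficients
# the flux approaches 1e23 Mx for the largest areas, within the uncertainties.
for area in (2800, 4200, 6000):
    print(area, "MSH ->", f"{spot_flux_Mx(area):.1e}", "Mx")
\end{verbatim}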
\begin{figure*}
\centering
\mbox{
\includegraphics[width=12cm]{present_sun_star.png}
}
\caption{Schematic representation of several modeled sunspot groups without faculae on the solar disk, with their corresponding modeled flare energies computed with the OHM simulation. { A sunspot group consists of several pairs of sunspots. In each group a pair of sunspots (surrounded by red curve) representing 1/3 of the sunspot group area, is modeled in the simulation. The size of the grey areas is normalized to the size of the spots considered in the simulation (adapted from \citet{Aulanier13}).}}
\label{star}
\end{figure*}
\section{Prediction of extreme solar storms}
It appears that MHD simulations of emerging flux could be used in a systematic survey to investigate the process of energy storage and to find the relationship between sunspot size and CME eruptive events.
The {\it Observationally driven High order scheme Magnetohydrodynamic code} (OHM) \citep{Aulanier05,Aulanier2010} simulation has been used as a tool to model very energetic events on the Sun, e.g. a large super flare ($10^{36}$ erg), by varying the characteristics of the sunspots in a large
parameter space \citep{Aulanier13}.
The model consists of a bipole with two rotating sunspots, which is equivalent to creating a strong shear with cancelling flux along the polarity inversion line. The 3D numerical simulation solved the full MHD equations for the mass density, the fluid velocity u, and the magnetic field B under the plasma $\beta$ = 0 assumption. The calculations were performed in non-dimensionalized
units, using $\mu$ = 1.
The magnetic field diffusion favored the expulsion of the flux rope. The parameter space study led to graphs of the values of magnetic flux and energy as a function of the sunspot size in MSH units and the stress of the field (Figure \ref{OHM}).
The magnetic flux $\Phi$ and the total flare energy E are defined as follows:\\
\noindent $\phi$ = 42 $(\frac{B_z}{8T})$ $(\frac{L^{bipole}}{5m})^2$ Wb \\
\noindent E = $\frac{40}{\mu}(\frac{B_z}{8T})^2(\frac{L^{bipole}}{5m})^3$ J
\\
\\
B$_z$ is the strength of the magnetic field in the bipole (sunspot), and L is the size of the bipole.
The problem is the estimation of the value of L.
L$^2$ can be computed as the area of an active region with faculae (L=200 Mm). The maximum value for the flux is then $\phi$ = 10$^{23}$ Mx and for the energy E = 3 $\times$ 10$^{34}$ erg, which falls in the range of stellar superflares \citep{Maehara12}. However, L should be reduced to 1/3 because the stress of the field concerns only a small part of the PIL \citep{Aulanier13}. The maximum energy could therefore not exceed 10$^{34}$ erg. These results come from a self-consistent model with sheared flux leading to a CME, with no approximation.
On the other hand, the estimations of \citet{Toriumi16} are very empirical, mixing different observations not related to one another. Each estimation has been overestimated. For example, the volume of the active region concerned by the flare has been estimated by the product of S$_{ribbon}$ (surface area of the ribbons) and the distance between the ribbons \citep{Toriumi16}. However, the uncertainty on the estimation of the magnetic field in this volume can lead to an overestimation by one to two orders of magnitude { according to the f value introduced in their equations}. Taking unrealistic values of B and flux leads to unrealistic energy values never observed in our era \citep{Emslie12}.
\section{Conclusion}
{ Extreme solar events are commonly produced in active regions having a strong magnetic reservoir (high magnetic field and stress). They are defined as very powerful X-ray flares, coronal mass ejections with high kinetic energy directed towards the Earth leading to magnetic clouds arriving at the magnetosphere with a favourable orientation (B$_z$ negative), and strong ejections of energetic particles (SEPs). Large sunspot groups with fragmentation are good candidates for extreme solar storms \citep{Sammis2000}.}
With our Sun as it is today, it seems impossible to get larger sunspots and super-flares with energy $>$ 10$^{34}$ erg. { Figure \ref{star} shows
different sunspot groups. In each of them a pair of sunspots surrounded by red curves represents the bipole used as the boundary condition of the OHM simulation. The energy mentioned below each pair is the result of the simulation. With huge sunspots we obtain large energies, as recorded for stars by the Kepler satellite. Such large spots
have never been observed on the Sun.}
We should not forget that the simulation concerns a bipole with rotating spots imposing a strong shear along the PIL. The shear is a necessary ingredient for the expulsion of CMEs, in the simulation as well as in the observations. In order to produce stronger flares, Sun-like stars should have a much stronger dynamo than the Sun and a rotation rate exceeding several days. The prediction of having extreme solar storms every 800 years would be very speculative.
Acknowledgements\\
The author would like to thank the organizers of the meeting, Drs. Katya Georgieva and Kazuo Shiokawa, for inviting me to Varna for the VarSITI meeting in June 2016. I want to thank G. Aulanier for his fruitful comments on this work.
\section{ Introduction}
The conjectured duality between
the type IIB superstring theory on the AdS$_5\times$S$^5$ space
(AdS superstring)
and
$D=4,~{\cal N}=4$ Yang-Mills theory
\cite{M,GKP,W}
has driven not only
studies of a variety of background theories
but also studies of basic aspects such as integrability.
The approach of the pp-wave background superstring theory \cite{MTpp}
was explored
by Berenstein, Maldacena and Nastase \cite{BMN} and
developed in, for example
\cite{GKP2,FT}.
For further development, Mandal, Suryanarayan and Wadia
pointed out the relevance of integrability \cite{MSW},
and the Bethe ansatz approach was explored
by Minahan and Zarembo \cite{MZ} and, for example, in \cite{B,DNW,BFST}.
The integrability
is a powerful property expected in large N QCD \cite{Lipatov}
and shown to exist in
the IIB superstring theory on the AdS$_5\times$S$^5$ space
by Bena, Polchinski and Roiban \cite{BPR}.
The integrability provides hidden symmetry generated by
an infinite number of
conserved ``non-local" charges \cite{LP,BIZZ}
as well as
an infinite number of conserved ``local" charges
\cite{pol2} which are related by a spectral parameter
at different points.
Related aspects on the integrability of the AdS superstring were
discussed in \cite{new18}.
Recently the conformal symmetry of AdS superstrings
was conjectured due to the $\kappa$ symmetry
\cite{polyakov}.
The classical conformal symmetry of the AdS superstring theory
also leads to an infinite number of conserved Virasoro operators.
The naive questions are
how the conformal generator is related to the infinite number of conserved
``local" currents, and how many independent conserved currents exist.
For principal chiral models the stress-energy tensor
is written by trace of the square of the conserved flat current;
for reviews see refs.
\cite{EHMM,MSW}.
For the AdS superstring theory
the Wess-Zumino term and the $\kappa$ symmetry make a difference.
Recently issues related to
the integrability and the conformal symmetry
of the AdS superstring theory have been discussed
\cite{MP,Mh,AAT}.
In this paper we will obtain the expression of the conformal generator,
which is the stress-energy tensor related to
the lowest spin ``local" current,
and we calculate the higher spin ``local" currents
to clarify the independent components.
The AdS space contains the Ramond/Ramond flux, which causes
difficulties for
the standard Neveu-Schwarz-Ramond
(NSR) formulation of the superstring theory.
The AdS superstring was described in the Green-Schwarz (GS) formalism
by Metsaev and Tseytlin based on the coset
PSU(2,2$\mid$4)/[SO(4,1)$\times$SO(5)]
\cite{MT}.
Later Roiban and Siegel reformulated it in terms of the unconstrained
GL(4$\mid$4) supermatrix coordinate based on an alternative
coset GL(4$\mid$4)/[Sp(4)$\times$GL(1)]$^2$ \cite{RS}.
In this formalism the local Lorentz symmetry is gauged,
and it turns out that this treatment makes the
separation into $+/-$ modes (right/left moving modes) easier.
Furthermore, the fermionic constraint, including first class and second class parts,
is necessary for the
separation of the fermionic modes into $+/-$ modes.
As the first step toward the CFT formulation of the AdS superstring,
the affine Sugawara construction \cite{Halpern},
the Virasoro algebra and the algebra of currents carrying the
space-time indices are also listed.
The organization of this paper is as follows:
in the next section the notation is introduced.
In section 3 we analyze the superparticle in the AdS$_5\times$S$^5$ space,
and the relation between
the reparametrization constraint and the conserved right invariant (RI) current
is given.
In section 4 we analyze the superstring in the AdS$_5\times$S$^5$ space,
and the infinite number of conserved currents are presented
both from the conformal point of view and
from the integrability point of view.
We show that the stress-energy tensor
is written as
the ``supertrace" of the square of the RI current
as the lowest spin ``local" current.
Then we calculate higher spin ``local" currents
to clarify independent components of the ``local" currents.
\par\vskip 6mm
\section{ GL(4${\mid}$4) covariant coset}
We review the Roiban-Siegel formulation of the AdS$_5\times$S$^5$ coset
\cite{RS} and follow the notation in \cite{HKAdS}.
The coset GL(4$\mid$4)/[GL(1)$\times$Sp(4)]$^2$ is used instead of
PSU(2,2$\mid$4)/[SO(4,1)$\times$SO(5)] for the linear realization of the
global symmetry after Wick rotations and introducing the auxiliary variables.
A coset element $Z_M{}^A$
is an unconstrained matrix defined on a world-volume
carrying indices $M=(m,\bar{m}),~A=(a,\bar{a})$ with
$m,\bar{m},a,\bar{a}=1,\cdots,4$.
The left invariant (LI) current, $J^L$, is invariant under the left action
$Z_M{}^A~\to~\Lambda_M{}^NZ_N{}^A
$ with
a global parameter GL(4$\mid$4)$\ni \Lambda$
\begin{eqnarray}
(J^L)_A{}^B=(Z^{-1}d Z)_A{}^B~~.
\end{eqnarray}
The LI current satisfies the flatness condition by definition
\begin{eqnarray}
dJ^L=-J^LJ^L~~~.
\end{eqnarray}
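For completeness, this follows in one line from $d(Z^{-1})=-Z^{-1}(dZ)Z^{-1}$ and $d^{2}Z=0$:
\begin{eqnarray}
dJ^L=d(Z^{-1}dZ)=d(Z^{-1})\,dZ=-Z^{-1}(dZ)Z^{-1}dZ=-J^LJ^L~~~.\nonumber
\end{eqnarray}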
The right invariant (RI) current, $J^R$, is invariant under the right action
$Z_M{}^A~\to~Z_M{}^B\lambda_B{}^A
$ with a local parameter
[Sp(4)$\otimes$GL(1)]$^2$ $\ni\lambda$
\begin{eqnarray}
(J^R)_M{}^N=({\cal D}ZZ^{-1})_M{}^N~~,~~({\cal D}Z)_M{}^A\equiv
dZ_M{}^A+Z_M{}^BA_B{}^A
\end{eqnarray}
with
\begin{eqnarray}
A~\to~\lambda A\lambda^{-1}+(d\lambda) \lambda^{-1}~~,
\end{eqnarray}
and
\begin{eqnarray}
dJ^R=J^RJ^R+Z(dA-AA)Z^{-1}~~~.\label{dAAA}
\end{eqnarray}
Originally $A$ is bosonic, [Sp(4)$\otimes$GL(1)]$^2$
$\ni A$, but we will show that
the fermionic constraint, i.e. the $\kappa$ symmetry,
gives fermionic components of $A$.
The conjugate momenta are introduced
\begin{eqnarray}
\{
Z_M{}^A,\Pi_B{}^N
\}=(-)^A\delta_B^A\delta_M^N~~~
\end{eqnarray}
as the graded Poisson bracket and
$\{q,p\}=-(-)^{qp}\{p,q\}$.
There are also two types of differential operators;
the global symmetry generator (left action generator), $G_M{}^N$,
and
the supercovariant derivatives (right action generator), $D_A{}^B$,
\begin{eqnarray}
G_M{}^N=Z_M{}^A\Pi_A{}^N~~,~~D_A{}^B=\Pi_A{}^M Z_M{}^B~~~.\label{DGDG}
\end{eqnarray}
In our coset approach $8\times 8=64$ variables for $Z_M{}^A$ are introduced
and auxiliary variables are eliminated by the following constraints
corresponding to the stability group [Sp(4)$\times $GL(1)]$^2$,
\begin{eqnarray}
({\bf D})_{(ab)}=(\bar{\bf D})_{(\bar{a}\bar{b})}={\rm tr}~{\bf D}=
{\rm tr}~\bar{\bf D}\equiv 0~~~\label{DSp4GL1}~~~,
\end{eqnarray}
where the bosonic components are denoted by boldfaced characters as
${\bf D}_{ab}\equiv D_{ab}$ and $\bar{\bf D}_{\bar{a}\bar{b}}\equiv
D_{\bar{a}\bar{b}}$ of \bref{DGDG}.
The number of the coset constraints is $10+10+1+1=22$,
so the number of the coset parameters is $64-22=42$,
of which $10$ are bosonic and $32$ are fermionic.
The $[Sp(4)]^2$ invariant metric is anti-symmetric,
and a matrix is decomposed into
its trace part, anti-symmetric-traceless part and symmetric part,
denoted by
\begin{eqnarray}
{\bf M}_{ab}=-\frac{1}{4}\Omega_{ab}{\bf M}^c{}_c+{\bf M}_{\langle ab\rangle}+{\bf M}_{(ab)}\equiv - \frac{1}{4}\Omega~{\rm tr}{\bf M}
+\langle {\bf M}\rangle+({\bf M})~~~,
\end{eqnarray}
with $M_{(ab)}=\frac{1}{2}(M_{ab}+M_{ba})$,
and similar notation for the barred sector.
Both $G_M{}^N$ and $D_A{}^B$ in \bref{DGDG} satisfy GL(4$\mid$4) algebra.
If we focus on the AdS superalgebra part,
the global symmetry generators $G_M{}^N$ satisfies the global AdS superalgebra
\begin{eqnarray}
\left\{Q_{A\alpha},Q_{B,\beta}\right\}&=&-2\left[
\tau_3{}_{AB}P_{\alpha\beta} +\epsilon_{AB}
M_{\alpha\beta}\right]\label{QQPM}\\
Q_{1\alpha}&=&G_{m\bar{m}}+G_{\bar{m}m}
\nn\\
Q_{2\alpha}&=&G_{m\bar{m}}-G_{\bar{m}m}
\nn\\
P_{\alpha\beta}
&=&{G}_{\langle mn\rangle}\Omega_{\bar{m}\bar{n}}
-G_{\langle\bar{m}\bar{n}\rangle}\Omega_{mn}\cdots {\rm total~ momentum}\nn\\
M_{\alpha\beta}&=&-G_{(mn)}\Omega_{\bar{m}\bar{n}}
+G_{(\bar{m}\bar{n})}\Omega_{mn}\cdots{\rm total~ Lorentz}\nn~~~.
\end{eqnarray}
The right hand side of \bref{QQPM} can not be diagonalized
by the real SO(2) rotation of $Q_A$'s
because of the total Lorentz charge term with $\epsilon_{AB}$.
On the other hand the local AdS supersymmetry algebra is given by
\begin{eqnarray}
\left\{d_{A\alpha},d_{B,\beta}\right\}&=&2\left[
\tau_3{}
_{AB}
\tilde{p}_{\alpha\beta} +
\epsilon_{AB}m_{\alpha\beta}\right]\\
d_{1\alpha}&=&{D}_{a\bar{a}}+{\bar{D}}_{\bar{a}a}
\nn
\\
d_{2\alpha}&=&{D}_{a\bar{a}}-{\bar{D}}_{\bar{a}a}
\nn
\\
\tilde{p}_{\alpha\beta}&=&
{\bf D}
_{\langle ab\rangle}\Omega_{\bar{a}\bar{b}}
-{\bar{\bf D}}
_{\langle \bar{a}\bar{b}\rangle}\Omega_{ab}
\cdots {\rm local~LI~momentum}\nn
\\m_{\alpha\beta}&=&
-{\bf D}_{(ab)}\Omega_{\bar{a}\bar{b}}
+\bar{\bf D}
_{(\bar{a}\bar{b})}\Omega_{a{b}}\cdots{\rm local~Lorentz}\nn
~~~.
\end{eqnarray}
In our coset approach the local Lorentz generator
is a constraint \bref{DSp4GL1},
so the
local supercovariant derivatives $d_{A\alpha}$
can be separated as follows;
\begin{eqnarray}
\left\{d_{1\alpha},d_{2\beta}\right\}=
2 m_{\alpha\beta}\equiv 0~,~~
\left\{d_{1\alpha},d_{1\beta}\right\}=
2 \tilde{p}_{\alpha\beta}~~,~~
\left\{d_{2\alpha},d_{2\beta}\right\}=
-2\tilde{p}_{\alpha\beta}
\end{eqnarray}
Although the global superalgebra cannot be separated into
irreducible algebras in the AdS background,
the local superalgebra can be separated into
irreducible sets in the GL(4${\mid}$4) covariant coset approach.
This property allows a simpler description of the AdS superstring,
as in the flat case, at least at the classical mechanics level.
\par\vskip 6mm
\section{ AdS Superparticle}
We begin with the action for a superparticle in the AdS$_5\times$S$^5$
\begin{eqnarray}
&S=\displaystyle\int d\tau~\displaystyle\frac{1}{2e}
\left\{-{\bf J}_\tau^{\langle ab\rangle}{\bf J}_{\tau, \langle ab\rangle}
+\bar{\bf J}_\tau^{\langle \bar{a}\bar{b}\rangle}\bar{\bf J}_{\tau, \langle \bar{a}\bar{b}\rangle}
\right\}&~~~.\label{RS}
\end{eqnarray}
Here we omit the superscript $L$ for the LI currents; their components are denoted as
\begin{eqnarray}
(J^L_{~\mu})_A{}^B
=\left(
\begin{array}{cc}
{\bf J}_{\mu,}{}_a{}^b&j_{\mu,}{}_a{}^{\bar{b}}\\\bar{j}_{\mu,}{}_{\bar{a}}{}^b&\bar{\bf J}_{\mu,}{}_{\bar{a}}{}^{\bar{b}}
\end{array}
\right)~~~.
\end{eqnarray}
From the definition of the canonical conjugates,
$
\Pi_A{}^M={\delta S}/{\delta \partial_\tau Z_M{}^A} (-)^A
$,
we have
the following primary constraints \cite{HKAdS}
\begin{eqnarray}
{\cal A}_{\rm P}=\frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2\right]=0~~,~~
D_{a\bar{b}}=\bar{D}_{\bar{a}{b}}=0~~~\label{ApDD}
\end{eqnarray}
with
\begin{eqnarray}
D_A{}^B=\left(
\begin{array}{cc}{\bf D}_a{}^b&D_a{}^{\bar{b}}\\
\bar{D}_{\bar{a}}{}^b&\bar{\bf D}_{\bar{a}}{}^{\bar{b}}
\end{array}
\right)~~~.
\end{eqnarray}
The Hamiltonian is chosen as
\begin{eqnarray}
{\cal H}=-{\cal A}_{\rm P}=- \frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2\right]
\end{eqnarray}
and the $\tau$-derivative is determined by
the Poisson bracket with ${\cal H}$,
$\partial_\tau{\cal O}=\{{\cal O},{\cal H}\}$.
The fact that half of the fermionic constraints are second class
requires the Dirac bracket in general.
Fortunately the Dirac bracket with the Hamiltonian is equal to
the Poisson bracket, because the fermionic
constraints are ${\cal H}$ invariant.
The LI current is calculated as
\begin{eqnarray}
J^L_\tau=Z^{-1}\partial_\tau Z=\left(
\begin{array}{cc}
\langle{\bf D}\rangle&0\\0&\langle \bar{\bf D}\rangle
\end{array}
\right)~~,~~\partial_\tau J^L=0~~~.
\end{eqnarray}
The RI current, generating the global GL(4$\mid$4) symmetry,
is given as
\begin{eqnarray}
J^R_\tau\equiv Z\Pi=Z\left(J^L_\tau+A_\tau\right)Z^{-1}~~,~~
A_\tau=
\left(
\begin{array}{cc}
({\bf D})-\frac{1}{4}\Omega {\rm tr}{\bf D}&D
\\\bar{D}&(\bar{\bf D})-\frac{1}{4}\Omega {\rm tr}\bar{\bf D}
\end{array}
\right)~~~.\label{JRSP}
\end{eqnarray}
Although the stability group does not contain fermionic components originally,
the fermionic components of the gauge connection $A$ in \bref{JRSP} are induced.
Note that ``$A$" denotes the gauge connection,
to be distinguished from
the reparametrization
constraint ``${\cal A}$".
The RI current is conserved, since the Hamiltonian is written
in terms of LI currents, which are manifestly invariant under the global symmetry
\begin{eqnarray}
\partial_\tau J^R=0~~~.
\end{eqnarray}
The $\kappa$ symmetry generators are the half of the fermionic constraints
obtained by projecting with the null vector as
\begin{eqnarray}
{\cal B}_{\rm P}{}_a{}^{\bar{b}}=\langle{\bf D}\rangle_a{}^b D_b{}^{\bar{b}}
+D_a{}^{\bar{a}}\langle\bar{\bf D}\rangle_{\bar{a}}{}^{\bar{b}}~~,~~
\bar{\cal B}_{\rm P}{}_{\bar{a}}{}^{{b}}=\langle\bar{\bf D}\rangle_{\bar{a}}{}^{\bar{b}}\bar{D}_{\bar{b}}{}^{{b}}
+\bar{D}_{\bar{a}}{}^{{a}}\langle{\bf D}\rangle_{a}{}^{b}~~~.
\end{eqnarray}
If we construct the closed algebra including these $\kappa$ generators
while keeping
the bilinears of the fermionic constraints,
the $\tau$-reparametrization constraint ${\cal A}_{\rm P}$ is modified to \cite{HKAdS}
\begin{eqnarray}
\tilde{\cal A}_{\rm P}&=&\frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2
+2D\bar{D}
\right]~~~~~.
\end{eqnarray}
This expression appears in the Poisson bracket of
${\cal B}$ with $\bar{\cal B}$,
when we keep the bilinears of the fermionic constraints.
The RR flux is responsible for the last term ``$D\bar{D}$".
A term bilinear in the constraints
does not change the Poisson bracket,
since its bracket with an arbitrary variable
gives terms proportional to the constraints,
which vanish on the constrained surface.
In other words ${\cal A}_{\rm P}$ is ambiguous up to bilinears of the constraints,
and the $\kappa$ invariance fixes this ambiguity.
On the original coset constrained surface \bref{DSp4GL1}
it is also rewritten as
\begin{eqnarray}
\tilde{\cal A}_{\rm P}=\frac{1}{2}~{\rm Str}~[D_A{}^B]^2=\frac{1}{2}~{\rm Str}~[J^R_\tau]^2~~~.
\end{eqnarray}
This is the zero-mode contribution of the classical Virasoro constraint
for a superstring
in the AdS$_5\times$S$^5$ background.
\par\vskip 6mm
\section{ AdS Superstring}
\subsection{Conserved currents}
We take the action for a superstring in the AdS$_5\times$S$^5$ given by
\begin{eqnarray}
&S=\displaystyle\int d^2\sigma~\frac{1}{2}\left\{
-\sqrt{-g}g^{\mu\nu}({\bf J}_\mu^{\langle ab\rangle}{\bf J}_{\nu, \langle ab\rangle}
-\bar{\bf J}_\mu^{\langle \bar{a}\bar{b}\rangle}\bar{\bf J}_{\nu, \langle \bar{a}\bar{b}\rangle}
)
+\frac{k}{2}\epsilon^{\mu\nu}(
E^{1/2}j_\mu^{a\bar{b}}j_{\nu, a\bar{b}}
-
E^{-1/2}\bar{j}_\mu^{\bar{a}{b}}\bar{j}_{\nu, \bar{a}{b}}
)\right\}&\nn\\\label{RS}
\end{eqnarray}
where ``$k$" represents the WZ term contribution with $k=1$ and
$E={\rm sdet} Z_M{}^A$.
The consistent $\tau$ and $\sigma$ reparametrization generators \cite{HKAdS} are
\begin{eqnarray}
{\cal A}_\perp&=&{\cal A}_{0\perp} +k~{\rm tr}\left[-E^{1/4}Fj_\sigma+E^{-1/4}\bar{F}\bar{j}_\sigma\right]\nn\\
{\cal A}_\parallel&=&{\cal A}_{0\parallel} +k~{\rm tr}\left[E^{-1/4}F\bar{j}_\sigma-E^{1/4}\bar{F}{j}_\sigma\right]\label{Aperp}
\end{eqnarray}
with the following primary constraints
\begin{eqnarray}
{\cal A}_{0\perp}&=&\frac{1}{2}{\rm tr}\left[
(\langle{\bf D}\rangle^2+\langle{\bf J}_\sigma\rangle^2)-
(\langle\bar{\bf D}\rangle^2+\langle\bar{\bf J}_\sigma\rangle^2)
\right]=0\nn\\
{\cal A}_{0\parallel}&=&{\rm tr}\left[
\langle{\bf D}\rangle\langle{\bf J}_\sigma\rangle-
\langle\bar{\bf D}\rangle\langle\bar{\bf J}_\sigma\rangle
\right]=0\\
F_{a\bar{b}}&=&E^{1/4}D_{a\bar{b}}
+\frac{k}{2}E^{-1/4}(\bar{j}_\sigma)_{\bar{b}a}=0\nn\\
\bar{F}_{\bar{a}{b}}&=&E^{-1/4}\bar{D}_{\bar{a}{b}}+\frac{k}{2}E^{1/4}({j}_\sigma)_{{b}\bar{a}}=0~~~.\label{FermionicF}
\end{eqnarray}
Their Poisson brackets are
\begin{eqnarray}
\left\{{\cal A}_\perp(\sigma),{\cal A}_\perp(\sigma')\right\}&=&
2{\cal A}_\parallel(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\parallel(\sigma)\delta(\sigma-\sigma')\nn\\
\left\{{\cal A}_\perp(\sigma),{\cal A}_\parallel(\sigma')\right\}&=&
2{\cal A}_\perp(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\perp(\sigma)\delta(\sigma-\sigma')\\
\left\{{\cal A}_\parallel(\sigma),{\cal A}_\parallel(\sigma')\right\}&=&
2{\cal A}_\parallel(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\parallel(\sigma)\delta(\sigma-\sigma')\nn~~~.
\end{eqnarray}
The Hamiltonian is chosen as
\begin{eqnarray}
{\cal H}&=&-\int d\sigma {\cal A}_\perp\label{HamiltonianSUST}\\&=&
-\int d\sigma {\rm tr}\left[
\frac{1}{2}\left\{
\langle{\bf D}\rangle^2+\langle{\bf J}_\sigma\rangle^2-
\langle\bar{\bf D}\rangle^2-\langle\bar{\bf J}_\sigma\rangle^2\right\}
+\left(kE^{-1/2}\bar{D}\bar{j}_\sigma-k E^{1/2}Dj_\sigma
+j_\sigma\bar{j}_\sigma\right)
\right]
~~~.\nn
\end{eqnarray}
From now on the $E=1$ gauge is taken using the local GL(1) invariance.
The global GL(1) symmetry is broken by the WZ term.
Using the Hamiltonian in \bref{HamiltonianSUST},
the $\tau$-derivatives of ${\cal A}_{\perp}$ and ${\cal A}_\parallel$ are given as
\begin{eqnarray}
\partial_\tau {\cal A}_\perp=\partial_\sigma {\cal A}_\parallel~~,~~
\partial_\tau {\cal A}_\parallel=\partial_\sigma {\cal A}_\perp~~~.\label{dAdA}
\end{eqnarray}
Although the coset parameter $Z_M{}^A$ does not satisfy
the world-sheet free wave equation, it is essential to introduce
the world-sheet lightcone
coordinates
\begin{eqnarray}
\sigma^\pm=\tau \pm \sigma~~,~~
\partial_\pm=\frac{1}{2}(\partial_\tau \pm \partial_\sigma)~~~.
\end{eqnarray}
The differential equations \bref{dAdA} are rewritten as
\begin{eqnarray}
\partial_- {\cal A}_+=0~~,~~
\partial_+ {\cal A}_-=0~~,~~
{\cal A}_\pm={\cal A}_\perp\pm {\cal A}_\parallel~~~,
\end{eqnarray}
so an infinite set of conserved
currents is obtained,
\begin{eqnarray}
\partial_- \left[f(\sigma^+){\cal A}_+\right]=0~~,~~
\partial_+ \left[f(\sigma^-){\cal A}_-\right]=0~~\label{conformal}
\end{eqnarray}
with an arbitrary function $f$.
Then there exists an infinite number of conserved charges
\begin{eqnarray}
\partial_- \left[\displaystyle\int d\sigma~
f(\sigma^+){\cal A}_+\right]=0~~,~~
\partial_+ \left[\displaystyle\int d\sigma~
f(\sigma^-){\cal A}_-\right]=0~~~.
\end{eqnarray}
On the other hand the integrability of the superstring
provides an infinite number of ``local" charges as well as the
``non-local" charges written down in \cite{HY}.
The LI currents
are given by
\begin{eqnarray}
\left\{\begin{array}{ccl}
J^L_\tau&=&\left(
\begin{array}{cc}
\langle{\bf D}\rangle&-k\bar{j}_\sigma\\-kj_\sigma&\langle \bar{\bf D}\rangle
\end{array}
\right)
=\left(
\begin{array}{cc}
\langle{\bf D}\rangle&
2D-2F\\2\bar{D}-2\bar{F}&\langle \bar{\bf D}\rangle
\end{array}
\right)
\approx
\left(
\begin{array}{cc}
\langle{\bf D}\rangle&
2D\\2\bar{D}&\langle \bar{\bf D}\rangle
\end{array}
\right)\nn\\
J^L_\sigma&=&
\left(
\begin{array}{cc}
{\bf J}_{\sigma}
&j_{\sigma}\\
\bar{j}_{\sigma}&
\bar{\bf J}_{\sigma}
\end{array}
\right)
\end{array}\right.~~~
\end{eqnarray}
where the $\tau$ component is determined by \bref{HamiltonianSUST}.
The LI currents satisfy the flatness condition but
do not satisfy the conservation law.
The RI currents are obtained in \cite{HY} as
\begin{eqnarray}
\left\{\begin{array}{ccl}
J^R_\tau&=&ZDZ^{-1}=Z(J^L_\tau+A_\tau)Z^{-1}\label{RISUST}\\
J^R_\sigma&=&Z(J^L_\sigma+A_\sigma)Z^{-1}~~,~~
J^L_\sigma+A_\sigma=
\left(
\begin{array}{cc}
\langle{\bf J}_{\sigma}\rangle
&\bar{F}+\frac{1}{2}j_{\sigma}\\
F+\frac{1}{2}\bar{j}_{\sigma}&
\langle\bar{\bf J}_{\sigma}\rangle
\end{array}
\right)
\end{array}\right.
\end{eqnarray}
where the gauge connection $A_\mu$ is
\begin{eqnarray}
\left\{\begin{array}{ccl}
A_\tau&=&\left(
\begin{array}{cc}
({\bf D})-\frac{1}{4}\Omega {\rm tr}{\bf D}&-D\\-\bar{D}&(\bar{\bf D})-\frac{1}{4}\Omega {\rm tr}\bar{\bf D}
\end{array}
\right)\nn\\
A_\sigma&=&
\left(
\begin{array}{cc}
-({\bf J}_\sigma)+\frac{1}{4}\Omega {\rm tr}{\bf J}_\sigma&
\bar{F}-\frac{1}{2}j_\sigma
\\F-\frac{1}{2}\bar{j}_\sigma
&-(\bar{\bf J}_\sigma)+\frac{1}{4}\Omega {\rm tr}\bar{\bf J}_\sigma
\end{array}
\right)
\end{array}\right.~~~.
\end{eqnarray}
The fermionic components of $A_\mu$ appear again.
In this paper
the fermionic constraints $F$ and $\bar{F}$
are kept in
the fermionic components of $A_\sigma$,
while they were absent in our previous paper \cite{HY};
the difference reflects the treatment of the constraint bilinear terms.
Then the integrability of the superstring
leads to the current conservation and the flatness condition
for the RI current;
\begin{eqnarray}
\partial_\tau J^R_\tau=\partial_\sigma J^R_\sigma~~,~~
\partial_\tau J^R_\sigma-\partial_\sigma J^R_\tau=
2\left[J^R_\tau, J^R_\sigma\right]~~~\label{dJRdJR}~~~.
\end{eqnarray}
They are rewritten as
\begin{eqnarray}
\partial_-J^R_+=\left[J^R_-,J^R_+\right]~~,~~
\partial_+J^R_-=\left[J^R_+,J^R_-\right]~~,~~
J^R_\pm=J^R_\tau\pm J^R_\sigma~~~.\label{JRpm}
\end{eqnarray}
Taking the supertrace, denoted ``Str", leads to
an infinite number of conserved ``local"
currents, because $J^R_\mu$ are supermatrices,
\begin{eqnarray}
\partial_-~{\rm Str}\left[(J^R_+)^n\right]=0~~,~~
\partial_+~{\rm Str}\left[(J^R_-)^n\right]=0~~,n=1,2,\cdots~~~.\label{JRn}
\label{integrablity}
\end{eqnarray}
This gives an infinite number of conserved ``local" charges
\begin{eqnarray}
\partial_\tau \left[\displaystyle\int d\sigma~
f(\sigma^+){\rm Str}(J^R_+)^n
\right]=0~~,~~
\partial_\tau \left[\displaystyle\int d\sigma~
f(\sigma^-){\rm Str}(J^R_-)^n
\right]=0~~~.\end{eqnarray}
In this way the classical 2-dimensional conformal symmetry and the
integrability of the AdS superstring lead to two infinite sets of
conserved currents,
\bref{conformal} and \bref{integrablity}.
In the following subsections the relation between them is examined.
\subsection{Stress-energy tensor ($n=2$)}
The ``$+/-$" (right/left moving) modes of the RI currents
on the original coset constrained space
\bref{DSp4GL1}
are written as
\begin{eqnarray}
J^R_\pm=
Z\left(
\begin{array}{cc}
\langle{\bf D}_\pm \rangle&D\pm (\bar{F}+\frac{1}{2}j_\sigma)
\\
\bar{D}\pm(F+\frac{1}{2}\bar{j}_\sigma)&\langle\bar{\bf D}_\pm\rangle
\end{array}
\right)Z^{-1}=
Z\left(
\begin{array}{cc}
\langle{\bf D}_\pm\rangle&d_\pm+\frac{1}{2}j_\pm
\\
\pm(d_\pm -\frac{1}{2}j_\pm)
&\langle\bar{\bf D}_\pm\rangle
\end{array}
\right)Z^{-1}\nn\\
\end{eqnarray}
with
\begin{eqnarray}
{\bf D}_\pm={\bf D}\pm{\bf J}_\sigma~~,~~
\bar{\bf D}_\pm=\bar{\bf D}\pm\bar{\bf J}_\sigma~~,~~d_\pm=F\pm\bar{F}~~,~~
j_\pm=j_\tau\pm j_\sigma=-\bar{j}_\sigma\pm j_\sigma
\label{pmpm}
\end{eqnarray}
carrying the LI current indices $AB$.
This is supertraceless, Str$J^R_\pm=0$, so the $n=1$ case of
\bref{JRn} gives just a trivial equation.
Let us look at the $n=2$ case of \bref{JRn},
${\rm Str}\left[(J^R_\pm)^2\right]$.
Then the ``+" sector is written as
\begin{eqnarray}
\frac{1}{2}{\rm Str}\left[(J^R_+)^2\right]&=&
\frac{1}{2}{\rm Str}\left[
\left(
\begin{array}{cc}
\langle{\bf D}_+\rangle&d_++\frac{1}{2}j_+
\\
d_+-\frac{1}{2}j_+
&\langle\bar{\bf D}_+\rangle
\end{array}
\right)^2
\right]\nn\\
&=&\frac{1}{2}{\rm tr}\left[
\langle{\bf D}_+\rangle^2-\langle\bar{\bf D}_+\rangle^2
+2(d_++\frac{1}{2}j_+
)(d_+-\frac{1}{2}j_+
)
\right]\nn\\
&=&{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_+\rangle^2-\langle\bar{\bf D}_+\rangle^2\right)
+j_+d_+\right]~~~.\label{419}
\end{eqnarray}
The ``$-$" sector is
\begin{eqnarray}
\frac{1}{2}{\rm Str}\left[(J^R_-)^2\right]&=&
{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_-\rangle^2-\langle\bar{\bf D}_-\rangle^2\right)
-j_-d_-
\right]~~~.\label{420}
\end{eqnarray}
On the other hand the conformal symmetry generator ${\cal A}_\pm$ is
rewritten from the relation \bref{Aperp} and \bref{pmpm} as
\begin{eqnarray}
{\cal A}_\pm&=&
{\rm tr}
\left[\frac{1}{2}\left(
\langle{\bf D}_\pm \rangle^2-\langle\bar{\bf D}_\pm \rangle^2\right)
\pm j_\pm d_\pm
\right]~=~
\frac{1}{2}{\rm Str}\left[(J^R_\pm)^2\right]
~~~.\label{421}
\end{eqnarray}
If we keep track of the squares of the fermionic constraints,
the closure of the first class constraint set
including the $\kappa$ symmetry
generators,
\begin{eqnarray}
{\cal B}_\pm&=&
\langle{\bf D}_\pm\rangle d_\pm
+d_\pm\langle\bar{\bf D}_\pm\rangle\label{kappaSUST}
~~
\end{eqnarray}
determines the ambiguity up to bilinears of the constraints as
\begin{eqnarray}
\tilde{\cal A}_\pm=
{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_\pm \rangle^2-\langle\bar{\bf D}_\pm \rangle^2\right)
\pm(\frac{1}{2}d_\mp +j_\pm)d_\pm
\right]=
{\cal A}_\pm+{\rm tr}F\bar{F}~~
\end{eqnarray}
obtained in \cite{HKAdS} as a generator of the ${\cal ABCD}$
constraint set
known to exist for a superstring in a flat space
\cite{WSmech,ABCD}.
Then the stress-energy tensor is
\begin{eqnarray}
T_{\pm\pm}\equiv\tilde{\cal A}_\pm
\approx
{\cal A}_\pm
={\rm Str}J^R_\pm J^R_\pm
~~~.\label{423}
\end{eqnarray}
This is the $\kappa$-symmetric stress-energy tensor
in a supersymmetric generalization of
the Sugawara form.
\subsection{Supercovariant derivative algebra}
The existence of conformal invariance is naturally expressed in terms of the
irreducible coset components of the supercovariant derivatives
\cite{HKAdS};
\begin{eqnarray}
\langle{\bf D}_\pm\rangle&=&\langle{\bf D}\rangle\pm\langle{\bf J}_\sigma\rangle
~~,~~
\langle\bar{\bf D}_\pm\rangle~=~\langle\bar{\bf D}\rangle\pm\langle\bar{\bf J}_\sigma\rangle\nn\\
d_\pm&=&F\pm\bar{F}=(D\pm\frac{1}{2}j_\sigma)\pm(\bar{D}\pm\frac{1}{2}\bar{j}_\sigma)
\nn~~~.
\end{eqnarray}
On the constraint surface \bref{DSp4GL1} and \bref{FermionicF}
the $+/-$ sector supercovariant derivatives are separated as
\begin{eqnarray}
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\langle{\bf D}_-\rangle_{cd}(\sigma')\right\}
&=&2\Omega_{\langle c|\langle b}({\bf D})_{a\rangle|d\rangle}
\delta(\sigma-\sigma')
\equiv 0\nn\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
d_{-,c\bar{d}}(\sigma')\right\}
&=&\Omega_{c\langle b}d_{+,a\rangle \bar{d}}
\delta(\sigma-\sigma')=
\Omega_{c\langle b}(F+\bar{F})_{a\rangle \bar{d}}
\delta(\sigma-\sigma')
\approx 0\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
d_{-,c\bar{d}}(\sigma')\right\}
&=&2\left[
\Omega_{ac}(\bar{\bf D})_{\bar{b}\bar{d}}
+\Omega_{\bar{b}\bar{d}}({\bf D})_{ac}
\right]
\delta(\sigma-\sigma')
\equiv 0\nn
\end{eqnarray}
with
analogous relation for the barred sector, $\langle\bar{\bf D}_\pm\rangle$.
The ``+" sector supercovariant derivative algebra is
\begin{eqnarray}
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\langle{\bf D}_+\rangle_{cd}(\sigma')\right\}
&=&2\Omega_{\langle c|\langle b}\Omega_{a\rangle|d\rangle}
\delta'(\sigma-\sigma')+4\Omega_{\langle c|\langle b}
({\bf J}_\sigma)_{a\rangle|d\rangle}
\delta(\sigma-\sigma')\nn\\
&\equiv& 2\Omega_{\langle c|\langle b} \nabla_{a\rangle|d\rangle}
\delta(\sigma-\sigma')
\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
d_{+,c\bar{d}}(\sigma')\right\}
&=&2\left[
\Omega_{\bar{b}\bar{d}}\langle{\bf D}_+\rangle_{ac}
-\Omega_{ac}\langle\bar{\bf D}_+\rangle_{\bar{b}\bar{d}}
\right]
\delta(\sigma-\sigma')
\nn\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
d_{+,c\bar{d}}(\sigma')\right\}
&=&
\Omega_{c\langle b}(d_-+2j_+)_{a\rangle \bar{d}}
\delta(\sigma-\sigma')\approx
2\Omega_{c\langle b}\omega_{+,a\rangle \bar{d}}
\delta(\sigma-\sigma')
\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&-2\Omega_{\bar{b}\bar{d}}\Omega_{ac}\delta'(\sigma-\sigma')
+2\left[
-\Omega_{\bar{b}\bar{d}}({\bf J}_\sigma)_{ac}
-\Omega_{ac}(\bar{\bf J}_\sigma)_{\bar{b}\bar{d}}\right]\delta(\sigma-\sigma')\nn\\
&\equiv& -2\nabla_{\bar{b}\bar{d};ac}\delta(\sigma-\sigma')\nn\\
\label{sucovder}\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&
\Omega_{c\langle b}\omega_{-,a\rangle \bar{d}}
\delta(\sigma-\sigma')
\nn\\
\left\{\omega_{+,a\bar{b}}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&0~~~\nn
\end{eqnarray}
where
\begin{eqnarray}
\omega_\pm&=&j_\pm=-\bar{j}_\sigma\pm j_\sigma~~~.
\end{eqnarray}
This is comparable with the flat case: the
non-local term $\partial_\sigma\delta(\sigma-\sigma')$ of the flat case is replaced by the
local Lorentz (~[Sp(4)]$^2$~) covariant non-local term
$\nabla_\sigma\delta(\sigma-\sigma')$.
The fifth Poisson bracket,
$\left\{\langle{\bf D}_+\rangle,
\omega\right\}
$,
vanishes in the flat case but not in the AdS case.
For a superstring in a flat space
the consistency of the $\kappa$ symmetry constraint
requires
the first class constraint set,
namely ``${\cal ABCD}$" constraint,
which are bilinear of the supercovariant derivatives
\cite{WSmech,ABCD}.
For the AdS case the situation is completely the same,
despite this anomalous term \cite{HKAdS}.
\par\vskip 6mm
\subsection{``Local" currents ($n\geq 3$)}
Next let us look at the $n\geq 3$ cases
of
the infinite set of conserved ``local" currents \bref{JRn}.
For simplicity we focus on the ``+" sector and replace
$``+"$ by $``~\hat{~}~"$, as
$J_+ \to \hat{J}$.
The first three powers of the RI current,
$(J^R)^n$ with $n=1,2,3$, are listed below:
\begin{eqnarray}
\left[Z^{-1}\hat{J}^R Z\right]_{AB}&=&\left(
\begin{array}{cc}
\langle\hat{\bf D}\rangle_{\langle ab\rangle}&
(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
\\
\pm(\hat{d} -\frac{1}{2}\hat{j})_{b\bar{a}}
&\langle\hat{\bar{\bf D}}\rangle_{\bar{a}\bar{b}}
\end{array}
\right)
\end{eqnarray}
\begin{eqnarray}
\left[Z^{-1}(\hat{J}^R)^2 Z\right]_{AB}&=&-\frac{1}{4}
\left(
\begin{array}{cc}
\Omega_{ab}~{\rm tr}(
\langle\hat{\bf D}\rangle^2
+\hat{j}\hat{d})
&\\&
\Omega_{\bar{a}\bar{b}}~{\rm tr}(
\langle\hat{\bar{\bf D}}\rangle^2
-\hat{j}\hat{d})
\end{array}
\right)\\
&&+
\left(
\begin{array}{cc}
(\hat{d}^2-\frac{1}{4}\hat{j}^2)_{(ab)}
+\langle \hat{j}\hat{d}\rangle_{\langle ab \rangle}
&\hat{\cal B}_{a\bar{b}}
+\frac{1}{2}(\langle \hat{\bf D}\rangle \hat{j}
+\hat{j}\langle \hat{\bar{\bf D}}\rangle
)_{a\bar{b}}
\\
\hat{\cal B}_{b\bar{a}}
-\frac{1}{2}(\langle \hat{\bf D}\rangle \hat{j}
+\hat{j}\langle \hat{\bar{\bf D}}\rangle
)_{b\bar{a}}
&(\hat{d}^2-\frac{1}{4}\hat{j}^2)_{(\bar{a}\bar{b})}
-\langle\hat{j}\hat{d}\rangle_{\langle \bar{a}\bar{b} \rangle}
\end{array}
\right)\nn
\end{eqnarray}
\begin{eqnarray}
&&\left[Z^{-1}(\hat{J}^R)^3 Z\right]_{AB}~=~
\frac{1}{4}
\left(
\begin{array}{cc}
\Omega_{ab}~{\rm tr}\left[\hat{\cal B}\hat{j}-
(\langle\hat{\bf D}\rangle \hat{d} )\hat{j}
\right]
&\\&
-\Omega_{\bar{a}\bar{b}}~{\rm tr}\left[\hat{\cal B}\hat{j}
-(\hat{d}\langle\hat{\bar{\bf D}}\rangle)\hat{j}
\right]
\end{array}
\right)\\
&&-
\left(
\begin{array}{cc}
\left[\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2+\hat{j}\hat{d})~
\langle\hat{\bf D}\rangle
-\langle\hat{\bf D}\rangle (\hat{j} \hat{d})
+\hat{\cal B}\hat{j}
\right]_{\langle ab\rangle}
&
\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2
-\langle\hat{\bar{\bf D}}\rangle^2)~(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
\\
\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2
-\langle\hat{\bar{\bf D}}\rangle^2)~(\hat{d}-\frac{1}{2}\hat{j})_{b\bar{a}}
&
\left[\frac{1}{4}{\rm tr}(\langle\hat{\bar{\bf D}}\rangle^2
+\hat{j}\hat{d})~
\langle\hat{\bar{\bf D}}\rangle
-(\hat{j} \hat{d})
\langle\hat{\bar{\bf D}}\rangle -\hat{\cal B}\hat{j}
\right]_{\langle \bar{a}\bar{b}\rangle}
\end{array}
\right)\nn\\
&&+
\left(
\begin{array}{cc}
\left[
2(\hat{d}^2-\frac{1}{4}\hat{j}^2)\langle\hat{\bf D}\rangle
+\hat{d}\langle\hat{\bar{\bf D}}\rangle\hat{d}
-\frac{1}{4}\hat{j}\langle\hat{\bar{\bf D}}\rangle \hat{j}
\right]_{(ab)}&
-\frac{1}{4}{\rm tr}(\hat{j}\hat{d})
(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
+
\left[\langle\hat{{\bf D}}\rangle
(\hat{d}+\frac{1}{2}\hat{j})\langle\hat{\bar{\bf D}}\rangle
\right]_{a\bar{b}}
\\
\frac{1}{4}{\rm tr}(\hat{j}\hat{d})
(\hat{d}-\frac{1}{2}\hat{j})_{b\bar{a}}
+
\left[\langle
\hat{{\bf D}}\rangle
(\hat{d}-\frac{1}{2}\hat{j})\langle\hat{\bar{\bf D}}\rangle
\right]_{b\bar{a}}
&
\left[
2(\hat{d}^2-\frac{1}{4}\hat{j}^2)\langle\hat{\bar{\bf D}}\rangle
+\hat{d}\langle\hat{{\bf D}}\rangle\hat{d}
-\frac{1}{4}\hat{j}\langle\hat{{\bf D}}\rangle \hat{j}
\right]_{(\bar{a}\bar{b})}
\end{array}
\right)\nn\\
&&+
\left(
\begin{array}{cc}
&
\left[\left\{
(\hat{d}^2-\frac{1}{4}\hat{j}^2)
+\langle\hat{j}\hat{d}
\rangle
\right\}(\hat{d}+\frac{1}{2}\hat{j})
\right]_{a\bar{b}}
\\
\left[\left\{
(\hat{d}^2-\frac{1}{4}\hat{j}^2)
-\langle\hat{j}\hat{d}
\rangle
\right\}(\hat{d}-\frac{1}{2}\hat{j})
\right]_{\bar{a}b}
&
\end{array}
\right)\nn
\end{eqnarray}
In this computation
5-dimensional $\gamma$-matrix relations are used,
for example
${\bf V}^{\langle ab\rangle}{\bf U}_{\langle bc\rangle}
+{\bf U}^{\langle ab\rangle}{\bf V}_{\langle bc\rangle}
=\frac{1}{2}\delta^a_c~{\rm tr}{\bf V}{\bf U}$ for bosonic
vectors ${\bf V},~{\bf U}$.
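As a cross-check of the underlying algebraic identity (a minimal numerical
sketch, written in an explicit five-dimensional Clifford representation chosen
only for illustration, not in the Sp(4) index conventions used here), one may
verify the matrix form $\{{\bf V},{\bf U}\}=\frac{1}{2}\,{\rm tr}({\bf V}{\bf U})\,{\bf 1}$:
\begin{verbatim}
# sketch: {V,U} = (1/2) tr(VU) 1 for V = V^i gamma_i, U = U^i gamma_i,
# with five anticommuting 4x4 gamma matrices built from Pauli matrices
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
gammas = [np.kron(s1, I2), np.kron(s2, I2),
          np.kron(s3, s1), np.kron(s3, s2), np.kron(s3, s3)]

rng = np.random.default_rng(0)
V = sum(c * g for c, g in zip(rng.normal(size=5), gammas))
U = sum(c * g for c, g in zip(rng.normal(size=5), gammas))
assert np.allclose(V @ U + U @ V, 0.5 * np.trace(V @ U) * np.eye(4))
\end{verbatim}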
The conserved ``local" current with $n=3$ becomes
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^3 &=&
{\rm tr}\left[2\hat{\cal B}\hat{j}
-(\langle\hat{\bf D}\rangle \hat{d} )\hat{j}
-(\hat{d}\langle\hat{\bar{\bf D}}\rangle)\hat{j}
\right]
=~{\rm tr}
~(\hat{\cal B}\hat{j}) \label{430}
\end{eqnarray}
where $\hat{\cal B}$ is the $\kappa$ generating constraint
\bref{kappaSUST}.
The conserved ``local" current with $n=4$ becomes
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^4 &=&
-\frac{1}{2}
{\rm tr}\left(
\langle\hat{\bf D}\rangle^2+
\langle\hat{\bar{\bf D}}\rangle^2
\right)
\hat{\cal A}+\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})~~.
\end{eqnarray}
The conserved ``local" current with $n=5,6$ are given as;
Str$(\hat{J}^R)^5=$(
$\hat{\cal B}$ dependent terms),
Str$(\hat{J}^R)^6=$(
$\hat{\cal A}$ and $\hat{\cal B}$ dependent terms).
In general for even $n=2m$ its bosonic part is given as
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^{2m} \mid_{\rm bosonic}&=&
\left({\rm tr}\langle\hat{\bf D}\rangle^2\right)^m-
\left({\rm tr}\langle\hat{\bar{\bf D}}\rangle^2\right)^m\nn\\
&=&{\rm tr}\left(\langle\hat{\bf D}\rangle^2-\langle\hat{\bar{\bf D}}\rangle^2
\right)~\left\{\left(
{\rm tr}\langle\hat{\bf D}\rangle^2\right)^{m-1}
+\cdots +
\left({\rm tr}\langle\hat{\bar{\bf D}}\rangle^2\right)^{m-1}
\right\}\nn\\
&\Rightarrow& (\cdots)\hat{\cal A}+\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})
\end{eqnarray}
where the last equality is guaranteed by the $\kappa$ invariance.
It was also pointed out in \cite{BPR} that
the conserved supertraces of multilinears in the currents factorize into
traces of a lower number of currents, and that for an even number of currents
one of the factors is the stress tensor.
For odd $n=2m+1$ its bosonic part is given as
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^{2m+1} \mid_{\rm bosonic}~=~0
~\Rightarrow~\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})
\end{eqnarray}
where the possible dependence on fermionic variables is
a term proportional to $\hat{\cal B}$,
guaranteed by the $\kappa$ invariance.
In this way, after taking the supertrace, the even
$n$-th powers of $J^R$
reduce to terms proportional to ${\cal A}$ and ${\cal B}$,
and the odd $n$-th powers of $J^R$
reduce to a term proportional to ${\cal B}$ only.
In this paper
the ${\cal CD}$ constraints in the ${\cal ABCD}$ first class constraint set
are not introduced, in order to keep the argument simple;
they are set to zero because they are bilinears of the constraints.
\par\vskip 6mm
\section{ Conclusion and discussions}
We obtained
the expression of
the conserved ``local" currents
derived from the integrability of a superstring
in the AdS$_5\times$S$^5$ background.
The infinite number of conserved ``local" currents
are written as the supertraces of the $n$-th powers of the RI currents.
The lowest nontrivial case, $n=2$, is nothing but the stress-energy tensor,
which is also the Virasoro constraint,
Str$(J^R_\pm)^2$ in \bref{419} and \bref{420}.
For even $n$ the ``local" current reduces to terms proportional to the
Virasoro constraint and the $\kappa$ symmetry constraint.
For odd $n$ it reduces to a term proportional to the
$\kappa$ symmetry constraint.
In other words the integrability
reduces to
the ${\cal AB}({\cal CD})$ first class constraint set,
where ${\cal A}$ is the Virasoro generator
and ${\cal B}$ is the $\kappa$ symmetry generator.
The ${\cal ABCD}$ first class constraint set
is the local symmetry generator of superstrings both on the flat space
and on the AdS space.
It is natural that the physical degrees of freedom of
a superstring are locally the same,
independently of whether the background is flat or AdS.
It seems that the combination ${\cal B}_\pm j_\pm$
in \bref{430}
plays the role of the world-sheet supersymmetry operator,
in the sense of a grading of the conformal generator.
However, it is not
straightforward
to construct the world-sheet supersymmetry operator.
As in the flat case, where the lightcone gauge makes
the relation between
the GS fermion and the NSR fermion more transparent,
the $\kappa$ gauge fixing will be a clue for
making the connection to the world-sheet supersymmetry.
We leave this problem, in addition to the quantization
problem, for future investigations.
\par\vskip 6mm
\noindent{\bf Acknowledgments}
The author thanks K. Kamimura, S. Mizoguchi and K. Yoshida for fruitful discussions.
\par\vskip 6mm
\section{Introduction}
Two problems that have attracted much attention in the quantum
information community are the MUB and SIC problems for Hilbert spaces of
finite dimension $N$. In the MUB problem \cite{Ivanovic, Wootters1}
one looks for $N+1$ orthonormal bases that are mutually unbiased, in the sense
that
\begin{equation} |\langle e_m|f_n\rangle|^2 = \frac{1}{N} \ , \hspace{8mm}
0 \leq m,n \leq N-1 \ , \end{equation}
\noindent whenever the vector $|e_m\rangle$ belongs to one basis and the
vector $|f_n\rangle$ to another. In the SIC problem \cite{Zauner, Renes}
one looks for a symmetric and informationally complete POVM, which translates
to the problem of finding $N^2$ unit vectors $|\psi_i\rangle$ such that
\begin{equation} |\langle \psi_i|\psi_j\rangle |^2 = \frac{1}{N+1}
\ , \hspace{8mm} 0 \leq i,j \leq N^2 - 1 \ , \end{equation}
\noindent whenever $i \neq j$. These problems are hard. For the MUB problem
an elegant solution exists whenever $N$ is a power of a prime \cite{Wootters2}.
For the SIC problem quite ad hoc looking analytic solutions are known for
eighteen different dimensions; these are described (and in some cases derived)
by Scott and Grassl, who also give full references to the earlier literature
\cite{Grassl}. The belief in the community is that a complete set of $N+1$
MUB does not exist for general $N$, while the SICs do.
Since the problems are so easy to state, it is not surprising that they have
been posed independently in many different branches of science. One purpose
of this article is to describe what nineteenth century geometers had to say
about them. A story told by Eddington \cite{Eddington}
is relevant here:
\
\noindent {\small Some years ago I worked out the structure of this group of operators
in connection with Dirac's theory of the electron. I afterwards learned that
a great deal of what I had written was to be found in a treatise on Kummer's
quartic surface. There happens to be a model of Kummer's quartic surface in
my lecture-room, at which I had sometimes glanced with curiosity, wondering
what it was all about. The last thing that entered my head was that I had
written (somewhat belatedly) a paper on its structure. Perhaps the author
of the treatise would have been equally surprised to learn that he was
dealing with the behaviour of an electron.}
\
\noindent We will see what Eddington saw as we proceed. Meanwhile, let us
observe that when $N$ is a prime
the MUB are the eigenbases of the $N+1$ cyclic subgroups of the Heisenberg
group, while there is a conjecture (enjoying very considerable numerical
support \cite{Grassl}) that the SICs can always be chosen to be special orbits of
this group. When $N$ is a power of a prime the solution of the MUB problem
shifts a little, since the MUBs now consist of eigenvectors of the cyclic subgroups
of the Heisenberg group defined over a finite field rather than over the
ring of integers modulo $N$. Concerning SICs that are orbits under the
Heisenberg group there is a link to the MUB problem: If the dimension
$N$ is a prime the SIC Bloch vectors, when projected onto any one of the
MUB eigenvalue simplices, have the same length for all the
$N+1$ MUB \cite{Khat, ADF}.
In mathematics elliptic curves provide the natural home for the Heisenberg
group, so it seems natural to investigate if elliptic curves can be used
to illuminate the MUB and SIC problems. In dimensions 3 \cite{Hughston} and
4 they certainly can, as we will see, but in higher dimensions I am not so
sure. There will be some comments and formulas that I could not find in
the books and papers I studied, but keeping
Eddington's example in mind I do not claim originality for them.
\section{Two pieces of background information}
We had better define the Heisenberg group properly. A defining non-unitary
representation is given by the upper triangular matrices
\begin{equation} g(\gamma, \alpha, \beta) =
\left( \begin{array}{ccc} 1 & \alpha & \gamma \\ 0 & 1 & \beta \\
0 & 0 & 1 \end{array} \right) \ . \end{equation}
\noindent Here the matrix elements belong to some ring. In the original
Weyl-Heisenberg group \cite{Weyl} they are real numbers, but here we are
more interested in the case that they belong to the ring of integers
modulo $N$. We denote the resulting group by $H(N)$. It is generated
by two elements $X$ and $Z$ obeying
\begin{equation} ZX = qXZ \ , \hspace{8mm} X^N = Z^N = {\bf 1} \ ,
\hspace{8mm} q = e^{\frac{2\pi i}{N}} \ . \end{equation}
\noindent For $N = 2$ we can use the Pauli
matrices to set $X = \sigma_X$, $Z = \sigma_Z$, which makes it possible
to remember the notation. We will consider the group projectively, so for
our purposes it can often be regarded as a group of order $N^2$.
Because $q$ is a primitive $N$th root of unity the unitary representation
in which $Z$ is diagonal is unique up to permutations \cite{Weyl}.
It is known as the clock and shift representation. If the components of
any vector are denoted $x_a$ the action is given by
\begin{equation} \begin{array}{lll} X: & & x_0 \rightarrow x_{N-1} \rightarrow
x_{N-2} \rightarrow \dots \rightarrow x_1 \rightarrow x_0 \\
\ \label{group} \\ Z: & & x_a \rightarrow q^ax_a \end{array} \ ,
\hspace{8mm} 0 \leq a \leq N-1 \ . \end{equation}
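\noindent As a minimal numerical sketch (using one standard matrix realization
of (\ref{group}), namely $X_{ab}=\delta_{a,b+1}$ and $Z={\rm diag}(1,q,\dots,q^{N-1})$),
the defining relations can be checked directly:
\begin{verbatim}
# clock and shift matrices for H(N); check ZX = qXZ and X^N = Z^N = 1
import numpy as np

N = 5
q = np.exp(2j * np.pi / N)
X = np.roll(np.eye(N, dtype=complex), 1, axis=0)   # X e_b = e_{b+1 mod N}
Z = np.diag(q ** np.arange(N))                     # Z e_a = q^a e_a

assert np.allclose(Z @ X, q * X @ Z)
assert np.allclose(np.linalg.matrix_power(X, N), np.eye(N))
assert np.allclose(np.linalg.matrix_power(Z, N), np.eye(N))
\end{verbatim}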
\noindent The unitary automorphism group of the Heisenberg group plays
prominent roles in quantum information theory \cite{Fivel, Gottesman},
and is often called the Clifford group. In the older literature the
Heisenberg group is sometimes called the Clifford collineation group,
and the Clifford group is called the Clifford transform group \cite{Horadam}.
Although we will discuss it in detail for the case $N = 3$ later on, we will
mostly be concerned with automorphisms of order 2. In the clock and shift
representation such an automorphism acts according to
\begin{equation} A: \ \ \ x_a \leftrightarrow x_{-a} \ . \label{A}
\end{equation}
\noindent Adding this generator leads us to consider an extended group which is
twice as large as $H(N)$. In quantum information language the involution $A$ is
generated by one of Wootters' phase point operators \cite{Wootters1}.
Finally there is the curious conjecture \cite{Zauner} that
the SIC vectors are always left invariant by a unitary automorphism of the
Heisenberg group having order 3. No one knows why this should be so,
but it does appear to be true \cite{Marcus, Grassl}, and in four dimensions
we will see exactly how it happens.
What is special about the case when $N$ is prime is that $H(N)$ then admits
$N+1$ cyclic subgroups of order $N$, forming a flower with $N+1$ petals
with only the unit element in common. Correspondingly there are $N+1$
eigenbases, and they necessarily form a complete set of
MUB \cite{Vatan}. In prime power dimensions $N = p^k$ the known complete set
of MUB is the
set of eigenbases of the cyclic subgroups of a Heisenberg group defined
over a Galois field. The only case we will discuss is when $N = 4$, for
which the Galois Heisenberg group is the tensor product $H(2) \otimes H(2)$.
Another piece of background information is that SICs and MUBs look
natural in Bloch space, which is the $N^2-1$ dimensional
space of Hermitean operators of trace 1, considered as a vector space with
the trace inner product and with the maximally mixed state at the origin.
Density matrices form a convex body in Bloch space. A SIC is simply a regular
simplex in Bloch space, inscribed into this convex body. But it is not easy
to rotate the simplex while keeping the body of density matrices fixed,
because the symmetry group of this body is only $SU(N)/Z_N$, a rather
small subgroup of $SO(N^2-1)$ as soon as $N > 2$. This is why the SIC
problem is hard. An orthonormal basis is a regular simplex with only
$N$ corners, spanning some $(N-1)$-plane through the origin in Bloch space.
Two bases are mutually unbiased if the corresponding $(N-1)$-planes are
totally orthogonal, from which it immediately follows that no more than
$N+1$ MUB can exist.
Any pure state corresponds to a Bloch vector of a definite length. Given a
complete set of MUB we can project this vector onto the $N+1$ different
$(N-1)$-planes defined by the MUB. Should it happen that these projected
vectors all have the same length, the vector is, as it were, unbiased with
respect to the MUB, and is then---for some reason---called a
Minimum Uncertainty State \cite{Wootters3, Appleby}. The condition on a
state vector to be unbiased in this sense is easily worked out using the
Euclidean metric on Bloch space in conjunction with Pythagoras' theorem.
Choose any one of the MUB as the computational basis, and express the
Hilbert space components of a unit vector with respect to that basis as
\begin{equation} x_a = \sqrt{p_a}e^{i\mu_a} \ , \hspace{8mm}
\sum_{a = 0}^{N-1} p_a = 1 \ . \label{octant} \end{equation}
\noindent If the corresponding Bloch vector projected onto the $(N-1)$-plane spanned by
the computational basis has the length appropriate to a Minimum Uncertainty
State it must be true that
\begin{equation} \sum_{a = 0}^{N-1}p_a^2 = \frac{2}{N+1} \ . \label{MUS}
\end{equation}
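\noindent One way to see this (a short sketch, using only the trace metric on
Bloch space): a pure state $\rho$ has a Bloch vector of squared length
$\mbox{tr}\rho^2 - 1/N = 1 - 1/N$, its projection onto the $(N-1)$-plane of the
computational basis has squared length $\sum_a (p_a - 1/N)^2 = \sum_a p_a^2 - 1/N$,
and for a Minimum Uncertainty State Pythagoras' theorem splits the total equally
among the $N+1$ totally orthogonal planes, so that
\begin{equation} \sum_{a=0}^{N-1} p_a^2 - \frac{1}{N} = \frac{1}{N+1}
\left( 1 - \frac{1}{N} \right) \ , \end{equation}
\noindent which rearranges to eq. (\ref{MUS}).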
\noindent This is simple enough, but there is the complication that this has
to be done for all the $N+1$ MUB, which will give an additional set of $N$
constraints on the phases ensuring that the vector has the appropriate
length when projected to the other MUB planes. We spare the reader from
the details, but we repeat that all Heisenberg covariant SIC vectors are
Minimum Uncertainty States whenever $N$ is a prime. Examining the
proof of this interesting statement shows that something similar
is true also when no complete set of MUB is available: In any eigenbasis
of a cyclic subgroup of $H(N)$ of order $N$ eq. (\ref{MUS}) will hold
for any vector belonging to a Heisenberg covariant SIC \cite{Khat, ADF}.
This is true regardless of how many bases of this kind there are.
\section{The syzygetic Hesse pencil}
We now descend to the complex projective plane, and begin by introducing the
language used by nineteenth century geometers. Points are represented by ket
or column vectors in ${\bf C}^3$, or more precisely by one-dimensional
subspaces, while lines are represented by two-dimensional subspaces. Using
the scalar product in Hilbert space we can equally well represent the
lines by bra or row vectors orthogonal to the subspaces they represent,
so that the relation
\begin{equation} \langle Y|X\rangle = 0 \label{ett} \end{equation}
\noindent means that the point $X$ lies on the line $Y$. The two-dimensional
subspace representing the line consists of all vectors whose scalar product with
the bra vector $\langle Y|$ vanishes. Since there is a one-to-one correspondence
$|X\rangle \leftrightarrow \langle X|$ between bras and kets there is also a
one-to-one correspondence between points and lines. Clearly eq.
(\ref{ett}) implies that
\begin{equation} \langle X|Y\rangle = 0 \ , \end{equation}
\noindent which says that the point $Y$ lies on the line $X$. This is known as
the duality between points and lines in the projective plane.
We will study complex plane curves defined by homogeneous polynomials
in three variables. Linear polynomials define two-dimensional
subspaces, that is to say two real-dimensional subsets of the complex plane,
and by the above they define projective lines. Intrinsically they are
spheres, namely Bloch spheres, because ${\bf CP}^1 = {\bf S}^2$.
Quadratic polynomials or quadrics define conic sections, and over the
complex numbers the intrinsic geometry of a conic section is again that of a
sphere. The set of spin coherent states is an example \cite{BH}. To
the next order in complication we choose a cubic polynomial. We require the curve
to transform into itself under the Heisenberg group in the clock and shift
representation (\ref{group}). Up to an irrelevant overall constant the most
general solution for the cubic is then
\begin{equation} P = x^3 + y^3 + z^3 + txyz \ . \label{cubic} \end{equation}
\noindent Here $t$ is a complex number parametrising what is known as the
syzygetic Hesse pencil of cubics. Intrinsically each cubic is a torus rather
than a sphere. We observe that the polynomial is automatically invariant
under the additional involution $A$ given above in (\ref{A}).
Hesse \cite{Hesse}, and before him Pl\"ucker \cite{Plucker}, studied this family
of curves in detail. Their first object was to determine the inflection points.
They are given by those points on the curve for which the determinant of its
matrix of second derivatives---its Hessian---vanishes. In the present case this
is a cubic polynomial as well; in fact
\begin{equation} H = \det{\partial_i\partial_jP} =
(6^3 + 2t^3)xyz - 6t^2(x^3 + y^3 + z^3) \ . \end{equation}
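\noindent As a quick symbolic cross-check of this formula (a sketch, assuming
the sympy library is available):
\begin{verbatim}
# verify det(d_i d_j P) = (6^3 + 2t^3) xyz - 6 t^2 (x^3 + y^3 + z^3)
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
P = x**3 + y**3 + z**3 + t*x*y*z
H = sp.Matrix(3, 3, lambda i, j: sp.diff(P, [x, y, z][i], [x, y, z][j])).det()
assert sp.expand(H - ((6**3 + 2*t**3)*x*y*z - 6*t**2*(x**3+y**3+z**3))) == 0
\end{verbatim}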
\noindent The Hessian $H$ is again a member of the Hesse pencil of cubics. In astronomy
a ``syzygy'' occurs when three planets lie on a line, so we can
begin to appreciate why the pencil is called ``syzygetic''. The inflection
points are given by $P = H = 0$. By B\'ezout's theorem two cubics in the
complex projective plane intersect in nine points, hence there are nine
inflection points. They coincide for all cubics in the pencil, and are
given by
\begin{equation} \left[ \begin{array}{ccccccccc} 0 & 0 & 0 & -1 & - q & - q^2 &
1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 & -1 & - q & - q^2 \\ -1 & - q & - q^2 &
1 & 1 & 1 & 0 & 0 & 0 \end{array} \right] \ . \label{points} \end{equation}
\noindent This is recognisable as a set of nine
SIC vectors covariant under the Heisenberg group \cite{Zauner, Renes}. We can
normalise our vectors if we want to, but in the spirit of projective geometry
we choose not to.
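As a small numerical sanity check (a numpy sketch; the vectors are normalised
only inside the script), the nine columns of (\ref{points}) satisfy
$xyz = x^3+y^3+z^3 = 0$, hence $P=H=0$ for every $t$, and they are
equiangular with $|\langle \psi_i|\psi_j\rangle |^2 = 1/4$ for $i\neq j$:
\begin{verbatim}
import numpy as np
from itertools import combinations

q = np.exp(2j * np.pi / 3)
pts = [np.array(c, dtype=complex) for c in
       [(0, 1, -1), (0, 1, -q), (0, 1, -q**2),
        (-1, 0, 1), (-q, 0, 1), (-q**2, 0, 1),
        (1, -1, 0), (1, -q, 0), (1, -q**2, 0)]]

for x, y, z in pts:                       # inflection points of every cubic
    assert np.isclose(x*y*z, 0) and np.isclose(x**3 + y**3 + z**3, 0)

psis = [v / np.linalg.norm(v) for v in pts]
for u, v in combinations(psis, 2):        # SIC condition in dimension 3
    assert np.isclose(abs(np.vdot(u, v))**2, 1/4)
\end{verbatim}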
There are four singular members of the Hesse pencil, defined by values of the
parameter $t$ such that there are non-zero solutions to $P = P_{,x} = P_{,y}
= P_{,z} = 0$. These values are
\begin{equation} t = \infty \hspace{5mm} \mbox{and} \hspace{5mm} t^3 = -
3^3 \ . \end{equation}
\noindent If $t = \infty$ the polynomial reduces to $xyz = 0$. In
this case the singular cubic consists of three projective lines that make
up a triangle. The remaining three singular cases will give rise to three
other triangles. Therefore the syzygetic pencil singles
out 4 special triangles in the projective plane, given by their 12 vertices
\begin{eqnarray} \triangle^{(0)} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\
0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \ , \hspace{6mm}
\triangle^{(1)} = \left[ \begin{array}{ccc} 1 & q^2 & q^2 \\ q^2 & 1 & q^2 \\
q^2 & q^2 & 1 \end{array} \right] \ , \hspace{12mm} \nonumber \\
\label{MUB3} \\
\triangle^{(2)} = \left[ \begin{array}{ccc} 1 & q & q \\ q & 1 & q \\
q & q & 1 \end{array} \right] \ , \hspace{8mm}
\triangle^{(\infty )} = \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & q & q^2 \\
1 & q^2 & q \end{array} \right] \ , \hspace{12mm} \nonumber \end{eqnarray}
\noindent where $q = e^{2\pi i/3}$. The columns, labelled consecutively by
$0,1,2$, can indeed be regarded as 12 points or by duality as 12 lines.
The four triangles are referred to as the inflection triangles.
What gives the triangles their name is the remarkable fact that the nine
inflection points lie by threes on their twelve edges. Hesse calls this
a ``{\it sch\"onen Lehrsatz}'', and attributes it to Pl\"ucker \cite{Plucker}.
It is not hard
to verify. After a small calculation one finds that the orthogonalities
between the columns in the four triangles and the vectors representing
the inflection points are as follows:
\
{\tiny \begin{tabular}{|c||ccc|ccc|ccc|ccc|} \hline
\ & $\triangle_0^{(0)}$ & $\triangle_1^{(0)}$& $\triangle_2^{(0)}$ & $\triangle_0^{(1)}$
& $\triangle_1^{(1)}$ & $\triangle_2^{(1)}$ & $\triangle_0^{(2)}$ & $\triangle_1^{(2)}$ &
$\triangle_2^{(2)}$ & $\triangle_0^{(\infty )}$ & $\triangle_1^{(\infty )}$ &
$\triangle_2^{(\infty )}$ \\ \hline \hline
$X_0$ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$X_1$ & $\bullet$ & \ & \ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$X_2$ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & $\bullet$ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_0$ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_1$ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & \ & \ & $\bullet$ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_2$ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & $\bullet$ & \ & \ & \ & \ & $\bullet$ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_0$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_1$ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_2$ & \ & \ & $\bullet$ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & \ & $\bullet$ \\
\hline
\end{tabular}}
\
\
\noindent Thus we have
\begin{equation} \langle \Delta_0^{(0)} |X_0\rangle =
\langle \Delta_0^{(0)} |X_1\rangle = \langle \Delta_0^{(0)} |X_2\rangle =
0 \end{equation}
\noindent and so on. Recalling the interpretation of the vanishing scalar
products we see by
inspection of the table that Hesse's beautiful theorem is true.
We have verified that there exists a configuration of 9 points and 12 lines
such that each point belongs to four lines, and each line goes through three points.
This is denoted $(9_4, 12_3)$, and is known as the Hesse configuration. Using the
duality between points and lines we have also proved the existence of the
configuration $(12_3, 9_4)$. From an abstract point of view such a configuration
is a combinatorial object known as a finite affine plane \cite{Dolgachev}. In the
language of quantum information theory the inflection triangles form a complete
set of four MUB, while the inflection points form a SIC.
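Both statements are easy to confirm numerically; the following sketch (numpy)
checks that the four triangles of (\ref{MUB3}), once normalised, are mutually
unbiased bases, and reproduces the incidence pattern of the table above:
\begin{verbatim}
import numpy as np
from itertools import combinations

q = np.exp(2j * np.pi / 3)
sic = [np.array(c, dtype=complex) for c in
       [(0, 1, -1), (0, 1, -q), (0, 1, -q**2),
        (-1, 0, 1), (-q, 0, 1), (-q**2, 0, 1),
        (1, -1, 0), (1, -q, 0), (1, -q**2, 0)]]
triangles = [np.eye(3, dtype=complex),
             np.array([[1, q**2, q**2], [q**2, 1, q**2], [q**2, q**2, 1]]),
             np.array([[1, q, q], [q, 1, q], [q, q, 1]]),
             np.array([[1, 1, 1], [1, q, q**2], [1, q**2, q]])]
bases = [T / np.linalg.norm(T[:, 0]) for T in triangles]

for B in bases:                        # each triangle is an orthonormal basis
    assert np.allclose(B.conj().T @ B, np.eye(3))
for B1, B2 in combinations(bases, 2):  # ... and any two are mutually unbiased
    assert np.allclose(abs(B1.conj().T @ B2)**2, 1/3)

# Hesse configuration (9_4, 12_3): every point lies on 4 of the 12 lines,
# and every line passes through 3 of the 9 points
lines = [B[:, m] for B in bases for m in range(3)]
hits = np.array([[np.isclose(abs(np.vdot(l, p)), 0) for l in lines]
                 for p in sic])
assert list(hits.sum(axis=1)) == [4] * 9
assert list(hits.sum(axis=0)) == [3] * 12
\end{verbatim}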
We can now expand on our discussion of group theory in section 2.
First, every plane cubic can be
regarded as a commutative group in a natural way. This is not surprising,
given that the curve is intrinsically a torus---that is a group manifold.
The idea relies on B\'ezout's theorem, which this time assures us that any
line intersects the cubic in three points---two of which coincide
if the line is a tangent, and all of which coincide if the line is a line of
inflection. An arbitrary point on the cubic is taken to be the identity element,
and denoted $O$. To add two arbitrary points $A$ and $B$ on the
cubic, draw the line between them and locate its third intersection point $P$
with the cubic. Then draw the line between $O$ and $P$ and again locate the
third intersection point $C$. By definition then $A + B = C$. All the group
axioms are obeyed, although it is non-trivial to prove associativity.
Now choose the origin to sit at one of the inflection points. With Hesse's
construction in hand one sees that the nine inflection points form a
group of order nine, which is precisely the projective Heisenberg group.
This is also the torsion group of the curve, meaning that it contains all
group elements of finite order. Because they are group elements of order
3 the inflection points are also called 3-torsion points.
Next we ask for the group of transformations transforming the cubics of the
Hesse pencil among themselves. Recall that the parameter $t$ in the Hesse
cubic (\ref{cubic}) can serve as a complex coordinate on a sphere. The
four singular members of the pencil defines a tetrahedron
on that sphere. Transformations within the pencil act as M\"obius transformations
on the complex number $t$. Moreover they must permute the singular members
of the Hesse pencil among themselves. This means that they form a well known
subgroup of $SO(3)$, namely the symmetry group $A_4$ of the regular
tetrahedron. It enjoys the isomorphism
\begin{equation} A_4 \sim PSL(2, {\bf F}_3) \ , \end{equation}
\noindent where ${\bf F}_3$ is the field of integers modulo 3. The group
$SL(2, {\bf F}_3)$ consists of unimodular two by two matrices with
integer entries taken modulo three; here only its projective part enters
because the subgroup generated by the matrix $-{\bf 1}$ gives rise to
the involution $A$ and does not act on $t$, although it does
permute the inflection points among themselves. The full symmetry
group of the pencil is a semi-direct product of the Heisenberg group
and $SL(2, {\bf F}_3)$. This is the affine group on a finite affine
plane. It is known as the Hessian group \cite{Jordan}, or as the
Clifford group.
There are many accounts of this material in the literature, from geometric
\cite{Grove}, undergraduate \cite{Gibson}, and modern \cite{Artebani} points
of view. It forms a recurrent theme in Klein's history of nineteenth
century mathematics \cite{Klein}. The fact that the inflection points form
a SIC was first noted by Lane Hughston \cite{Hughston}.
\section{The elliptic normal curve in prime dimensions}
Felix Klein and the people around him put considerable effort into the
description of elliptic curves embedded into projective spaces of dimension
higher than 2. They proceeded by means of explicit parametrisations of
the curve using Weierstrass' $\sigma$-function \cite{Bianchi, Hulek}. As far
as we are concerned now, we only need to know that the symmetries they
built into their curves is again the Heisenberg group supplemented with
the involution $x_a \leftrightarrow x_{-a}$ coming from the Clifford group.
An analysis of this group of symmetries leads directly to ``{\it une
configuration tr\`es-remarquable}'' originally discovered by Segre
\cite{Segre}. We will present it using some notational improvements
that were invented later \cite{Gross, ADF}.
Since $N = 2n-1$ is odd, the integer $n$ serves
as the multiplicative inverse of $2$ among the integers modulo $N$.
It is then convenient to write the Heisenberg group elements as
\begin{equation} D(i,j) = q^{nij}X^iZ^j \hspace{5mm} \Rightarrow
\hspace{5mm} D(i,j)D(k,l) = q^{n(jk-il)}D(i+k, j+l) = q^{jk-il}D(k,l)D(i,j) \ . \end{equation}
\noindent Let us also introduce explicit matrix representations of
the group generators:
\begin{equation} D(i,j) = q^{nij + bj}\delta_{a,b+i} \ ,
\hspace{10mm} A = \delta_{a+b,0} \ . \end{equation}
\noindent Note that the spectrum of the involution $A$ consists of
$n$ eigenvalues $1$ and $n-1$ eigenvalues $-1$. Hence $A$ splits the
vector space into the direct sum
\begin{equation} {\cal H}_N = {\cal H}_n^{(+)} \oplus {\cal H}_{n-1}^{(-)}
\ . \end{equation}
\noindent It is these subspaces that we should watch. In fact there
are altogether $N^2$ subspaces of dimension $n$ singled out in this
way, because there are $N^2$ involutions
\begin{equation} A_{ij} = D(i,j)AD(i,j)^\dagger
\ . \label{Aij} \end{equation}
\noindent
The eigenvectors of the various cyclic subgroups can be collected
into the $N+1$ MUB
\begin{equation} \triangle_{am}^{(k)} = \left\{\begin{array}{cll}
\delta_{am} & , & k = 0 \\ \ \\ \frac{1}{\sqrt{N}}q^{\frac{(a-m)^2}{2k}} &
, & 1 \leq k \leq N-1 \\ \ \\
\frac{1}{\sqrt{N}}q^{am} & , & k = \infty \end{array} \right. \
.\end{equation}
\noindent Here $k$ labels the basis, $m$ the vectors, and $a$ their
components. For $N = 3$ this coincides with form (\ref{MUB3}) given
earlier. Note that $N-1$ MUB have been written as circulant matrices,
which is a convenient thing to do.
The key observation is that the zeroth columns in the MUB all obey---we
suppress the index labelling components---
\begin{equation} A\triangle_0^{(k)} = \triangle_0^{(k)} \ . \end{equation}
\noindent Hence this set of $N+1$ vectors belongs to the $n$-dimensional
subspace ${\cal H}_n^{(+)}$ defined by the involution $A$. We can go on to
show that each of the $n$-dimensional eigenspaces defined by the
$N^2$ involutions $A_{ij}$ contains $N+1$ MUB vectors. Conversely,
each MUB vector belongs to $N$ subspaces. We have found the Segre configuration
\begin{equation} \left( N(N+1)_N, N^2_{N+1} \right) \end{equation}
\noindent containing $N^2 + N$ points and $N^2$ $(n-1)$-planes in
projective $(N-1)$-space, always assuming that $N$ is an odd prime.
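The counting can be confirmed numerically for a small prime; the following
sketch (numpy, using the representation of $D(i,j)$, $A$ and the MUB written
above, and Python 3.8 or later for the modular inverse) checks the case $N=5$:
\begin{verbatim}
import numpy as np

N = 5                                     # any odd prime
n = (N + 1) // 2                          # n = 1/2 mod N
q = np.exp(2j * np.pi / N)

def D(i, j):                              # D(i,j)_{ab} = q^{nij+bj} delta_{a,b+i}
    return np.array([[q**(n*i*j + b*j) if a == (b + i) % N else 0
                      for b in range(N)] for a in range(N)])

A = np.array([[1 if (a + b) % N == 0 else 0
               for b in range(N)] for a in range(N)], dtype=complex)

mub = [np.eye(N, dtype=complex)]          # k = 0
for k in range(1, N):                     # k = 1, ..., N-1
    s = pow(2 * k, -1, N)                 # 1/(2k) mod N
    mub.append(np.array([[q**(((a - m)**2 * s) % N) for m in range(N)]
                         for a in range(N)]) / np.sqrt(N))
mub.append(np.array([[q**(a * m) for m in range(N)]  # k = infinity
                     for a in range(N)]) / np.sqrt(N))

assert all(np.allclose(A @ B[:, 0], B[:, 0]) for B in mub)  # zeroth columns

vecs = [B[:, m] for B in mub for m in range(N)]   # the N(N+1) MUB vectors
fixed = np.array([[np.allclose(D(i, j) @ A @ D(i, j).conj().T @ v, v)
                   for v in vecs]
                  for i in range(N) for j in range(N)])
assert all(fixed.sum(axis=1) == N + 1)    # each A_ij fixes N+1 MUB vectors
assert all(fixed.sum(axis=0) == N)        # each vector lies in N eigenspaces
\end{verbatim}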
The intersection properties of the Segre configuration are remarkable.
Two $n$-spaces in $2n-1$ dimensions intersect at least in a single ray.
With a total of $N^2$ such subspaces to play with we expect many vectors
to arise in this way. But let $\psi$ be such a vector. A minor calculation
shows that
\begin{equation} \psi = A_{ij}A_{kl}\psi =
q^{2(il-jk)}D(2i-2k, 2j-2k)\psi \ . \end{equation}
\noindent Thus $\psi$ must be an eigenvector of some element in the Heisenberg
group, and hence the intersection of any two $n$-spaces is always one
of the $N(N+1)$ eigenvectors in the configuration. In the other direction
things are a little more complicated. Two vectors belonging to the same
basis are never members of the same eigenspace, while two vectors of two
different MUB belong to a unique common eigenspace. Using
projective duality we obtain the dual configuration
\begin{equation} \left( N^2_{N+1}, N(N+1)_N \right) \end{equation}
\noindent consisting of $N^2$ $(n-1)$-spaces and $N^2 + N$ hyperplanes.
The intersection properties are precisely those of a finite affine plane
\cite{Dolgachev}.
These are the facts that so delighted Segre. A hundred years
later they delighted Wootters \cite{Wootters1}---although he phrased the
discussion directly in terms of the phase point
operators $A_{ij}$ rather than in terms of their eigenspaces.
A systematic study of prime power dimensions in Segre's spirit appears
not to have been made, although there are some results for $N = 9$ \cite{Horadam}.
But where is the SIC? It is hard to tell. When the dimension
$N = 2n-1 = 3$ we observe that $n-1 = 1$, so the dual Segre configuration
involves $N^2$ vectors, and these are precisely the SIC vectors (\ref{points}).
When $N \geq 5$ the Segre configuration contains not even a candidate
set of $N^2$ vectors. But at least, as a byproduct of the construction,
we find a set of $2n$ equiangular vectors in any $n$ dimensional Hilbert
space such that $2n-1$ is an odd prime. Explicitly they are
\begin{equation} \left[ \begin{array}{ccccc} \sqrt{2n-1} & 1 & 1 & \dots
& 1 \\ 0 & \sqrt{2} & \sqrt{2}q^{1\cdot 1^2} & \dots & \sqrt{2}q^{(2n-2)\cdot 1^2} \\
0 & \sqrt{2} & \sqrt{2}q^{1\cdot2^2} & \dots & \sqrt{2}q^{(2n-2)\cdot 2^2} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & \sqrt{2} & \sqrt{2}q^{1\cdot (n-1)^2} & \dots & \sqrt{2}q^{(2n-2)(n-1)^2}
\end{array} \right] \ . \label{2n} \end{equation}
\noindent Such sets are of some interest in connection with pure state
quantum tomography \cite{Flammia}.
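As a check (a small numpy sketch for $n=3$, i.e. $2n-1=5$), the columns of
(\ref{2n}), once normalised, are indeed pairwise equiangular with overlap
$1/(2n-1)$:
\begin{verbatim}
import numpy as np
from itertools import combinations

n = 3
N = 2 * n - 1
q = np.exp(2j * np.pi / N)

vecs = [np.array([np.sqrt(N)] + [0] * (n - 1), dtype=complex)]
for j in range(N):
    vecs.append(np.array([1] + [np.sqrt(2) * q**(j * r**2)
                                for r in range(1, n)], dtype=complex))
vecs = [v / np.linalg.norm(v) for v in vecs]

for u, v in combinations(vecs, 2):
    assert np.isclose(abs(np.vdot(u, v))**2, 1 / N)
\end{verbatim}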
The elliptic curve itself has not been much in evidence in this section.
It is still there in the background though, and in any dimension it
contains $N^2$ distinguished $N$-torsion points. A study of the explicit
expression for the Heisenberg covariant elliptic curve shows that each
of its torsion points belongs to one of the $N^2$ eigenspaces
${\cal H}_{n-1}^{(-)}$ \cite{Hulek}, and with the single exception of the
$N = 3$ example (\ref{points}) the known SICs never sit in such a subspace,
so the torsion points are not SICs. This is discouraging, but we will
find some consolation when we proceed to examine the $N = 4$ case.
\section{The SIC in 4 dimensions}
In an $N = 4$ dimensional Hilbert space there is a parting of the ways,
in the sense that the MUB and the SIC are defined using two different
versions of the Heisenberg group. The elliptic curve stays with $H(4)$.
Using an argument concerning line bundles and employing ingredients such
as the Riemann-Roch theorem, it can be shown that an elliptic
normal curve in projective 3-space (not confined to any projective plane)
is the non-singular intersection of two quadratic polynomials. If we insist
that it is transformed into itself by the Heisenberg group in its clock
and shift representation (\ref{group}), it follows \cite{Hulek} that
these quadratic polynomials are
\begin{equation} Q_0 = x_0^2 + x_2^2 + 2ax_1x_3 \ , \hspace{8mm} Q_1 =
x_1^2 + x_3^2 + 2ax_0x_2 \ . \end{equation}
\noindent The extra symmetry under the involution $A$, defined in (\ref{A}),
again appears
automatically. We can diagonalise these quadratic forms by means of a unitary
transformation of our Hilbert space. In the new coordinates we have
\begin{equation} Q_0 = z_0^2 + iz_1^2 + a(iz_2^2 + z_3^2) \ , \hspace{8mm}
Q_1 = iz_2^2 - z_3^2 + a(z_0^2 - iz_1^2) \ . \end{equation}
\noindent Note that $Q_0 = Q_1 = 0$ implies
\begin{equation} z_0^4 +z_1^4 + z_2^4 + z_3^4 = 0 \ . \end{equation}
\noindent Hence the elliptic curve lies on a quartic surface.
The new basis that we have introduced has a natural interpretation in
terms of the involution $A$. First of all, by acting on $A$ with the
Heisenberg group as in eq. (\ref{Aij}) we obtain only four involutions
altogether, rather than $N^2$ as in the odd prime case. Their spectra
are $(1,1,1,-1)$, and in the new basis they are all represented by
diagonal matrices. Hence each basis vector is inverted by one involution,
and left invariant by the others. In projective 3-space they correspond
to four reference points, and one can show that the 16 tangents of the 16 torsion
points on the curve divide into 4 sets of 4 each coming together at one of
the 4 reference points \cite{Hulek}. Each such set is an orbit under the
subgroup of elements of order 2.
In our preferred basis the generators of the Heisenberg group appear in the form
\begin{equation} Z = e^{\frac{i\pi}{4}} \left( \begin{array}{rrrr} 0 & 1 & 0 & 0 \\
-i & 0 & 0 & 0 \\
0 & 0 & 0 & -i \\ 0 & 0 & -1 & 0 \end{array}\right) \ , \hspace{9mm}
X = e^{\frac{i\pi}{4}} \left( \begin{array}{rrrr}
0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -i & 0 & 0 & 0 \\ 0 & i & 0 & 0 \end{array}
\right) \ . \end{equation}
\noindent Finding a set of 16 SIC-vectors covariant under the Heisenberg group
is now a matter of simple guesswork. One answer, ignoring overall phases and
normalisation, is
\begin{equation} \left[
\begin{array}{rrrrrrrrrrrrrrrr} x & x & x & x & i & i & - i & - i & i & i
& - i & - i & i & i & - i & - i \\ 1 & 1 & - 1 & - 1 & x & x & x & x & i & -i
& i & - i & 1 & - 1 & 1 & - 1 \\ 1 & -1 & 1 & -1 & 1 & - 1 & 1 & -1 & x &
x & x & x & - i & i & i & - i \\
1 & - 1 & - 1 & 1 & -i & i & i & - i & - 1 & 1 & 1 & - 1 & x & x & x & x
\end{array} \right] \ , \label{SIC4} \end{equation}
\noindent where
\begin{equation} x = \sqrt{2 + \sqrt{5}} \ . \end{equation}
\noindent All scalar products have the same modulus because
\begin{equation} (x^2-1)^2 = |x+1 + i(x-1)|^2 \ . \end{equation}
\noindent Thanks to our change of basis, this is significantly more memorable
than the standard solutions \cite{Zauner, Renes} (and it was in fact arrived
at, without considering the Heisenberg group at all, by Belovs \cite{Belovs}).
The whole set is organised
into 4 groups, where each group sits at a standard distance from the 4 basis
vectors that are naturally singled out by the elliptic curve. The normalised vectors
obey eq. (\ref{MUS}) for a Minimum Uncertainty State, even though our basis
is unusual.
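
For readers who wish to verify the equiangularity claim numerically, the following short Python sketch (not part of any published code; the variable names are mine) normalises the 16 columns of (\ref{SIC4}) and checks that every off-diagonal squared overlap equals $1/5$, as required of a SIC in dimension 4.
\begin{verbatim}
import numpy as np

x = np.sqrt(2 + np.sqrt(5))
i = 1j
V = np.array([
    [x,  x,  x,  x,  i,  i, -i, -i,  i,  i, -i, -i,  i,  i, -i, -i],
    [1,  1, -1, -1,  x,  x,  x,  x,  i, -i,  i, -i,  1, -1,  1, -1],
    [1, -1,  1, -1,  1, -1,  1, -1,  x,  x,  x,  x, -i,  i,  i, -i],
    [1, -1, -1,  1, -i,  i,  i, -i, -1,  1,  1, -1,  x,  x,  x,  x]],
    dtype=complex)
V = V / np.linalg.norm(V, axis=0)      # normalise the 16 fiducial vectors
G = np.abs(V.conj().T @ V) ** 2        # squared moduli of all scalar products
print(np.allclose(G[~np.eye(16, dtype=bool)], 1 / 5))   # True
\end{verbatim}
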
The otherwise mysterious invariance of the SIC vectors under some element
of the Clifford group of order 3 is now easy to see. We focus
on the group of vectors
\begin{equation} \left[ \begin{array}{rrrr} x & x & x & x \\ 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{array} \right] \ . \end{equation}
\noindent They form an orbit under the subgroup of elements of order 2. When
we project them to the subspace orthogonal to the first basis vector we have
4 equiangular vectors in a 3 dimensional subspace. Each projected vector
will be invariant under a rotation of order 3 belonging to the symmetry
group of this tetrahedron. An example leaving the first vector invariant is
\begin{equation} R = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right) \ . \end{equation}
\noindent It is straightforward to check that the rotation $R$ belongs to
the Clifford group, and is indeed identical to one of ``Zauner's unitaries''
\cite{Zauner}.
Each of the four involutions $A$ admits a ``square root'' belonging to the
Clifford group, such as
\begin{equation} F = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\ 0 & 0 & 0 & i \end{array} \right) \hspace{5mm}
\Rightarrow \hspace{5mm} F^2 = A = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right)\ . \end{equation}
\noindent Acting with these unitaries on the SIC (\ref{SIC4}) will give a
set of altogether 16 different SICs, collectively forming an orbit under the
Clifford group \cite{Zauner, Marcus}.
Note that the 16 SIC points in projective space do not actually sit on
the elliptic curve. In this sense the step from $N = 3$ to $N = 4$ is
non-trivial. In an arbitrary even dimension $N = 2n$
the involution $A$, see (\ref{A}), has a spectrum consisting of $n+1$
eigenvalues $1$ and $n-1$ eigenvalues $-1$. When $N = 4$ this singles out
a unique ray, but in higher dimensions it does not, so generalising
to arbitrary even dimensions will not be easy.
\section{Minimum Uncertainty States in four dimensions}
Eddington and his surface have not yet appeared. The group on whose twofold
cover his Fundamental Theory hinged was not the Heisenberg group over
the ring of integers modulo 4, but a different Heisenberg group of
the form $H(2)\otimes H(2)$ \cite{Eddington}. This group can be
represented by real matrices, and is in fact the group which gives rise
to the complete set of MUB in 4 dimensions. What can we do with it?
There does not exist a SIC which is covariant under Eddington's group.
In fact the group $H(2)^{\otimes k}$ admits such an orbit only if
$k = 1$ or $k = 3$ \cite{Godsil}.
As a substitute we can look for an orbit of 16 Minimum Uncertainty
States with
respect to the maximal set of MUB. Such an orbit does exist, and is
given by the 16 vectors
\begin{equation} \left[ \begin{array}{cccccccccccccccc} x & x & x & x &
\alpha & \alpha & - \alpha & - \alpha & \alpha & \alpha & - \alpha &
- \alpha & \alpha & \alpha & - \alpha & - \alpha \\
\alpha & \alpha & - \alpha & - \alpha & x & x & x & x &
\alpha & - \alpha & \alpha & - \alpha & \alpha &
- \alpha & \alpha & - \alpha \\
\alpha & - \alpha & \alpha & - \alpha & \alpha & - \alpha & \alpha
& - \alpha & x & x & x & x & \alpha & -\alpha & - \alpha & \alpha \\
\alpha & -\alpha & - \alpha & \alpha & \alpha & - \alpha & - \alpha
& \alpha & \alpha & - \alpha & - \alpha & \alpha & x & x & x & x
\end{array} \right] \ , \label{Edd} \end{equation}
\noindent where
\begin{equation} x = \sqrt{2 + \sqrt{5}} \ , \hspace{8mm} \alpha = e^{ia} \ ,
\hspace{8mm} \cos{a} = \frac{\sqrt{5}-1}{2\sqrt{2 + \sqrt{5}}} \ . \end{equation}
\noindent I omit the lengthy proof that these 16 vectors really are
Minimum Uncertainty States \cite{Asa}. Although this is not a SIC,
in a way it comes close to being one. Like the SICs (\ref{points}) and
(\ref{SIC4}), it can be arrived at using the following procedure: Introduce a
vector $(x, e^{i\mu_1}, \dots , e^{i\mu_{N-1}})^{\rm T}$, and adjust the value
of $x$ so that the normalised vector solves eq. (\ref{MUS}) for a Minimum
Uncertainty State. Next introduce a complex Hadamard matrix, that is to
say a unitary matrix all of whose matrix elements have the same modulus.
Such matrices exist in any dimension, although their classification problem
is unsolved if the dimension exceeds 5 \cite{Tadej}. By multiplying with
an overall factor $\sqrt{N}$, and then multiplying the columns with phase
factors, we can ensure that all matrix elements in the first row equal
1. Replace these elements with $x$. Next multiply the rows
with phase factors until one of the columns equals the vector we introduced.
The result is a set of $N$ vectors with all mutual scalar products taking
the value that characterises a SIC. Next permute the entries of the original
vector cyclically, and afterwards try to adjust the phases $\mu_a$ so that the
resulting $N$ vectors are again equiangular with the mutual scalar products
characterising a SIC. Extending the new vectors using an Hadamard matrix
in the same way as before then gives $N$ equiangular vectors each of which
belongs to a separate group of $N$ equiangular vectors. Before we can say
that we have constructed a SIC we must check that all scalar products
between pairs of vectors not belonging to the same group take the SIC values.
The vectors (\ref{Edd}) fail to form a SIC only because the last step fails.
Finally we come back to Eddington's lecture room. In the treatise that
he read \cite{Hudson}
it is explained that an orbit of $H(2)\otimes H(2)$ gives a realisation of
the Kummer configuration $16_6$, consisting of 16 points and 16 planes in
projective 3-space, such that each point belongs to 6 planes and each plane
contains 6 points. The above set of Minimum Uncertainty States realises this
configuration. As an example, the 6 vectors
\begin{equation} \left[
\begin{array}{cccccc} - \alpha & - \alpha & - \alpha & - \alpha & - \alpha
& - \alpha \\
x & x & \alpha & - \alpha
& \alpha & - \alpha \\ \alpha & -\alpha & x & x & - \alpha & \alpha \\
-\alpha & \alpha & - \alpha & \alpha & x & x
\end{array} \right] \end{equation}
\noindent are orthogonal to the row vector
\begin{equation} ( \begin{array}{cccc} x & \alpha & \alpha &
\alpha \end{array} ) \ , \end{equation}
\noindent or in other words the corresponding 6 points belong to the
corresponding plane. This is a purely group theoretical property and does
not require the vectors to be Minimum Uncertainty States. Still, Eddington's
story suggests that our 16 special vectors may have some use, somewhere.
\section*{Acknowledgments}
I thank Subhash Chaturvedi for telling me about the Segre configuration, at a
point in time when neither of us knew about Segre. Both of us give our best
wishes to Tony!
\section*{References}
\medskip
\section{Introduction}
\label{sec:Introduction}
Manipulation of heavy and bulky objects is a challenging task for manipulators and humanoid robots. An object is considered heavy if the manipulator's joint torques are not large enough to balance the object weight while lifting it off the ground. Thus, heavy objects cannot be manipulated with the usual pick-and-place strategy due to actuator saturation.
Consider the manipulation scenario shown in Fig.~\ref{Fig:Motivation}, where a heavy object has to be moved from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ by a dual-armed robot. The object has to negotiate a step during the manipulation which implies that the final pose cannot be achieved by either pick-and-place strategies or by pushing. One possible way to move the object and negotiate the step is to use a sequence of pivoting motions, which we call object gaiting, and this is a common strategy used by humans to manipulate heavy objects. Therefore, the goal of this paper is to develop an algorithmic approach to compute a plan for manipulating heavy objects by a sequence of pivoting motions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{Figures/Motivation.pdf}
\caption{Dual-handed manipulation of a heavy object between two given poses $\mathcal{C}_O$ and $\mathcal{C}_F$ by a sequence of pivoting motions.}
\label{Fig:Motivation}
\end{figure}
In a pivoting motion, we move the object while maintaining point or line contact with the environment. A point contact acts like a spherical joint, whereas a line contact acts like a revolute joint. The location and axes of these joints change during a gaiting motion. These joints are force-closed joints and can only be implemented through adequate frictional force at the object-ground contact that prevents slippage. Thus, a plan for pivoting operations consists of (a) {\em Motion plan}: a sequence of joint angles of the manipulators (that are within joint limits) and the corresponding object poses that maintains contact with the ground, (b) {\em Force plan}: a sequence of joint torques that are within the actuator limits and ensure that there is enough force at the object ground contact to prevent slippage. Furthermore, we also want to ensure that the manipulator does not lose the grasp of the object and there is no slippage at the hand-object contact. In this paper, we will focus on the motion planning problem. We have studied the problem of computing the force plan or force synthesis problem in~\cite{Patankar2020} and we will combine it with our motion plan to generate torques to achieve the motion.
The key challenge in solving the motion planning problem is that the kinematic constraints of the object maintaining a spherical or a revolute joint with the ground during the motion corresponds to nonlinear manifold constraints in the joint space of the manipulator. In sampling-based motion planning in joint space ($\mathbb{J}$-space), these constraints are hard to deal with, although there have been some efforts in this direction~\cite{BerensonSK11,JailletP12,Stilman10,KimU16,YaoK07, bonilla2015sample, KingstonMK2019}. Furthermore, in manipulation by gaiting, where we are performing a sequence of pivoting operations, these manifold constraints are not known beforehand since they depend on the choice of the pivot points (or lines) which has to be computed as a part of the plan. In this paper, we present a novel task-space ($\mathbb{T}$-space) based approach for generating the motion plan that exploits the fact that the kinematic constraints of a revolute or spherical joint constrains the motion of the object to a subgroup of $SE(3)$.
We present a two-step approach for computing the motion plan. In the first step, we develop an algorithm to compute a sequence of intermediate poses for the object to go from the initial to the goal pose. Two consecutive intermediate poses implicitly determine a point or line on the object and the ground that stay fixed during motion, thus encoding motion about a revolute or a spherical joint. In the second step, we use Screw Linear Interpolation (ScLERP) to determine a task space path between two intermediate poses, along with resolved motion rate control (RMRC)~\cite{whitney1969resolved,Pieper68} to convert the task space path to a joint space path. The advantage of using ScLERP is that it automatically satisfies the kinematic motion constraints during the pivoting motion without explicitly encoding it~\cite{Sarker2020}.
Thus, the joint space path that we compute along with the object path automatically ensures that the kinematic contact constraints are satisfied. This computationally efficient approach for motion planning for manipulation by pivoting is the key contribution of this paper. We also show that our motion plan can be combined with the second order cone programming (SOCP) based approach to compute joint torques and grasping forces~\cite{Patankar2020}, while ensuring that all no-slip constraints at the contacts and actuator limits are satisfied. We demonstrate our approach in simulation using a dual-armed Baxter robot.
\section{Related Work}
The use of external environment contacts to enhance the in-hand manipulation capability was first studied by Chavan-Dafle in \cite{Dafle2014}. More recently, Hou \textit{et al.} have referred to the use of environment contact as \textit{shared grasping}, wherein they treat the environment as an additional finger \cite{hou2020manipulation}. They have provided a stability analysis of shared grasping by using \textit{Hybrid Force-Velocity Control} (HFVC).
Murooka \textit{et al.} \cite{Murooka2015} proposed a method for pushing a heavy object by an arbitrary region of a humanoid robot's body. Polverini \textit{et al.} \cite{Polverini2020} also developed a control architecture for a humanoid robot that exploits the complexity of the environment to perform the pushing task of a heavy object.
Pivoting was first introduced by Aiyama \textit{et al.} \cite{aiyama1993pivoting} as a new method of graspless/non-prehensile manipulation. Based on this method, Yoshida \textit{et al.} \cite{yoshida2007pivoting,yoshida2008whole,yoshida2010pivoting} developed a whole-body motion planner for a humanoid robot to autonomously plan a pivoting
strategy for manipulating bulky objects. They first planned a sequence of collision-free Reeds and Shepp paths (specifically, straight and circular paths in $\mathbb{R}^2$) and then converted these paths into a sequence of pivoting motions. However, this method restricts the motion to Reeds and Shepp curves in order to satisfy a nonholonomic constraint that is not always required. Thus, it is not a general or efficient way to manipulate objects between two given poses, especially when there are no obstacles in the workspace.
Hence, we propose a general gait planning method as an optimization problem by defining the \textit{intermediate poses} and using the ScLERP to manipulate the object by gaiting between any two arbitrary poses.
\section{Preliminaries}
\noindent
\textbf{Quaternions and Rotations}: The quaternions are the set of hypercomplex numbers, $\mathbb{H}$. A quaternion $Q \in \mathbb{H}$ can be represented as a 4-tuple $Q = (q_0, \boldsymbol{q}_r) = (q_0, q_1, q_2, q_3)$, where $q_0 \in \mathbb{R}$ is the real scalar part and
$\boldsymbol{q}_r=(q_1, q_2, q_3) \in \mathbb{R}^3$ is the imaginary (vector) part.
The conjugate, norm, and inverse of a quaternion $Q$ are given by
$Q^* = (q_0, -\boldsymbol{q}_r)$, $\lVert Q \rVert = \sqrt{Q Q^*} = \sqrt{Q^* Q}$,
and $Q^{-1} = Q^*/{\lVert Q \rVert}^2$, respectively. Addition and multiplication of two quaternions
$P = (p_0, \boldsymbol{p}_r)$ and
$Q = (q_0, \boldsymbol{q}_r)$ are performed as $P+Q = (p_0 + q_0, \boldsymbol{p}_r + \boldsymbol{q}_r)$ and $PQ = (p_0 q_0 - \boldsymbol{p}_r \cdot \boldsymbol{q}_r, p_0 \boldsymbol{q}_r + q_0 \boldsymbol{p}_r + \boldsymbol{p}_r \times \boldsymbol{q}_r)$.
The quaternion $Q$ is a \textit{unit quaternion}
if ${\lVert Q \rVert} = 1$, and consequently, $Q^{-1} = Q^*$. Unit quaternions are used to represent the set of all rigid body rotations, $SO(3)$, the Special Orthogonal group of dimension $3$. Mathematically, $SO(3)=\left\{\boldsymbol{R} \in \mathbb{R}^{3 \times 3} \mid \boldsymbol{R}^{\mathrm{T}} \boldsymbol{R}=\boldsymbol{R} \boldsymbol{R}^{\mathrm{T}}=\boldsymbol{I}_3, \ \det \boldsymbol{R}=1\right\}$, where $\boldsymbol{I}_3$ is the $3\times3$ identity matrix and $\det$ denotes the determinant. The unit quaternion corresponding to a rotation is $Q_R = (\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2})$, where $\theta \in [0,\pi]$ is the angle of rotation about a unit axis $\boldsymbol{l} \in \mathbb{R}^3$.
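
As a concrete illustration, these operations can be written in a few lines of Python; the helper names (\texttt{qmul}, \texttt{qconj}, \texttt{rotate}) are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def qmul(p, q):                      # quaternion product, Q = (q0, q1, q2, q3)
    p0, pr, q0, qr = p[0], p[1:], q[0], q[1:]
    return np.r_[p0*q0 - pr @ qr, p0*qr + q0*pr + np.cross(pr, qr)]

def qconj(q):                        # conjugate Q* = (q0, -q_r)
    return np.r_[q[0], -q[1:]]

def rotate(qR, v):                   # (0, v') = Q_R (0, v) Q_R*
    return qmul(qmul(qR, np.r_[0.0, v]), qconj(qR))[1:]

theta, l = np.pi/3, np.array([0.0, 0.0, 1.0])       # 60 degrees about z
qR = np.r_[np.cos(theta/2), np.sin(theta/2) * l]
print(rotate(qR, np.array([1.0, 0.0, 0.0])))        # ~ [0.5, 0.866, 0.0]
\end{verbatim}
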
\noindent
\textbf{Dual Quaternions and Rigid Displacements}:
In general, dual numbers are defined as $d = a + \epsilon b$ where $a$ and $b$ are elements of an algebraic field, and $\epsilon$ is a \textit{dual unit} with $\epsilon ^ 2 = 0, \epsilon \ne 0$.
Similarly, a dual quaternion $D$ is defined as $D= P + \epsilon Q$
where $P, Q \in \mathbb{H}$. The conjugate, norm, and inverse of the dual quaternion $D$ are given by $D^* = P^* + \epsilon Q^*$, $\lVert D \rVert = \sqrt{D D^*} = \sqrt{P P^* + \epsilon (PQ^* + QP^*)}$, and $D^{-1} = D^*/{\lVert D \rVert}^2$,
respectively. An alternative conjugate of $D$ is defined as $D^\dag = P^* - \epsilon Q^*$. Addition and multiplication of two dual quaternions $D_1= P_1 + \epsilon Q_1$ and $D_2= P_2 + \epsilon Q_2$ are performed as $D_1 + D_2 = (P_1 + P_2) + \epsilon (Q_1 + Q_2)$ and $D_1 D_2 = (P_1 P_2) + \epsilon (P_1 Q_2 + Q_1 P_2)$.
The dual quaternion $D$ is a \textit{unit dual quaternion} if ${\lVert D \rVert} = 1$, i.e., ${\lVert P \rVert} = 1$ and $PQ^* + QP^* = 0$, and consequently, $D^{-1} = D^*$. Unit dual quaternions can be used to represent the group of rigid body displacements, $SE(3) = \mathbb{R}^3 \times SO(3)$, $S E(3)=\left\{(\boldsymbol{R}, \boldsymbol{p}) \mid \boldsymbol{R} \in S O(3), \boldsymbol{p} \in \mathbb{R}^{3}\right\}$. An element $\boldsymbol{T} \in SE(3)$, which is a pose of the rigid body, can also be expressed by a $4 \times 4$ homogeneous transformation matrix as
$\boldsymbol{T} = \left[\begin{smallmatrix}\boldsymbol{R}&\boldsymbol{p}\\\boldsymbol{0}&1\end{smallmatrix}\right]$ where $\boldsymbol{0}$ is a $1 \times 3$ zero vector. A rigid body displacement (or transformation) is represented by a unit dual quaternion $D_T = Q_R + \frac{\epsilon}{2} Q_p Q_R$ where $Q_R$ is the unit quaternion corresponding to rotation and $Q_p = (0, \boldsymbol{p}) \in \mathbb{H}$ corresponds to the translation.
\noindent
\textbf{Screw Displacement}: The Chasles-Mozzi theorem states that the general Euclidean displacement/motion of a rigid body from the identity $\boldsymbol{I}$ to $\boldsymbol{T} = (\boldsymbol{R},\boldsymbol{p}) \in SE(3)$
can be expressed as a rotation $\theta$ about a fixed axis $\mathcal{S}$, called the \textit{screw axis}, and a translation $d$ along that axis (see Fig.~\ref{Fig:ScrewDisplacement}). Plücker coordinates can be used to represent the screw axis by $\boldsymbol{l}$ and $\boldsymbol{m}$, where $\boldsymbol{l} \in \mathbb{R}^3$ is a unit vector that represents the direction of the screw axis $\mathcal{S}$, $\boldsymbol{m} = \boldsymbol{r} \times \boldsymbol{l}$, and $\boldsymbol{r} \in \mathbb{R}^3$ is an arbitrary point on the axis. Thus, the screw parameters are defined as $\boldsymbol{l}, \boldsymbol{m}, \theta, d$.
The screw displacements can be expressed by the dual quaternions as $D_T = Q_R + \frac{\epsilon}{2} Q_p Q_R = (\cos \frac{\Phi}{2}, L \sin \frac{\Phi}{2})$ where $\Phi = \theta + \epsilon d$ is a dual number and $L = \boldsymbol{l} + \epsilon \boldsymbol{m}$ is a
dual vector.
A power of the dual quaternion $D_T$ is then defined as $D_T^{\tau} = (\cos \frac{\tau \Phi}{2}, L \sin \frac{\tau \Phi}{2})$, $\tau >0$.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.57]{Figures/ScrewDisplacement.pdf}
\caption{Screw displacement from pose $\mathcal{C}_1$ to pose $\mathcal{C}_2$.}
\label{Fig:ScrewDisplacement}
\end{figure}
\noindent
\textbf{Screw Linear Interpolation (ScLERP)}: To perform a one degree-of-freedom smooth screw motion (with a constant rotation and translation rate) between two object poses in $SE(3)$, the screw linear interpolation (ScLERP) can be used. The ScLERP provides a \textit{straight line} in $SE(3)$ which is the closest path between two given poses in $SE(3)$.
If the poses are represented by unit dual quaternions $D_{1}$ and $D_{2}$, the path provided by the ScLERP is derived by $D(\tau) = D_1 (D_1^{-1}D_2)^{\tau}$ where $ \tau \in[0,1]$ is a scalar path parameter.
As $\tau$ increases from 0 to 1, the object moves between two poses along the path
$D(\tau)$ by the rotation $\tau \theta$ and translation $\tau d$. Let $D_{12} = D_1^{-1}D_2$. To compute $D_{12}^\tau$, the screw coordinates $\boldsymbol{l}, \boldsymbol{m}, \theta, d$ are first extracted from $D_{12} = P + \epsilon Q = (p_0,\boldsymbol{p}_r) + \epsilon (q_0,\boldsymbol{q}_r) = (\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2}) + \epsilon Q$ by $\boldsymbol{l} = \boldsymbol{p}_r/ \lVert \boldsymbol{p}_r \lVert $, $\theta = 2 \, \mathrm{atan2}(\lVert \boldsymbol{p}_r \lVert, p_0)$, $d = \boldsymbol{p} \cdot \boldsymbol{l}$, and $\boldsymbol{m} = \frac{1}{2} (\boldsymbol{p} \times \boldsymbol{l} + (\boldsymbol{p}-d \boldsymbol{l})\cot \frac{\theta}{2})$ where $\boldsymbol{p}$ is derived from $2QP^* = (0, \boldsymbol{p})$ and $\mathrm{atan2}(\cdot)$ is the two-argument arctangent. Then, $D_{12}^\tau = (\cos \frac{\tau \Phi}{2}, L \sin \frac{\tau \Phi}{2})$ is directly derived from $\left(\cos \frac{\tau \theta}{2}, \sin \frac{\tau \theta}{2}\boldsymbol{l}\right)+\epsilon \left( -\frac{\tau d}{2}\sin \frac{\tau \theta}{2}, \frac{\tau d}{2}\cos \frac{\tau \theta}{2}\boldsymbol{l}+\sin \frac{\tau \theta}{2}\boldsymbol{m} \right) $. Note that $\theta =0, \pi$ corresponds to pure translation between two poses and the screw axis is at infinity.
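
As an illustration of the formulas above, the following self-contained Python sketch (the helper names are ours and not taken from any released implementation) stores a dual quaternion as a pair of 4-vectors, extracts the screw coordinates of $D_{12}$ exactly as described, and evaluates $D(\tau) = D_1 (D_1^{-1}D_2)^{\tau}$; the pure-translation case $\theta = 0, \pi$ is not handled.
\begin{verbatim}
import numpy as np

def qmul(p, q):                              # quaternion product
    p0, pr, q0, qr = p[0], p[1:], q[0], q[1:]
    return np.r_[p0*q0 - pr @ qr, p0*qr + q0*pr + np.cross(pr, qr)]

def qconj(q):                                # quaternion conjugate
    return np.r_[q[0], -q[1:]]

def dqmul(D1, D2):                           # dual quaternion product
    (P1, Q1), (P2, Q2) = D1, D2
    return (qmul(P1, P2), qmul(P1, Q2) + qmul(Q1, P2))

def dqconj(D):                               # D* = P* + eps Q* (= D^-1 for unit D)
    return (qconj(D[0]), qconj(D[1]))

def from_pose(qR, p):                        # D = Q_R + (eps/2) Q_p Q_R
    return (qR, 0.5 * qmul(np.r_[0.0, p], qR))

def to_pose(D):                              # recover (Q_R, p) from a unit dual quaternion
    P, Q = D
    return P, qmul(2.0 * Q, qconj(P))[1:]

def sclerp(D1, D2, tau):                     # D(tau) = D1 (D1^-1 D2)^tau
    P, Q = dqmul(dqconj(D1), D2)             # D_12, assumed not a pure translation
    theta = 2.0 * np.arctan2(np.linalg.norm(P[1:]), P[0])
    l = P[1:] / np.linalg.norm(P[1:])
    p = qmul(2.0 * Q, qconj(P))[1:]
    d = p @ l
    m = 0.5 * (np.cross(p, l) + (p - d * l) / np.tan(theta / 2.0))
    ct, st = np.cos(tau * theta / 2.0), np.sin(tau * theta / 2.0)
    D12_tau = (np.r_[ct, st * l],
               np.r_[-0.5 * tau * d * st, 0.5 * tau * d * ct * l + st * m])
    return dqmul(D1, D12_tau)

# example: pivot by 90 deg about the vertical axis through the point (1, 0, 0)
qR2 = np.r_[np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]      # Rz(90 deg)
D1 = from_pose(np.r_[1.0, 0.0, 0.0, 0.0], np.zeros(3))       # identity pose
D2 = from_pose(qR2, np.array([1.0, -1.0, 0.0]))              # leaves (1,0,0) fixed
qR, p = to_pose(sclerp(D1, D2, 0.5))
print(qR, p)   # Rz(45 deg) and p = (1,0,0) - Rz(45 deg)(1,0,0): pivot point fixed
\end{verbatim}
In the short example at the end, both end poses keep the point $(1,0,0)$ fixed, and the interpolated pose at $\tau=0.5$ keeps it fixed as well, which is exactly the property exploited for pivoting in the rest of the paper.
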
\section{Problem Statement}
\label{sec:ProblemStatement}
Let us assume that we want to manipulate a heavy cuboid object quasi-statically by using $n$ manipulators, while maintaining contact with the environment, from an initial pose $\mathcal{C}_O \in SE(3)$ to a final pose $\mathcal{C}_F \in SE(3)$.
We also assume that the object always remains in the manipulators' workspace.
Figure~\ref{Fig:Cube_Manipulator} shows a cuboid object in contact with the environment at the vertex $v$ and also with the $i$-th manipulator's end-effector at the contact point $c_i$ (where $i=1,\dots,n$). Contact coordinate frames \{$c_i$\} and \{$v$\} are attached to the object at each manipulator and environment contact, respectively, such that the $\bm{n}$-axis of the frames is normal (inward) to the object surface and the two other axes, $\bm{t}$ and $\bm{o}$, are tangent to the surface. The coordinate frame \{$b$\} is attached to the object center of mass, the coordinate frame \{$e_i$\} is attached to the $i$-th end-effector, and \{$s$\} is the inertial coordinate frame.
Let $\boldsymbol{\Theta}^i = [\theta_1^i, \theta_2^i, \cdots, \theta_{l_i}^i] \in \mathbb{R}^{l_i}$ be the vector of joint angles of the $i$-th $l_i$-DoF manipulator, which represents the \textit{joint space} ($\mathbb{J}$-space) or the \textit{configuration space} ($\mathbb{C}$-space) of the manipulator.
Moreover, $\mathcal{E}^i \in SE(3)$ is defined as the pose of the end-effector of the $i$-th manipulator where $\mathcal{E}^i = \mathcal{FK}(\boldsymbol{\Theta}^i)$ and $\mathcal{FK}(\cdot)$ is the manipulator forward kinematics map. Therefore, $\boldsymbol{\Theta}_O^i \in \mathbb{R}^{l_i}$ and $\mathcal{E}_O^i \in SE(3)$ represent the initial configuration of the $i$-th manipulator (in $\mathbb{J}$-space) and pose of $i$-th end-effector, respectively, corresponding to the object initial pose $\mathcal{C}_O$ and $\boldsymbol{\Theta}_F^i \in \mathbb{R}^{l_i}$ and $\mathcal{E}_F^i \in SE(3)$ represent the final configuration of the $i$-th manipulator (in $\mathbb{J}$-space) and pose of $i$-th end-effector, respectively, corresponding to the object final pose $\mathcal{C}_F$. We assume that the position of the manipulator-object contact $c_i$ is given and the transformation between the frames $\{e_i\}$ and $\{c_i\}$ remains constant during the manipulation, i.e., there is no relative motion at the contact interface.
Our motion planning problem is now defined as computing a sequence of joint angles $\boldsymbol{\Theta}^i(j)$, where $j=1,\cdots,m$, $\boldsymbol{\Theta}^i(1) = \boldsymbol{\Theta}^i_O$, $\boldsymbol{\Theta}^i(m) = \boldsymbol{\Theta}^i_F$, to manipulate the object while maintaining contact with the environment from its initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ when $(\mathcal{C}_O, \mathcal{E}_O^i, \boldsymbol{\Theta}_O^i)$ and $(\mathcal{C}_F, \mathcal{E}_F^i)$ ($i=1,...,n$) are given. Moreover, our force planning problem is computing the minimum contact wrenches required to be applied at $c_i$ during the object manipulation to balance the external wrenches (e.g., gravity) and also the environment contact wrenches using the method we have presented in \cite{Patankar2020}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.5]{Figures/Cuboid_Manipulator.pdf}
\caption{A cuboid object being tilted about one of its vertices.}
\label{Fig:Cube_Manipulator}
\end{figure}
\textbf{Solution Approach Overview}: Generally speaking, to move an object while maintaining contact we can use two primitive motions, namely, (1) \textit{sliding} on a vertex, edge, or face of the object in contact with the environment (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_S}) and
(2) \textit{pivoting} about an axis passing through a vertex, edge, or face of the object in contact with the environment (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_T},\subref{Fig:SRP_P}, Fig.~\ref{Fig:Motivation}). All other motions can be made by combining these primitive motions. Note that we consider \textit{tumbling} as a special case of pivoting when the axis of rotation passes through an object edge or face. Manipulation by sliding (or pushing) can be useful in many scenarios like picking a penny off a table. However, in heavy and bulky object manipulation scenarios, sliding may not give feasible solutions. Thus, in this paper, we will focus on manipulation using the pivoting primitive.
Our \textit{manipulation strategy} can be described briefly as follows.
(i) Given the initial and final pose of the object, we first determine if multiple pivoting moves have to be made and, if necessary, compute intermediate poses of the object. (ii) Using the dual quaternion representation of these poses, we compute paths in $SE(3)$ using the ScLERP for the object and end-effectors. These paths automatically satisfy all the basic task-related constraints (without any additional explicit representation of the constraints).
(iii) We use the (weighted) pseudoinverse of the Jacobian to derive the joint angles in the $\mathbb{J}$-space from the computed $\mathbb{T}$-space path. (iv) Finally, we compute the minimum contact wrenches and manipulators' joint torques required for the object manipulation. Note that steps (ii) to (iv) can be done either sequentially or interleaved in a single discrete time-step.
\section{Pivoting}
Pivoting is a motion where an object is moved while maintaining a point or line contact with a support surface. When an object maintains a point contact, the constraints on its motion are the same as those imposed by a spherical joint. Thus, the motion of the object is restricted to $SO(3)$, which is a subgroup of $SE(3)$, and the axis of rotation passes through the contact point. During pivoting with line contact (or tumbling), the constraint on the motion is the same as that imposed by a revolute joint with the axis of the joint being the line of contact. Thus, in this case, the motion of the object is restricted to $SO(2)$, which is also a subgroup of $SE(3)$. This mathematical structure of pivoting motions is key to our approach as we discuss below.
Suppose an object can reach a goal pose from a start pose using a single pivoting motion. This can happen when the start and the goal poses are such that there is a common vertex, say $v$, between the start and goal poses that lies on the support surface (see Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_P}). In such situations, when planning in $\mathbb{T}$-space, one should be careful about the interpolation scheme for generating the motion of the object. If we use linear interpolation between the end poses in the space of parameters (a popular choice being linear interpolation for position and spherical linear interpolation for orientation using unit quaternion parameterization of orientation), the resulting intermediate poses will not ensure that the contact between the object and the support surface is maintained. The motion obtained will also change with the choice of the coordinate frames for the initial and final pose. The advantage of using ScLERP is that it is coordinate invariant. Furthermore, since the pivoting motions also belong to a subgroup of $SE(3)$, ScLERP ensures that all the intermediate poses will lie in the same subgroup that contains the initial and goal pose (i.e., all intermediate poses will have the vertex $v$ fixed to the support surface). Thus, it is not necessary to explicitly enforce the pivoting constraints for motion planning. Lemma $1$ formalizes this discussion.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.36]{Figures/S.pdf}\label{Fig:SRP_S}}
\subfloat[]{\includegraphics[scale=0.36]{Figures/T.pdf}\label{Fig:SRP_T}}
\subfloat[]{\includegraphics[scale=0.36]{Figures/P.pdf}\label{Fig:SRP_P}}
\caption{Examples of the primitive motions for manipulating polyhedral objects by exploiting the environment contact, (a) sliding or pushing on a face, (b) pivoting about an edge (tumbling), (c) pivoting about a vertex.}
\label{Fig:SRP}
\end{figure}
\begin{lemma}
Let $D_1 = Q_{R1} + \frac{\epsilon}{2} Q_{p1} Q_{R1}$ and $D_2 = Q_{R2} + \frac{\epsilon}{2} Q_{p2} Q_{R2}$ be two unit dual quaternions representing two poses of a rigid body. If a point $\boldsymbol{v} \in \mathbb{R}^3$ in the rigid body has the same position in both poses, the position of this point remains the same in all the poses provided by the ScLERP $D(\tau) = D_1 (D_1^{-1}D_2)^{\tau}$ where $ \tau \in[0,1]$.
\label{lemma:FixedPoint}
\end{lemma}
\begin{proof}
Let $Q_v = (0,\boldsymbol{v}) \in \mathbb{H}$ be a pure quaternion representing the point $\boldsymbol{v}$. Since the point $\boldsymbol{v}$ has the same position in both poses $D_1$ and $D_2$, we have
\begin{align}
D_1(1+\epsilon Q_v)D_1^\dag & = D_2(1+\epsilon Q_v)D_2^\dag, \\
\therefore \,\, Q_{p2} - Q_{p1} & = Q_{R1} Q_v Q_{R1}^* - Q_{R2} Q_v Q_{R2}^*.
\label{eq:Qp2_Qp1}
\end{align}
Therefore, the transformation from $D_1$ to $D_2$ is derived as
\begin{equation}
\begin{split}
D_{12} & = D_1^{*}D_2 = Q_{R1}^* Q_{R2} + \frac{\epsilon}{2}Q_{R1}^* (Q_{p2} - Q_{p1}) Q_{R2}\\
&= Q_{R1}^* Q_{R2} + \frac{\epsilon}{2}(Q_v Q_{R1}^* Q_{R2} - Q_{R1}^* Q_{R2} Q_v).
\end{split}
\label{eq:D_1D_2_}
\end{equation}
By representing the rotation $Q_{R1}^* Q_{R2}$ as $(\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2}) \in \mathbb{H}$ (where $\boldsymbol{l}$ is a unit vector along the screw axis and $\theta$ is rotation about the screw axis), ($\ref{eq:D_1D_2_}$) can be simplified as
\begin{equation}
D_{12} = (\cos\frac{\theta}{2}, \boldsymbol{l}\sin\frac{\theta}{2}) + \epsilon (0, \boldsymbol{v} \times \boldsymbol{l} \sin\frac{\theta}{2}) = P + \epsilon Q.
\label{eq:D_12}
\end{equation}
The translation $d$ along the screw axis is determined by $d = \boldsymbol{p} \cdot \boldsymbol{l}$ where $\boldsymbol{p}$ is derived from $2QP^* = (0, \boldsymbol{p})$. By using (\ref{eq:D_12}),
\begin{equation}
\boldsymbol{p} = 2\,\boldsymbol{v} \times \boldsymbol{l} \sin\frac{\theta}{2} \cos\frac{\theta}{2} - 2\,(\boldsymbol{v} \times \boldsymbol{l}) \times \boldsymbol{l} \sin^2\frac{\theta}{2},
\label{eq:p}
\end{equation}
and $d = \boldsymbol{p} \cdot \boldsymbol{l} = 0$. Therefore, the transformation $D(\tau)$ is a pure rotation about the fixed point $\boldsymbol{v}$ on the screw axis.
\end{proof}
Furthermore, when using multiple manipulators to pivot an object and we assume that there is no relative motion at the hand-object contact, the motion of each end-effector can be obtained independently by ScLERP using a shared interpolation parameter. This will ensure that the constraint that the relative end-effector poses of the manipulators are unchanged during motion is maintained without explicitly encoding it (this follows from Lemma $3$ of~\cite{Sarker2020} and so we do not repeat the formal statements and proofs here). In the next section, we use pivoting as a primitive motion for motion planning between any two given poses in $\mathbb{T}$-space.
\section{Motion Planning in Task Space}
\label{sec:MotionPlanningTS}
To manipulate a polyhedral object between any two given poses $\mathcal{C}_O$ and $\mathcal{C}_F$ while maintaining contact with the environment, multiple pivoting moves can be combined by defining a set of appropriate \textit{intermediate poses}. The set of intermediate poses $\mathcal{C}_I = \{\mathcal{C}_I^1,\mathcal{C}_I^2, \cdots, \mathcal{C}_I^h \}$ is defined such that the motion between any two successive poses in the sequence $\{\mathcal{C}_O, \mathcal{C}_I, \mathcal{C}_F \}$ can be represented by a single constant-screw pivoting move.
Thus, we can conveniently represent the motion between any two given object poses $\mathcal{C}_O$ and $\mathcal{C}_F$ in $SE(3)$ by using ScLERP to ensure that the object maintains its contact with the environment continuously. The object manipulation strategies on a flat surface can be categorized into three cases: (\textbf{Case I}) If $\mathcal{C}_O$ and $\mathcal{C}_F$ have a contact edge or vertex in common, the final pose can be achieved by pivoting the object about the common point or edge (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_T},\subref{Fig:SRP_P}).
(\textbf{Case II}) If $\mathcal{C}_O$ and $\mathcal{C}_F$ do not have any edge or vertex in common but the same face of the object is in contact with the environment in both poses, different strategies can be considered. One strategy is to use a sequence of pivoting motions about the object edges (tumbling). In this motion, the travel distance is discrete and depends on the object size, so it may not be suitable for manipulating some objects like furniture.
In this situation, we can instead manipulate the object by \textit{object gaiting} (Fig.~\ref{Fig:IntermediateConfigs_Edges}-a), which is defined as a sequence of pivoting motions on two adjacent object vertices in contact
(see \ref{subsec:IntermediatePosesObjectGaiting} and \ref{subsec:GaitPlanning}).
(\textbf{Case III}) If
the adjacent or opposite faces of the object are in contact with the environment in the two poses, a combination of pivoting and gaiting is required to achieve the final pose, as shown in Fig.~\ref{Fig:Examples}. Depending on the manipulators' physical limitations, object gaiting may be efficient only when a specific face of the object is in contact with the environment. For instance, manipulation on the longer edge of the cuboid shown in Fig.~\ref{Fig:IntermediateConfigs_Edges}-a may be more difficult than on the two other edges.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseIV_A.pdf}}
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseIV_B.pdf}}
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseV_A.pdf}}
\caption{Examples of the object manipulation with primitive motions when two adjacent (a,b) or opposite (c) object faces are in contact with the environment in initial and final poses (P: Pivoting, G: Gaiting).}
\label{Fig:Examples}
\end{figure}
\subsection{Intermediate Poses in Object Gaiting}
\label{subsec:IntermediatePosesObjectGaiting}
Let us assume that the axes of the body frame $\{b\}$ are parallel to the cuboid edges and the inertial frame $\{s\}$ is attached to the supporting plane such that the $Z$-axis is perpendicular to the plane (Fig.~\ref{Fig:ObjectGaiting}). Three successive intermediate poses while pivoting about the vertex $a$ are shown in Fig.~\ref{Fig:ObjectGaiting}-a,b. The object is initially in the pose $\mathcal{C}_I^1 = (R_{1}, p_{1})$ (Fig.~\ref{Fig:ObjectGaiting}-a), resting on the contact edge $ab$. The angle $\gamma$ can be chosen such that the object weight passes through the contact edge $ab$ to reduce the required contact forces during the manipulation. The pose $\mathcal{C}_I^2 = (R_{2}, p_{2})$ (Fig.~\ref{Fig:ObjectGaiting}-a) is achieved by rotating the object by a small angle $\beta$ about the edge passing through the vertex $a$; therefore, $R_{2} = R_{1} R_{x}({-\beta})$ and only the vertex $a$ is in contact. Note that the angle $\beta$ can be adjusted during the motion to allow the object to pass over small obstacles in the environment. Finally, the pose $\mathcal{C}_I^3 = (R_{3}, p_{3})$ (Fig.~\ref{Fig:ObjectGaiting}-b) is obtained by rotating $\mathcal{C}_I^1$ by an angle $\alpha$ about the $Z$-axis through the vertex $a$; therefore, $R_{3} = R_Z({\alpha}) R_{1}$ and the edge $ab$ is again in contact with the environment. This procedure can also be repeated for the vertex $b$. Using these intermediate poses, ScLERP can be used to derive a smooth motion for object gaiting while maintaining contact with the environment.
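
A minimal Python sketch of the two intermediate rotations of one gait step, $R_{2} = R_{1} R_{x}(-\beta)$ and $R_{3} = R_{Z}(\alpha) R_{1}$, is given below (the function names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def Rx(a):                      # rotation about the body x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Rz(a):                      # rotation about the inertial Z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def gait_step_rotations(R1, alpha, beta):
    R2 = R1 @ Rx(-beta)         # tilt onto the single vertex a
    R3 = Rz(alpha) @ R1         # swing by alpha about the vertical axis through a
    return R2, R3
\end{verbatim}
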
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.55]{Figures/I2.pdf}} \quad \quad
\subfloat[]{\includegraphics[scale=0.55]{Figures/I3.pdf}}
\caption{Intermediate poses in object gaiting while pivoting.}
\label{Fig:ObjectGaiting}
\end{figure}
\subsection{Gait Planning}
\label{subsec:GaitPlanning}
In order to manipulate the object from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ by object gaiting, a sequence of rotation angles $\alpha$ between these two poses must be properly determined (Fig.~\ref{Fig:IntermediateConfigs_Edges}-a). Let $k$ be the number of required edge contacts and $\bm{\alpha} = [\alpha_1, \cdots, \alpha_k]^T \in \mathbb{R}^k$ be the angles between the contact edges as shown in Fig.~\ref{Fig:IntermediateConfigs_Edges}-b. We can find $\bm{\alpha}$ by solving the following optimization problem:
\begin{equation}
\begin{aligned}
&{\underset {\bm{\alpha}}{\operatorname {minimize}}}&& \lVert \bm{\alpha} \rVert \\[-8pt]
&\operatorname {subject\;to} && \boldsymbol{x} = \pm w \sum_{i=1}^k{\left( -1 \right) ^{i}\left[ \begin{array}{@{\mkern0mu} c @{\mkern0mu}}
\cos \left( \alpha _O \pm \bar{\alpha} \right)\\
\sin \left( \alpha _O \pm \bar{\alpha} \right)\\
\end{array} \right]},\\[-5pt]
&&& \alpha _{F} - \alpha _O = \pm \sum_{i=1}^k{\left( -1 \right) ^{i}\alpha_i },\\
&&& \left| \alpha _i \right| \leq \alpha_{\text{max}},\ \ i=1,...,k,
\end{aligned}
\label{eq:GaitPlanning}
\end{equation}
where $\bar{\alpha} = \sum_{j=1}^i{\left( -1 \right) ^{j}\alpha _j }$, $\alpha_{\text{max}}$ is the maximum allowed rotation angle, $w$ is the length of the contact edge, and $\alpha_{O}$ and $\alpha_{F}$ represent the orientations of the contact edges $a_O b_O$ and $a_F b_F$ relative to the $X$-axis, respectively. The negative sign corresponds to the case where the first gait begins from $a_O$; then $\boldsymbol{x} = \boldsymbol{b}_{F} - \boldsymbol{a}_{O}$ if $k$ is an odd number and $\boldsymbol{x} = \boldsymbol{a}_{F} - \boldsymbol{a}_{O}$ if $k$ is an even number. The positive sign corresponds to the case where the first gait begins from $b_O$; then $\boldsymbol{x} = \boldsymbol{a}_{F} - \boldsymbol{b}_{O}$ if $k$ is an odd number and $\boldsymbol{x} = \boldsymbol{b}_{F} - \boldsymbol{b}_{O}$ if $k$ is an even number. $\boldsymbol{a}_{O}$, $\boldsymbol{b}_{O}$, $\boldsymbol{a}_{F}$, $\boldsymbol{b}_{F} \in \mathbb{R}^2$ are the coordinates of the contact vertices in the poses $\mathcal{C}_O$ and $\mathcal{C}_F$ along the $X$- and $Y$-axes of the frame $\{s\}$. In the optimization problem (\ref{eq:GaitPlanning}), the first constraint represents the displacement of the last contact vertex ($a_F$ or $b_F$) relative to the first contact vertex ($a_O$ or $b_O$) in the $X$ and $Y$ directions, the second constraint represents the relative angle between the contact edges $a_O b_O$ and $a_F b_F$, and the last constraint accounts for the manipulators' limitations in rotating the object.
In order to find the feasible minimum number of edge contacts, $k$, required to manipulate the object between two poses $\mathcal{C}_O$ and $\mathcal{C}_F$, we need to repeat (\ref{eq:GaitPlanning}) for different values of $k$.
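
For illustration, (\ref{eq:GaitPlanning}) can be prototyped with an off-the-shelf nonlinear solver. The sketch below assumes the negative-sign case (first gait starting from $a_O$) with $k$ even, so $\boldsymbol{x} = \boldsymbol{a}_F - \boldsymbol{a}_O$, and uses the numbers of the flat-surface example of Fig.~\ref{Fig:Example_Flat}; the problem is nonconvex, so the result depends on the initial guess, and this is only a sketch, not the solver used for the reported results.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

w, alpha_max = 0.2, np.deg2rad(35.0)
alpha_O, alpha_F = 0.0, np.deg2rad(-80.0)
x = np.array([0.13, 0.13])                       # a_F - a_O
k = 4

def alpha_bar(alpha):                            # running sums sum_{j<=i} (-1)^j alpha_j
    signs = np.array([(-1.0) ** j for j in range(1, k + 1)])
    return np.cumsum(signs * alpha)

def position_residual(alpha):                    # first constraint, negative-sign case
    ab = alpha_bar(alpha)
    terms = [(-1.0) ** i * np.array([np.cos(alpha_O - ab[i - 1]),
                                     np.sin(alpha_O - ab[i - 1])])
             for i in range(1, k + 1)]
    return -w * np.sum(terms, axis=0) - x

def angle_residual(alpha):                       # second constraint, negative-sign case
    signs = np.array([(-1.0) ** i for i in range(1, k + 1)])
    return (alpha_F - alpha_O) + signs @ alpha

res = minimize(lambda a: np.linalg.norm(a),
               x0=np.deg2rad([-10.0, 30.0, -10.0, 30.0]),   # rough alternating guess
               constraints=[{'type': 'eq', 'fun': position_residual},
                            {'type': 'eq', 'fun': angle_residual}],
               bounds=[(-alpha_max, alpha_max)] * k, method='SLSQP')
print(np.rad2deg(res.x))   # expected close to [-10.55, 29.56, -12.63, 27.25] deg
\end{verbatim}
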
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.4]{Figures/ObjectGaiting.pdf}} \quad \,
\subfloat[]{\includegraphics[scale=0.34]{Figures/IntermediateConfigs_Edges_Obstacle.pdf}}
\caption{A sequence of contact edges for object gaiting between two poses $\mathcal{C}_O$ and $\mathcal{C}_F$ when the first gait begins from the edge $a_O$.}
\label{Fig:IntermediateConfigs_Edges}
\end{figure}
\section{Mapping from $\mathbb{T}$-space to $\mathbb{J}$-space}
Since it is assumed that the transformation between the end-effector frame $\{e_i\}$ and contact frame $\{c_i\}$ remains constant,
after planning a path in the $\mathbb{T}$-space, we can compute the end-effector poses $\mathcal{E}_i$ for each object intermediate pose.
Then, we use the ScLERP for each of these end-effector poses individually with a shared screw parameter \cite{daniilidis1999hand,kavan2006dual}. To find the joint angles of the manipulators in $\mathbb{J}$-space, we use the (weighted) pseudoinverse of the manipulators' Jacobian \cite{Klein1983}.
Let $\boldsymbol{\Theta}_{t}$ and $\boldsymbol{\chi}_{t}$ be the vector of joint angles and end-effector’s pose at the step $t$, respectively.
For each manipulator, given the current end-effector pose $\boldsymbol{\chi}_{t}$ and the target end-effector pose $\boldsymbol{\chi}_{t+1}$ (obtained from ScLERP), we compute the corresponding joint angles $\boldsymbol{\Theta}_{t+1}$ as
\begin{equation}
\boldsymbol{\Theta}_{t+1} = \boldsymbol{\Theta}_{t} + \lambda \boldsymbol{J}(\boldsymbol{\Theta}_{t}) (\boldsymbol{\chi}_{t+1} - \boldsymbol{\chi}_{t}),
\label{eq:IK}
\end{equation}
where $0 < \lambda \le 1$ is a step length parameter (refer to \cite{Sarker2020} for a complete algorithm). Here $\boldsymbol{J}$ is the (weighted) pseudo-inverse of the manipulator Jacobian. By using (\ref{eq:IK}) between any two successive poses in $\{\mathcal{C}_O, \mathcal{C}_I, \mathcal{C}_F \}$, $\boldsymbol{\Theta}^i(j)$ ($j=1,\cdots,m$) for the $i$-th manipulator is computed.
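
A Python sketch of this update is given below; \texttt{forward\_kinematics} and \texttt{jacobian} are hypothetical placeholders for the robot-specific kinematics, and the end-effector pose is assumed to be expressed as a 6-vector so that the pose difference in (\ref{eq:IK}) is a plain subtraction (a weighted pseudoinverse can be substituted for \texttt{np.linalg.pinv}).
\begin{verbatim}
import numpy as np

def rmrc_step(theta, chi_target, forward_kinematics, jacobian, lam=0.5):
    # one iteration of eq. (IK): theta <- theta + lam * J^+ (chi_target - chi)
    chi = forward_kinematics(theta)          # current end-effector pose (6-vector)
    J = jacobian(theta)                      # 6 x l_i manipulator Jacobian
    return theta + lam * np.linalg.pinv(J) @ (chi_target - chi)

def follow_waypoints(theta0, waypoints, forward_kinematics, jacobian,
                     lam=0.5, tol=1e-4, max_iter=200):
    # track the poses sampled along the ScLERP path, one waypoint at a time
    theta = np.asarray(theta0, dtype=float)
    for chi_target in waypoints:
        for _ in range(max_iter):
            theta = rmrc_step(theta, chi_target, forward_kinematics, jacobian, lam)
            if np.linalg.norm(forward_kinematics(theta) - chi_target) < tol:
                break
    return theta
\end{verbatim}
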
\section{Implementation and Results}
In this section, we briefly present the simulation results for manipulating a heavy cuboid object on a flat surface and over a step.
Videos of our simulations are presented in the video attachment to the paper.
\noindent
\textbf{Manipulation on a Flat Surface}: In this example, we plan motion to reorient a heavy object from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$, in its vicinity, by object gaiting as shown in Fig.~\ref{Fig:Example_Flat}-\subref{Fig:Example_Flat_Object}.
Existing planning algorithms~\cite{yoshida2010pivoting} cannot efficiently solve this problem, because their motion plan is essentially restricted to move on Reeds and Shepp curves.
By using the proposed optimization problem (\ref{eq:GaitPlanning}), we can find the minimum number of contact edges required to manipulate the object between these two poses. The simulation results are shown in Fig.~\ref{Fig:Example_Flat}-\subref{Fig:Example_Flat_Edges}. As shown, at least 3 contact edges (in total 7 intermediate poses) are required to reach the final pose by starting pivoting from the edge $a$.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.44]{Figures/Example_Flat.pdf}\label{Fig:Example_Flat_Object}} \qquad
\subfloat[]{\includegraphics[scale=0.34]{Figures/Example_Flat_Edges.pdf}\label{Fig:Example_Flat_Edges}}
\caption{Object gaiting on a flat surface where $a_O = [0, \, 0]$, $\alpha_{O} = 0^{\circ}$, $a_F = [0.13, \, 0.13]$m, $\alpha_F = -80^{\circ}$, $w = 0.2$m, $\alpha_{\text{max}} = 35^{\circ}$, $\alpha_1 = -10.55^{\circ}$, $\alpha_2 = 29.56^{\circ}$, $\alpha_3 = -12.63^{\circ}$, $\alpha_4 = 27.25^{\circ}$.}
\label{Fig:Example_Flat}
\end{figure}
\noindent
\textbf{Manipulation over a Step}: In this example, we plan motion and force to manipulate a heavy object over a step (Fig.~\ref{Fig:Example_Step}) by both 7-DoF arms of Baxter robot.
The computed motion plan includes 3 stages: (1) pivoting about the object edge ($\mathcal{C}_I^1$), (2) pivoting about the vertex $v$ ($\mathcal{C}_I^2$), where the object face and only the vertex $v$ are in contact with the environment, (3) changing the location of the end-effectors' contacts and pivoting about the step edge ($\mathcal{C}_F$). Thus, we have two intermediate poses $\{\mathcal{C}_I^1,\mathcal{C}_I^2\}$.
We implemented $\mathbb{T}$-space planning, conversion to $\mathbb{J}$-space, and our force planning method described in \cite{Patankar2020} to find the minimum required normal forces $f_{c_{n,1}}$ and $f_{c_{n,2}}$ at both object--end-effector contacts $\{c_1\}$ and $\{c_2\}$ in each motion stage.
In Fig.~\ref{Fig:contact_force_results}, the variations of the normal contact forces with respect to the number of iterations to reach the goal pose at 3 stages of object manipulation over a step are shown.
In stage 1, $f_{c_{n,1}}$ and $f_{c_{n,2}}$ first decrease and become negligible at a particular object tilting angle, where the weight of the object passes through its support edge, and then increase. In stage 2, since the motion is not symmetric, there is a difference between the right and left end-effector normal contact forces in order to balance the object weight. In stage 3, the object-environment contact points are initially located closer to the object center of mass; thus, smaller contact forces are initially required, and as the object pivots, these forces increase.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.45]{Figures/Example_Step.pdf}
\caption{Object manipulation over a step.}
\label{Fig:Example_Step}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.29]{Figures/fc_complete.pdf}
\caption{The normal contact forces at $\{c_1\}$ and $\{c_2\}$, where the object mass is $m = 2$\,kg, the maximum joint torque for the shoulder and elbow joints is $\tau_{\text{max}} = 50$\,Nm, and the maximum joint torque for the wrist joints is $\tau_{\text{max}} = 15$\,Nm.}
\label{Fig:contact_force_results}
\end{figure}
\section{Conclusion and Future Work}
In this paper, we have proposed a novel approach for manipulating heavy objects using a sequence of pivoting motions. We have implemented our proposed motion and force planning in two different scenarios: reorienting an object by gaiting and manipulating a heavy object over a step. Given the initial and final poses of the object, we first compute the required intermediate poses. These poses are derived by solving an optimization problem that computes the optimal values of the rotation angles between contact edges during \textit{object gaiting}. Then, by using ScLERP, we can interpolate between these intermediate poses while satisfying all the task-related constraints.
Using RMRC, we map the task-space plan to the joint space, allowing us to compute the contact forces and the joint torques required to manipulate the object. Future work includes relaxation of the quasi-static assumption in the force planning and experimental evaluation of the proposed approach.
\addtolength{\textheight}{-10.5cm}
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
With the development of natural language processing and deep learning, multilingual machine translation has gradually attracted the interest of researchers \citep{dabre-etal-2020-multilingual}.
Moreover, the multilingual machine translation model demands less space than multiple bilingual unidirectional machine translation models, making it more popular among developers \citep{liu2020multilingual, zhang-etal-2020-improving, fan2020beyond}.
However, existing multilingual machine translation models face imbalance problems.
On the one hand, various sizes of training corpora for different language pairs cause imbalance.
Typically, the training corpora size of some high resource languages (HRLs) is hundreds or thousands of times that of some low resource languages (LRLs) \citep{schwenk2019ccmatrix}, resulting in lower competence of LRL learning.
On the other hand, translation between different languages has different difficulty, which also leads to imbalance.
In general, translation between closely related language pairs is easier than that between distant language pairs, even if the training corpora are of the same size \citep{barrault-etal-2020-findings}.
This would lead to low learning competencies for distant languages compared to closely related languages.
Therefore, multilingual machine translation is inherently imbalanced, and dealing with this imbalance is critical to advancing multilingual machine translation \citep{dabre-etal-2020-multilingual}.
To address the above problem, existing balancing methods can be divided into two categories, i.e., static and dynamic.
1) Among static balancing methods, temperature-based sampling \citep{arivazhagan2019massively} is the most common one, compensating for the gap between different training corpora sizes by oversampling the LRLs and undersampling the HRLs.
2) Researchers have also proposed some dynamic balancing methods \citep{jean2019adaptive, wang-etal-2020-balancing}.
\citet{jean2019adaptive} introduce an adaptive scheduling, oversampling the languages with poorer results than their respective baselines.
In addition, MultiDDS-S \citep{wang-etal-2020-balancing} focuses on learning an optimal strategy to automatically balance the usage of training corpora for different languages during multilingual training.
Nevertheless, the above methods focus too much on balancing LRLs, resulting in lower competencies for HRLs compared to models trained only on bitext corpora.
Consequently, the performance on the HRLs by the multilingual translation model is inevitably worse than that of bitext models by a large margin \citep{lin-etal-2020-pre}.
Besides, knowledge learned from related HRLs is also beneficial for LRLs \citep{neubig-hu-2018-rapid}, but this is neglected by previous approaches, limiting their performance on LRLs.
Therefore, in this paper, we try to balance the learning competencies of languages and propose a \emph{\textbf{C}ompetence-based \textbf{C}urriculum \textbf{L}earning Approach for \textbf{M}ultilingual Machine Translation}, named CCL-M.
Specifically, we firstly define two competence-based evaluation metrics to help schedule languages, which are 1) \emph{Self-evaluated Competence}, for evaluating how well the language itself has been learned; and 2) \emph{HRLs-evaluated Competence}, for evaluating whether an LRL is ready to be learned by the LRL-specific HRLs' \emph{Self-evaluated Competence}.
Based on the above two competence-based evaluation metrics, we design the CCL-M algorithm to gradually add new languages into the training set.
Furthermore, we propose a novel competence-aware dynamic balancing sampling method for better selecting training samples at multilingual training.
We evaluate our approach on the multilingual Transformer \citep{vaswani2017attention} and conduct experiments on the TED talks\footnote{\url{https://www.ted.com/participate/translate}} to validate the performance in two multilingual machine translation scenarios, i.e., \emph{many-to-one} and \emph{one-to-many} ("\emph{one}" refers to English).
Experimental results show that our approach brings in consistent and significant improvements compared to the previous state-of-the-art approach \citep{wang-etal-2020-balancing} on multiple translation directions in the two scenarios.
Our contributions\footnote{We release our code on \url{https://github.com/zml24/ccl-m}.} are summarized as follows:
\begin{itemize}
\item
We propose a novel competence-based curriculum learning method for multilingual machine translation.
To the best of our knowledge, we are the first to integrate curriculum learning into multilingual machine translation.
\item
We propose two effective competence-based evaluation metrics to dynamically schedule which languages to learn, and a competence-aware dynamic balancing sampling method for better selecting training samples at multilingual training.
\item
Comprehensive experiments on the TED talks dataset in two multilingual machine translation scenarios, i.e., \emph{many-to-one} and \emph{one-to-many}, demonstrate the effectiveness and superiority of our approach,
which significantly outperforms the previous state-of-the-art approach.
\end{itemize}
\section{Background}
\subsection{Multilingual Machine Translation}
Bilingual machine translation model translates a sentence of source language $S$ into a sentence of target language $T$ (\citealp{sutskever2014sequence}; \citealp{cho-etal-2014-learning}; \citealp{bahdanau2014neural}; \citealp{luong-etal-2015-effective}; \citealp{vaswani2017attention}), which is trained as
\begin{equation}
\theta^* = \argmin_\theta \mathcal{L} (\theta; S, T) ,
\end{equation}
where $\mathcal{L}$ is the loss function, $\theta^*$ is the model parameters.
Multilingual machine translation system aims to train multiple language pairs in a single model, including \emph{many-to-one} (translation from multiple languages into one language), \emph{one-to-many} (translation from one language to multiple languages), and \emph{many-to-many} (translation from several languages into multiple languages) \citep{dabre-etal-2020-multilingual}.
Specifically, we denote the training corpora of $n$ language pairs in multilingual machine translation as $\{S_1, T_1\}$, $\{S_2, T_2\}$, $\dots$, $\{S_n, T_n\}$ and multilingual machine translation aims to train a model $\theta^*$ as
\begin{equation}
\theta^* = \argmin_\theta \frac{1}{n} \sum_{i = 1}^n \mathcal{L} (\theta; S_i, T_i) .
\end{equation}
\subsection{Sampling Methods}
Generally, the size of the training corpora for different language pairs in multilingual machine translation varies greatly.
Researchers hence developed two kinds of sampling methods, i.e., static and dynamic, to sample the language pairs at training \citep{dabre-etal-2020-multilingual}.
There are three mainstream static sampling methods, i.e., uniform sampling, proportional sampling, and temperature-based sampling \citep{arivazhagan2019massively}.
These methods sample the language pairs by the predefined fixed sampling weights $\psi$.
\paragraph{Uniform Sampling.} Uniform sampling is the most straightforward solution \citep{johnson-etal-2017-googles}. The sampling weight $\psi_i$ for each language pair $i$ of this method is calculated as follows
\begin{equation}
\psi_i = \frac{1}{\vert \mathcal{S}_\text{lang} \vert} ,
\end{equation}
where $\mathcal{S}_\text{lang}$ is the language sets for training.
\paragraph{Proportional Sampling.} Another method is sampling by proportion \citep{neubig-hu-2018-rapid}. This method improves the model's performance on high resource languages and reduces the performance of the model on low resource languages. Specifically, we calculate its sampling weight $\psi_i$ for each language pair $i$ as
\begin{equation}
\psi_i = \frac{\vert \mathcal{D}^i_\text{Train} \vert}{\sum_{k \in \mathcal{S}_\text{lang} } \vert \mathcal{D}^k_\text{Train} \vert} ,
\end{equation}
where $\mathcal{D}_\text{Train}$ is the training corpora of language $i$.
\paragraph{Temperature-based Sampling.} It samples the language pairs according to the corpora size exponentiated by a temperature term $\tau$ (\citealp{arivazhagan2019massively}; \citealp{conneau-etal-2020-unsupervised}) as
\begin{equation}
\psi_i = \frac{p_i^{1 / \tau}}{\sum_{k \in \mathcal{S}_\text{lang}} p_k^{1 / \tau}} \ \text{where} \ p_i = \frac{\vert \mathcal{D}^i_\text{Train} \vert}{\sum_{k \in \mathcal{S}_\text{lang} } \vert \mathcal{D}^k_\text{Train} \vert} .
\end{equation}
Obviously, $\tau = \infty$ corresponds to uniform sampling and $\tau = 1$ to proportional sampling; both are extreme cases from the perspective of $\tau$.
In practice, we usually select a moderate $\tau$ to achieve a balanced result.
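For concreteness, the three static strategies above can be summarized in a few lines of code. The following is a minimal Python sketch (the corpus sizes are illustrative only and are not taken from the TED talks data):
\begin{verbatim}
import numpy as np

def static_sampling_weights(corpus_sizes, tau):
    # tau = 1   -> proportional sampling
    # tau = inf -> uniform sampling
    # otherwise -> temperature-based sampling
    sizes = np.asarray(corpus_sizes, dtype=float)
    if np.isinf(tau):
        return np.full(len(sizes), 1.0 / len(sizes))
    p = sizes / sizes.sum()        # proportional distribution p_i
    w = p ** (1.0 / tau)           # exponentiate by 1 / tau
    return w / w.sum()

# Illustrative corpus sizes (sentence pairs) for four language pairs.
sizes = [5000, 20000, 100000, 400000]
print(static_sampling_weights(sizes, tau=1))       # proportional
print(static_sampling_weights(sizes, tau=5))       # temperature-based
print(static_sampling_weights(sizes, tau=np.inf))  # uniform
\end{verbatim}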
In contrast, dynamic sampling methods (e.g., MultiDDS-S~\citep{wang-etal-2020-balancing}) aim to automatically adjust the sampling weights during training according to some predefined rules.
\paragraph{MultiDDS-S.} MultiDDS-S \citep{wang-etal-2020-balancing} is a dynamic sampling method performing differentiable data sampling.
It takes turns to optimize the sampling weights of different languages and the multilingual machine translation model, showing more significant potential than static sampling methods. This method optimizes the sample weight $\psi$ to minimize the development loss as follows
\begin{equation}
\psi^* = \argmin_\psi \mathcal{L} (\theta^*; \mathcal{D}_\text{Dev}) ,
\end{equation}
\begin{equation}
\theta^* = \argmin_\theta \sum_{i = 1}^n \psi_i \mathcal{L} (\theta; \mathcal{D}_\text{Train}) ,
\end{equation}
where $\mathcal{D}_\text{Dev}$ and $\mathcal{D}_\text{Train}$ denote the development corpora and the training corpora, respectively.
\section{Methodology}
In this section, we first define a directed bipartite language graph, on which we deploy the languages to train.
Then, we define two competence-based evaluation metrics, i.e., the \emph{Self-evaluated Competence} $c$ and the \emph{HRLs-evaluated Competence} $\hat{c}$, to help decide which languages to learn.
Finally, we elaborate on the entire CCL-M algorithm.
\subsection{Directed Bipartite Language Graph}
Formally, we define a directed bipartite language graph $G(V, E)$, in which one side consists of HRLs and the other side of LRLs.
Each vertex $v_i$ on the graph represents a language, and the weight of each directed edge $e_{ij}$ (from an HRL $i$ to an LRL $j$) indicates the similarity between the two languages:
\begin{equation}
e_{ij} = \text{sim}(i, j) .
\end{equation}
Inspired by TCS \citep{wang-neubig-2019-target}, we measure it using vocabulary overlap and define the language similarity between language $i$ and language $j$ as
\begin{equation}
\text{sim}(i, j) = \frac{\vert \text{vocab}_k(i) \cap \text{vocab}_k(j) \vert}{k} ,
\label{eq:sim}
\end{equation}
where $\text{vocab}_k(\cdot)$ represents the top $k$ most frequent subwords in the training corpus of a specific language.
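As a minimal sketch (assuming the training corpora have already been tokenized into subwords; the value of $k$ here is an illustrative assumption), the similarity in Equation~\ref{eq:sim} can be computed as follows:
\begin{verbatim}
from collections import Counter

def top_k_vocab(corpus, k):
    # k most frequent subwords in a tokenized corpus (list of token lists)
    counts = Counter(tok for sent in corpus for tok in sent)
    return {tok for tok, _ in counts.most_common(k)}

def language_similarity(corpus_i, corpus_j, k=1000):
    # vocabulary-overlap similarity: |top-k(i) & top-k(j)| / k
    return len(top_k_vocab(corpus_i, k) & top_k_vocab(corpus_j, k)) / k
\end{verbatim}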
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{cl.png}
\caption{Diagram of the CCL-M Algorithm. This graph shows how the LRLs are gradually added to the training set $\mathcal{S}_\text{selected}$ using graph coloring. "aze" stands for Azerbaijani, "bel" stands for Belarusian, etc. The number after the colon indicates the current \emph{HRLs-evaluated Competence}, and the corresponding threshold $t$ is assumed to be 0.8. Subfigure (a) represents the state before training. Subfigure (b) indicates that "slk" (Slovak) is added to the training set because its \emph{HRLs-evaluated Competence} is higher than the threshold. Subfigure (c) indicates that "aze" (Azerbaijani) and "glg" (Galician) are added to the training set, and Subfigure (d) indicates that all the LRLs have been added to the training set. Notice that we use the language abbreviation (xxx) to denote the language pair (xxx-eng or eng-xxx), which is more general.}
\label{ccl-m}
\end{figure*}
\subsection{Competence-based Evaluation Metrics}
\paragraph{Self-evaluated Competence.} We define how well a language itself has been learned as the \emph{Self-evaluated Competence} $c$.
In the following paragraphs, we first introduce the concept of the \emph{Likelihood Score} and then give a formula for calculating the \emph{Self-evaluated Competence} in multilingual training, based on the relationship between the current \emph{Likelihood Score} and the \emph{Likelihood Score} of a model trained on the corresponding bitext corpus.
For machine translation, we usually use the label smoothed \citep{szegedy2016rethinking} cross-entropy loss $\mathcal{L}$ to measure how well the model is trained, and calculate it as
\begin{equation}
\mathcal{L} = - \sum_i p_i \log_2 q_i ,
\end{equation}
where $p$ is the label smoothed actual probability distribution, and $q$ is the model output probability distribution\footnote{We select 2 as the base number for all relevant formulas and experiments in this paper.}.
We observe that the exponential of the negative label smoothed cross-entropy loss acts as a likelihood, which is negatively correlated with the loss.
Since neural models are usually optimized by minimizing the loss, we use this likelihood as a positively correlated indicator to measure competence.
Therefore, we define a \emph{Likelihood Score} $s$ to estimate how well the model is trained as follows
\begin{equation}
s = 2^{-\mathcal{L}} = \prod_i q_i^{p_i} .
\end{equation}
Inspired by \citet{jean2019adaptive}, we estimate the \emph{Self-evaluated Competence} $c$ of a specific language by calculating the quotient of its current \emph{Likelihood Score} and baseline's \emph{Likelihood Score}.
Finally, we obtain the formula as follows
\begin{equation} \label{self-competence}
c = \frac{s}{s^*} = 2^{\mathcal{L}^* - \mathcal{L}} ,
\end{equation}
where $\mathcal{L}$ is the current loss on the development set, $\mathcal{L}^*$ is the \emph{benchmark} loss of the converged bitext model on the development set, and $s$ and $s^*$ are their corresponding \emph{Likelihood Scores}, respectively.
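As a minimal sketch of the two quantities above (losses are in base 2, consistent with the rest of the paper; the numeric values below are purely illustrative):
\begin{verbatim}
def likelihood_score(loss):
    # Likelihood Score s = 2^{-L}
    return 2.0 ** (-loss)

def self_evaluated_competence(dev_loss, benchmark_loss):
    # Self-evaluated Competence c = s / s* = 2^{L* - L}
    return 2.0 ** (benchmark_loss - dev_loss)

# Example: the converged bitext model reaches 3.0 bits on the development
# set, while the current multilingual model is at 3.5 bits.
print(self_evaluated_competence(dev_loss=3.5, benchmark_loss=3.0))  # ~0.71
\end{verbatim}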
\paragraph{HRLs-evaluated Competence.} Furthermore, we define how well an LRL is ready to be learned as its \emph{HRLs-evaluated Competence} $\hat{c}$.
We believe that each LRL can learn adequate knowledge from its similar HRLs before training.
Therefore, we estimate each LRL's \emph{HRLs-evaluated Competence} from the \emph{Self-evaluated Competence} of its related HRLs.
Specifically, we propose two methods for calculating the \emph{HRLs-evaluated Competence}, i.e., \emph{maximal} ($\text{CCL-M}_\text{max}$) and \emph{weighted average} ($\text{CCL-M}_\text{avg}$).
$\text{CCL-M}_\text{max}$ only transfers knowledge from the HRL that is most similar to the LRL, so we calculate the \emph{maximal} \emph{HRLs-evaluated Competence} $\hat{c}_{\text{max}}$ for each LRL $j$ as
\begin{equation}
\hat{c}_{\text{max}}(j)= c_{\argmax_{i \in \mathcal{S}_\text{HRLs}} e_{ij}} ,
\end{equation}
where $\mathcal{S}_{\text{HRLs}}$ is the set of the HRLs.
On the other hand, the $\text{CCL-M}_\text{avg}$ method pays attention to all HRLs.
In general, the higher the language similarity, the more knowledge an LRL can transfer from the HRLs.
Therefore, we calculate \emph{weighted average} \emph{HRLs-evaluated Competence} $\hat{c}_{\text{avg}}$ for each LRL $j$ as
\begin{equation}
\hat{c}_{\text{avg}}(j)= \sum_{i \in \mathcal{S}_\text{HRLs}} \left ( \frac{e_{ij}}{\sum_{k \in \mathcal{S}_\text{HRLs}} e_{kj}} \cdot c_i \right ) .
\end{equation}
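Both variants can be sketched as follows, given the edge weights $e_{ij}$ of the bipartite graph and the \emph{Self-evaluated Competence} of every HRL (the dictionary-based representation here is an assumption for illustration):
\begin{verbatim}
def hrls_evaluated_competence_max(lrl, hrls, e, c):
    # CCL-M_max: competence of the HRL most similar to the LRL
    best_hrl = max(hrls, key=lambda h: e[(h, lrl)])
    return c[best_hrl]

def hrls_evaluated_competence_avg(lrl, hrls, e, c):
    # CCL-M_avg: similarity-weighted average of all HRLs' competences
    total = sum(e[(h, lrl)] for h in hrls)
    return sum(e[(h, lrl)] / total * c[h] for h in hrls)
\end{verbatim}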
\subsection{The CCL-M Algorithm}
We now describe in detail the \emph{\textbf{C}ompetence-based \textbf{C}urriculum \textbf{L}earning for \textbf{M}ultilingual Machine Translation}, namely the CCL-M algorithm. The algorithm is divided into two parts: 1) a curriculum learning scheduling framework, which decides when to add a language to the training set; 2) competence-aware dynamic balancing sampling, which decides how to sample languages in the training set.
First, we present how to schedule which languages on the directed bipartite language graph should be added to the training set according to the two competence-based evaluation metrics as shown in Figure \ref{ccl-m} and Algorithm \ref{alg:the_alg}, where $\mathcal{S}_{\text{LRLs}}$ is the set of LRLs, and $f(\cdot)$ is the function calculating the \emph{HRLs-evaluated Competence} $\hat{c}$ for LRLs.
As initialized in Line \ref{lst:1}, we add all languages on the HRLs side to the training set $\mathcal{S}_\text{selected}$ at the beginning of training, leaving all languages on the LRLs side in the candidate set $\mathcal{S}_\text{candidate}$.
Then, we regularly sample the development corpora of different languages and calculate the current \emph{HRLs-evaluated Competence} of the languages in the candidate set $\mathcal{S}_\text{candidate}$, as shown in Lines \ref{lst:8} and \ref{lst:9}.
Further, the "if" condition in Line \ref{lst:13} shows that an LRL is added to the training set $\mathcal{S}_\text{selected}$ once its \emph{HRLs-evaluated Competence} exceeds a pre-defined threshold $t$.
However, as Equation \ref{self-competence} shows, the upper bound of the \emph{Self-evaluated Competence} of a specific language may not always reach 1 during multilingual training.
As a result, some LRLs may remain outside the training set $\mathcal{S}_\text{selected}$ for certain thresholds.
To ensure the completeness of our algorithm, we directly add any languages still in the candidate set $\mathcal{S}_\text{candidate}$ to the training set $\mathcal{S}_\text{selected}$ after a sufficiently large number of steps, as described in Lines \ref{lst:22} to \ref{lst:32}.
\begin{algorithm}[!t]
\SetAlgoLined
\KwIn{Randomly initialized model $\theta$; language graph $G$; \emph{benchmark} losses $\mathcal{L}_i^*$; training corpora $\mathcal{D}_\text{Train}$; development corpora $\mathcal{D}_\text{Dev}$;}
\KwOut{The converged model $\theta^*$;}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{HRLs}$, $\mathcal{S}_\text{candidate} \gets \mathcal{S}_\text{LRLs}$,
$\psi \gets 0$\; \label{lst:1}
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
$\psi_i \gets \frac{1}{\vert \mathcal{S}_\text{\normalfont{selected}} \vert}$\; \label{lst:3}
}
\While{$\theta$ \normalfont{not converge}}{
train the model on $\mathcal{D}_\text{Train}$ for some steps with sampling weight $\psi$\;
\For{$i \in \mathcal{S}_\text{\normalfont{selected}} \cup \mathcal{S}_\text{\normalfont{candidate}}$}{
sample $\mathcal{D}_\text{Dev}$ and calculate $\mathcal{L}_i$\; \label{lst:8}
$c_i \gets 2^{\mathcal{L}_i^* - \mathcal{L}_i}$\; \label{lst:9}
}
\For{$i \in \mathcal{S}_\text{\normalfont{candidate}}$}{
$\hat{c}_i \gets f(G, i, c_{\mathcal{S}_\text{HRLs}})$\;
\If{$\hat{c}_i \geq t$}{ \label{lst:13}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{selected} \cup \{ i \}$\;
$\mathcal{S}_\text{candidate} \gets \mathcal{S}_\text{candidate} \setminus \{ i \} $\;
}
}
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
$\psi_i \gets \frac{1}{c_i}$\;
}
}
\If{$\mathcal{S}_\text{\normalfont{candidate}} \neq \varnothing$}{ \label{lst:22}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{selected} \cup \mathcal{S}_\text{candidate}$\;
\While{$\theta$ \normalfont{not converge}}{
train the model on $\mathcal{D}_\text{Train}$ for some steps with sampling weight $\psi$\;
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
sample $\mathcal{D}_\text{Dev}$ and calculate $\mathcal{L}_i$\;
$c_i \gets 2^{\mathcal{L}_i^* - \mathcal{L}_i}$\;
$\psi_i \gets \frac{1}{c_i}$\;
}
}
} \label{lst:32}
\caption{The CCL-M Algorithm}
\label{alg:the_alg}
\end{algorithm}
Then, we introduce our competence-aware dynamic balancing sampling method, which is based on the \emph{Self-evaluated Competence}.
For languages in the training set $\mathcal{S}_\text{selected}$, we randomly select samples from the development corpora and calculate their \emph{Self-evaluated Competence}.
Languages with low \emph{Self-evaluated Competence} should receive more attention; therefore, we simply set the sampling weight $\psi_i$ of each language $i$ in the training set to the reciprocal of its \emph{Self-evaluated Competence}, as follows
\begin{equation}
\psi_i \propto \frac{1}{c_i} = 2^{\mathcal{L}_i - \mathcal{L}_i^*} .
\end{equation}
Notice that the uniform sampling is used for the training set $\mathcal{S}_\text{selected}$ at the beginning of training as a balancing cold-start strategy.
The corresponding pseudocode can be found in Line \ref{lst:3}.
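The weight update of this sampling step can be sketched as follows (a simplification of Algorithm~\ref{alg:the_alg} that only shows how the weights are recomputed from the sampled development losses; the normalization is an added assumption and does not change the relative proportions):
\begin{verbatim}
def update_sampling_weights(dev_losses, benchmark_losses, selected):
    # psi_i proportional to 1 / c_i = 2^{L_i - L_i*} for selected languages
    weights = {}
    for lang in selected:
        c = 2.0 ** (benchmark_losses[lang] - dev_losses[lang])
        weights[lang] = 1.0 / c
    total = sum(weights.values())
    return {lang: w / total for lang, w in weights.items()}
\end{verbatim}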
\section{Experiments}
\begin{table*}[!t]
\centering
{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{M2O}} & \multicolumn{2}{c}{\textbf{O2M}} \\
& \textbf{Related} & \textbf{Diverse} & \textbf{Related} & \textbf{Diverse} \\
\midrule
Bitext Models & 20.37 & 22.38 & 15.73 & 17.83 \\
Uniform Sampling $(\tau = \infty)$ & 22.63 & 24.81 & 15.54 & 16.86 \\
Temperature-Based Sampling $(\tau = 5)$ & 24.00 & 26.01 & 16.61 & 17.94 \\
Proportional Sampling $(\tau = 1)$ & 24.88 & 26.68 & 15.49 & 16.79 \\
\midrule
MultiDDS \cite{wang-etal-2020-balancing} & 25.26 & 26.65 & 17.17 & 18.40 \\
MultiDDS-S \cite{wang-etal-2020-balancing} & 25.52 & 27.00 & 17.32 & 18.24 \\
\midrule
$\text{CCL-M}_\text{max}$ (Ours) & 26.59** & 28.29** & \textbf{18.89}** & \textbf{19.53}** \\
$\text{CCL-M}_\text{avg}$ (Ours) & \textbf{26.73}** & \textbf{28.34}** & 18.74** & \textbf{19.53}** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the baselines and our methods.
$\text{CCL-M}_\text{max}$ is the CCL-M algorithm using the \emph{maximal HRLs-evaluated Competence}, and $\text{CCL-M}_\text{avg}$ is the CCL-M algorithm using the \emph{weighted average HRLs-evaluated Competence}.
Bold indicates the highest value.
"$**$" indicates significantly \citep{koehn-2004-statistical} better than MultiDDS-S with t-test $p < 0.01$.
}
\label{tab:results}
\end{table*}
\subsection{Dataset Setup}
Following \citet{wang-etal-2020-balancing}, we use the 58-languages-to-English TED talks parallel data \cite{qi-etal-2018-pre} to conduct experiments.
Two sets of language pairs with different levels of language diversity are selected: \emph{related} (language pairs with high similarity) and \emph{diverse} (language pairs with low similarity).
Both of them consist of 4 high resource languages (HRLs) and 4 low resource languages (LRLs).
For the \emph{related} language set, we select 4 HRLs (Turkish: "tur", Russian: "rus", Portuguese: "por", Czech: "ces") and their related LRLs (Azerbaijani: "aze", Belarusian: "bel", Galician: "glg", Slovak: "slk"). For the \emph{diverse} language set, we select 4 HRLs (Greek: "ell", Bulgarian: "bul", French: "fra", Korean: "kor") and 4 LRLs (Bosnian: "bos", Marathi: "mar", Hindi: "hin", Macedonian: "mkd") as \citep{wang-etal-2020-balancing}.
Please refer to Appendix for a more detailed description.
We test two kinds of multilingual machine translation scenarios for each set: 1) \emph{many-to-one} (M2O): translating 8 languages to English; 2) \emph{one-to-many} (O2M): translating English to 8 languages.
The data is preprocessed by SentencePiece\footnote{\url{https://github.com/google/sentencepiece}} \citep{kudo-richardson-2018-sentencepiece} with a vocabulary size of 8k for each language.
Moreover, we add a target language tag before the source and target sentences in O2M as \citep{johnson-etal-2017-googles}.
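A minimal sketch of this preprocessing step follows (the exact tag format is an assumption for illustration and may differ from the token used in our experiments):
\begin{verbatim}
def add_target_tag(src_sentence, tgt_sentence, tgt_lang):
    # Prepend a target-language tag to both sides of an O2M example.
    tag = f"<2{tgt_lang}>"   # e.g. "<2aze>" for English -> Azerbaijani
    return f"{tag} {src_sentence}", f"{tag} {tgt_sentence}"

print(add_target_tag("hello world", "salam dunya", "aze"))
\end{verbatim}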
\subsection{Implementation Details} \label{model}
\paragraph{Baseline.} We select three static heuristic strategies: uniform sampling, proportional sampling, and temperature-based sampling ($\tau = 5$), and the bitext models for the baseline.
In addition, we compare our approach with the previous state-of-the-art sampling method, MultiDDS-S \citep{wang-etal-2020-balancing}. All baseline methods use the same model and the same set of hyper-parameters as our approach.
\paragraph{Model.} We validate our approach upon the multilingual Transformer \citep{vaswani2017attention} implemented by fairseq\footnote{\url{https://github.com/pytorch/fairseq}} \citep{ott-etal-2019-fairseq}.
The number of layers is 6 and the number of attention heads is 4, with the embedding dimension $d_{\text{model}}$ of 512 and the feed-forward dimension $d_{\text{ff}}$ of 1024 as \citep{wang-etal-2020-balancing}.
For training stability, we adopt Pre-LN \citep{xiong2020layer} for the layer-norm \citep{ba2016layer} module.
For M2O tasks, we use a shared encoder with a vocabulary of 64k.
Similarly, for O2M tasks, we use a shared decoder with a vocabulary of 64k.
\paragraph{Training Setup.} We use the Adam optimizer \citep{kingma2014adam} with $\beta_1 = \text{0.9}$, $\beta_2 = \text{0.98}$ to optimize the model.
Further, the same learning rate schedule as \citet{vaswani2017attention} is used, i.e., linearly increase the learning rate for 4000 steps to 2e-4 and decay proportionally to the inverse square root of the step number.
We accumulate the batch size to 9,600 and adopt half-precision training implemented by apex\footnote{\url{https://github.com/NVIDIA/apex}} for faster convergence \citep{ott-etal-2018-scaling}.
For regularization, we also use a dropout \citep{srivastava2014dropout} $p = \text{0.3}$ and a label smoothing \citep{szegedy2016rethinking} $\epsilon_{ls} = \text{0.1}$.
As for our approach, we sample 256 candidates from each language's development corpus every 100 steps to calculate the \emph{Self-evaluated Competence} $c$ for each language and the \emph{HRLs-evaluated Competence} $\hat{c}$ for each LRL.
\paragraph{Evaluation.} \label{sec:eval}
In practice, we perform a grid search for the best threshold $t$ in \{0.5, 0.6, 0.7, 0.8, 0.9, 1.0\}, and select the checkpoints with the lowest weighted loss\footnote{This loss is calculated by averaging the loss of each sample in the development corpora of all languages, which is equivalent to taking the proportionally weighted average of the loss for each language.} on the development sets to conduct the evaluation.
The corresponding early stopping patience is set to 10.
For target sentence generation, we set the beam size to 5 and the length penalty to 1.0.
Following \citet{wang-etal-2020-balancing}, we use SacreBLEU \citep{post-2018-call} to evaluate model performance.
In the end, we compare our results with MultiDDS-S using paired bootstrap resampling \citep{koehn-2004-statistical} for significance testing.
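A minimal sketch of this significance test follows (based on the general recipe of \citet{koehn-2004-statistical}; the use of the sacrebleu Python API and the number of resamples are assumptions for illustration):
\begin{verbatim}
import random
import sacrebleu

def paired_bootstrap(sys_a, sys_b, refs, n_resamples=1000, seed=0):
    # Fraction of resamples in which system A does not beat system B,
    # an empirical p-value for the hypothesis "A is better than B".
    rng = random.Random(seed)
    indices = list(range(len(refs)))
    losses = 0
    for _ in range(n_resamples):
        sample = rng.choices(indices, k=len(indices))
        bleu_a = sacrebleu.corpus_bleu([sys_a[i] for i in sample],
                                       [[refs[i] for i in sample]]).score
        bleu_b = sacrebleu.corpus_bleu([sys_b[i] for i in sample],
                                       [[refs[i] for i in sample]]).score
        losses += bleu_a <= bleu_b
    return losses / n_resamples
\end{verbatim}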
\subsection{Results}
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Related M2O}} & \multicolumn{2}{c}{\textbf{Diverse M2O}} \\
& \textbf{LRLs} & \textbf{HRLs} & \textbf{LRLs} & \textbf{HRLs} \\
\midrule
Bi. & 10.45 & \textbf{30.29} & 11.18 & \textbf{33.58} \\
MultiDDS-S & 22.51 & 28.54 & 22.72 & 31.29 \\
\midrule
$\text{CCL-M}_\text{max}$ & 23.14* & 30.04** & 23.31* & 33.26** \\
$\text{CCL-M}_\text{avg}$ & \textbf{23.30}* & 30.15** & \textbf{23.55}* & 33.13** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the HRLs and the LRLs for the best baselines and our methods in M2O tasks. Bitext models (``Bi.'' for short) and MultiDDS-S are selected from the baselines since ``Bi.'' performs better on the HRLs and MultiDDS-S performs better on the LRLs. Bold indicates the highest value.
"$*$" and "$**$" indicate significantly better than MultiDDS-S with t-test $p < 0.05$ and $p < 0.01$, respectively.
}
\label{tab:m2o hrl and lrl}
\end{table}
\paragraph{Main Results.} \label{main}
The main results are listed in Table \ref{tab:results}.
As we can see, both of our methods significantly outperform the baselines and MultiDDS-S, with average BLEU improvements of over +1.07 and +1.13, respectively, indicating the superiority of our approach.
Additionally, the $\text{CCL-M}_\text{avg}$ is slightly better than the $\text{CCL-M}_\text{max}$ in more cases.
This is because the $\text{CCL-M}_\text{avg}$ can get more information provided by the HRLs, and can more accurately estimate when to add an LRL into the training.
Moreover, we find that O2M tasks are much more complicated than M2O tasks, since a decoder shared by multiple languages might generate tokens in the wrong language.
Consequently, the BLEU scores of O2M tasks are lower than those of M2O tasks by a large margin.
\paragraph{Results on HRLs and LRLs in M2O.}
We further study the performance of our approach on LRLs and the HRLs in M2O tasks and list the results in Table \ref{tab:m2o hrl and lrl}.
As is widely known, the bitext models perform poorly on LRLs while performing well on HRLs.
We also find that our method performs much better than MultiDDS-S on both LRLs and HRLs.
Although our method does not fully match the performance of the bitext models on HRLs, the gap is much smaller than that between MultiDDS-S and the bitext models.
All of the above demonstrates the importance of balancing the learning competencies of different languages.
\paragraph{Results on HRLs and LRLs in O2M.}
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Related O2M}} & \multicolumn{2}{c}{\textbf{Diverse O2M}} \\
& \textbf{LRLs} & \textbf{HRLs} & \textbf{LRLs} & \textbf{HRLs} \\
\midrule
Bi. & 8.25 & \textbf{23.22} & 7.82 & \textbf{27.83} \\
MultiDDS-S & 15.31 & 19.34 & 13.98 & 22.52 \\
\midrule
$\text{CCL-M}_\text{max}$ & \textbf{16.54}** & 21.24** & \textbf{14.36}* & 24.71** \\
$\text{CCL-M}_\text{avg}$ & 16.33** & 21.14** & 13.82 & 25.42** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the HRLs and the LRLs for the best baselines and our methods in O2M tasks.
Bitext models (``Bi.'' for short) and MultiDDS-S are selected from the baselines since ``Bi.'' performs better on the HRLs and MultiDDS-S performs better on the LRLs. Bold indicates the highest value.
"$*$" and "$**$" indicate significantly better than MultiDDS-S with t-test $p < 0.05$ and $p < 0.01$, respectively.
}
\label{tab:o2m hrl and lrl}
\end{table}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{threshold.png}
\caption{Weighted losses on development sets and average BLEU scores (\%) on test sets for different thresholds (the abscissa) in four scenarios. The blue line represents $\text{CCL-M}_\text{max}$, the green line represents $\text{CCL-M}_\text{avg}$.
The yellow dotted line represents MultiDDS-S \citep{wang-etal-2020-balancing}.}
\label{grid}
\end{figure*}
As shown in Table \ref{tab:o2m hrl and lrl}, our approach also performs well on the more difficult scenario, i.e., the O2M.
Notably, our approach almost doubles the performance of the bitext models on the LRLs.
Meanwhile, there is a decay of roughly 2 and 3 BLEU on the HRLs in the \emph{related} and \emph{diverse} language sets, respectively.
Compared to MultiDDS-S, our approach is significantly better on both the LRLs and the HRLs.
This again proves the importance of balancing the competencies of different languages.
Additionally, the performance on the HRLs in O2M tasks shows a larger drop from the bitext models than that in M2O tasks.
This is because the decoder shares a 64k vocabulary across all languages in O2M tasks, while each language has only an 8k vocabulary.
Thus, it is easier for the model to output misleading tokens that do not belong to the target language during inference.
\section{Analysis}
\subsection{Effects of Different Threshold $t$}
\label{threshold}
We firstly conduct a grid search for the best \emph{HRLs-evaluated Competence} threshold $t$.
As we can see from Figure \ref{grid}, the more the HRLs are trained (i.e., the larger the threshold $t$), the better the model's performance in M2O tasks.
This phenomenon again suggests that M2O tasks are easier than O2M tasks.
The curriculum learning framework performs better in the \emph{related} set than that in the \emph{diverse} set in M2O tasks, because languages in the \emph{related} set are more similar.
Still, our method is better than MultiDDS-S, as shown in Figure \ref{grid}.
This again demonstrates the positive effect of our curriculum learning framework.
Experimental results also reveal that the optimal threshold $t$ for O2M tasks may not be 1 because more training on HRLs would not produce optimal overall performance.
Furthermore, the optimal threshold for the \emph{diverse} language set is lower than that for the \emph{related} language set as the task in the \emph{diverse} language set is more complicated.
\subsection{Effects of Different Sampling Methods}
\begin{table}[t]
\resizebox{\columnwidth}{!} {
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{M2O}} & \multicolumn{2}{c}{\textbf{O2M}} \\
& \textbf{Related} & \textbf{Diverse} & \textbf{Related} & \textbf{Diverse} \\
\midrule
$\text{CCL-M}_\text{avg}$ & 26.73 & 28.34 & \textbf{18.74} & \textbf{19.53} \\
\ \ \ $+$ Uni. & 24.59 & 27.13 & 18.29 & 18.21 \\
\ \ \ $+$ Temp. & 25.28 & 27.50 & 18.65 & 19.28 \\
\ \ \ $+$ Prop. & \textbf{27.21} & \textbf{28.72} & 18.20 & 18.80 \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets by the $\text{CCL-M}_\text{avg}$ algorithm using our dynamic sampling method and three static sampling methods. "Uni." refers to the uniform sampling, "Temp." refers to the temperature-based sampling ($\tau = 5$), and "Prop." refers to the proportional sampling. Bold indicates the highest value.
}
\label{tab:sample}
\end{table}
We also analyze the effects of different sampling methods.
Substituting our competence-aware dynamic sampling method in the $\text{CCL-M}_\text{avg}$ with three static sampling methods, we get the results in Table \ref{tab:sample}.
Consistently, our method performs best among the sampling methods in O2M tasks, which shows the superiority of sampling by language-specific competence.
Surprisingly, we find that proportional sampling surpasses our proposed dynamic method in M2O tasks.
This also indicates that more training on HRLs has a positive effect in M2O tasks, since proportional sampling would train more on the HRLs than the dynamic sampling we proposed.
In addition, all three static sampling methods outperform their respective baselines in Table \ref{tab:results}. Some of them are even better than the previous state-of-the-art sampling method, i.e., MultiDDS-S.
This shows that our curriculum learning approach has strong generalizability.
\section{Related Work}
Curriculum learning was first proposed by \citet{bengio2009curriculum} with the idea of learning samples from easy to hard to get a better optimized model.
As a general method for model improvement, curriculum learning has been widely used in a variety of machine learning fields \citep{gong2016multi, kocmi-bojar-2017-curriculum, hacohen2019power, platanios-etal-2019-competence, narvekar2020curriculum}.
There are also some previous curriculum learning researches for machine translation.
For example, \citet{kocmi-bojar-2017-curriculum} divide the training corpus into smaller buckets using features such as sentence length or word frequency and then train on the buckets from easy to hard according to the predefined difficulty.
\citet{platanios-etal-2019-competence} propose competence-based curriculum learning for machine translation, which treats the model competence as a variable in training and samples the training corpus in line with the competence.
In detail, they assume that competence is positively related to the number of training steps, and use linear or square-root functions in their experiments.
We borrow the concept of competence and redefine it in this paper for the multilingual context.
Further, we define \emph{Self-evaluated Competence} and \emph{HRLs-evaluated Competence} as the competence of each language pair to capture the model's multilingual competence more accurately.
\section{Conclusion}
In this paper, we focus on balancing the learning competencies of different languages in multilingual machine translation and propose a competence-based curriculum learning framework for this task.
The experimental results show that our approach brings significant improvements over baselines and the previous state-of-the-art balancing sampling method, MultiDDS-S.
Furthermore, the ablation study on sampling methods verifies the strong generalizability of our curriculum learning framework.
\section*{Acknowledgements}
We would like to thank anonymous reviewers for their suggestions and comments. This work was supported by the National Key Research and Development Program of China (No. 2020YFB2103402).
\section{Conclusion $\&$ Future Work}
\vspace{-0.3cm}
In this work, we propose the novel co-motion pattern, a second-order local motion descriptor, to detect whether a video is deep-faked. Our method is fully interpretable and highly robust to slight variations such as video compression and noise. We have achieved superior performance on the latest datasets under both classification and anomaly detection settings, and have comprehensively evaluated various characteristics of our method, including robustness and generalizability. In the future, an interesting direction is to investigate whether a more accurate motion estimation can be achieved, as well as how temporal information can be integrated within our method.
\clearpage
\bibliographystyle{splncs04}
\section{Experiments}
\label{sect:Exp}
In this section, extensive experiments are conducted to empirically demonstrate the feasibility of our co-motion pattern, coupled with the advantages over other methods. We first describe the experiment protocol, followed by the choice of hyperparameters. The quantitative performance of our method evaluated on different datasets is reported and analyzed in Sect.~\ref{sec:quantitative}. Subsequently, we interpret the composition of the co-motion pattern, showing how it can be used for determining the genuineness of any given sequence or even individual estimated motion set. Finally, we demonstrate the transferability and robustness of our method under different scenarios.
\subsubsection{Dataset}
We evaluate our method on FaceForensics++~\cite{FaceForensics} dataset which consists of four sub-databases that produce face forgery via different methods, i.e. Deepfake~\cite{deepfake}, FaceSwap~\cite{faceswap}, Face2Face~\cite{F2F} and NeuralTexture~\cite{NeuralTexture}. In addition, we utilize the real set from~\cite{Google_dataset} to demonstrate the similarity of co-motion patterns from real videos.
Since each sub-database contains 1,000 videos, we form 2,000 co-motion patterns, each composed of $N$ randomly picked $\rho$ matrices, for training and testing respectively.
We use c23 and c40 to indicate the quality of datasets, which are compressed by H.264~\cite{H264} with 23 and 40 as constant rate quantization parameters.
Unless otherwise stated, all reported results are achieved on c23.
The validation set and testing set are split before any experiments to ensure that no overlap interferes with the results.
\subsubsection{Implementation}
In this section, we specify the hyperparameters and other detailed settings needed to reproduce our method. The local motion estimation procedure is accomplished by integrating \cite{opticalflow} as the estimator and \cite{Landmark} as the landmark detector, both with default parameter settings as reported in the original papers. For the facial landmarks, we only keep the last 51 landmarks out of 68 in total, as the first 17 denote the face boundary, which is usually not manipulated. During the calculation of co-motion, we constrain $K$ to be at most 8, as there are only 8 facial components, thus avoiding unnecessary computation.
Since a certain portion of frames does not contain sufficient motion, we only preserve co-motion patterns with $p\%$ of motion features having greater magnitude than the remaining ones, i.e., $p = 0.5$ with magnitude $\geq 0.85$, where the number is acquired by randomly sampling a set of 100 videos. An AdaBoost~\cite{AdaBoost} classifier is employed for all supervised classification tasks.
For Gaussian smoothing, we set $\hat{k} = 3$ for all experiments.
\subsection{Quantitative Results}
\label{sec:quantitative}
\begin{table}[t!]
\caption{Accuracy of our method on all four forgery databases, with each treated as a binary classification task against the real videos. Performance of \cite{OpenWorld} is estimated from figures in the paper.
}
\begin{center}
\begin{tabular}{l|c|c|c|c|c}
\hline
Method/Dataset & Deepfakes & FaceSwap & Face2Face & NeuralTexture & Combined \\ \hline
Xception~\cite{FaceForensics} & 93.46\% & 92.72\% & 89.80\% & N/A & \textbf{95.73\%} \\
R-CNN~\cite{RCNN} & 96.90\% & 96.30\% & \textbf{94.35\%} & N/A & N/A \\
Optical Flow + CNN~\cite{OFCNN} & N/A & N/A & 81.61\% & N/A & N/A \\
FacenetLSTM~\cite{OpenWorld} & 89\% & 90\% & 87\% & N/A & N/A \\ \hline
$N$ = 1 (Ours) & 63.65\% & 61.90\% & 56.50\% & 56.65\% & 57.05\% \\
$N$ = 10 (Ours) & 82.80\% & 81.95\% & 72.30\% & 68.50\% & 71.30\% \\
$N$ = 35 (Ours) & 95.95\% & 93.60\% & 85.35\% & 83.00\% & 88.25\% \\
$N$ = 70 (Ours) & \textbf{99.10\%} & \textbf{98.30\%} & 93.25\% & \textbf{90.45\%} & 94.55\% \\ \hline
\end{tabular}
\end{center}
\end{table}
In this section, we demonstrate the quantitative results of our method under different settings. First, we show that the co-motion pattern can adequately separate forged and real videos in classification tasks, as shown in Tab.~1. Compared with other state-of-the-art forensic methods in terms of classification accuracy, we achieve competitive performance and outperform them by a large margin on Deepfakes~\cite{deepfake} and FaceSwap~\cite{faceswap}, with $99.10\%$ and $98.30\%$ respectively. While the researchers in \cite{OFCNN} similarly attempted to establish a forensic pipeline on top of motion features, we outperform their method by approximately 12$\%$. It is noteworthy that \cite{RCNN,OpenWorld,FaceForensics} all exploit deep features that are learned in an end-to-end manner and consequently cannot be properly explained. By contrast, as interpretability is one of the principal factors in media forensics, our attention lies on proposing a method that can be justified, and we make no deliberate effort to outperform deep learning based methods.
Equally importantly, as forgery methods are various and targeting each one is expensive, we demonstrate that the proposed co-motion pattern can also be employed for anomaly detection tasks, where only the behavior of real videos needs to be modeled, and forged videos can be separated if an appropriate threshold is selected. As presented in Fig.~\ref{fig:ROCs}, we show receiver operating characteristic (ROC) curves on each forgery database with increasing $N$. The real co-motion template is constructed from 3,000 randomly selected $\rho$ matrices, against which each co-motion pattern (real or fake) is compared during evaluation. In general, our method can be used for authenticating videos even without supervision. In the next section, we show that the co-motion pattern is also robust to random noise and data compression.
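A minimal sketch of this unsupervised scoring follows (the averaged template and the Frobenius-norm distance are assumptions for illustration; any distance to the real template could serve):
\begin{verbatim}
import numpy as np

def anomaly_score(co_motion, real_rho_matrices):
    # co_motion: stack of N rho matrices from the video under test
    # real_rho_matrices: rho matrices randomly selected from real videos
    template = real_rho_matrices.mean(axis=0)       # averaged real pattern
    return float(np.mean([np.linalg.norm(rho - template, ord="fro")
                          for rho in co_motion]))

# Videos whose score exceeds a chosen threshold are flagged as forged.
\end{verbatim}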
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/DF.jpg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/FS.jpg}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/F2F.jpg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/NT.jpg}
\end{subfigure}
\caption{Anomaly detection performance of our co-motion patterns. }
\label{fig:ROCs}
\end{figure}
\vspace{-0.3cm}
\subsection{Robustness Analysis}
\label{sec:robustness}
In this section, we demonstrate the robustness of our proposed method against noise and data compression, as well as the generalizability of co-motion patterns. Experiments on whether the compression rate of the video and added noise affect the effectiveness of co-motion patterns are conducted, and the results are shown in Tab.~2. Empirically, co-motion patterns demonstrate great robustness against heavy compression (c40) and random noise, i.e., $N(\mu,\sigma^2)$ with $\mu = 0$ and $\sigma = 1$. Such results verify that our proposed co-motion patterns, which exploit high-level temporal information, are much less sensitive to pixel-level variation, whereas the statistics-based methods reviewed in Sect.~2.2 do not possess this property.
\vspace{-0.7cm}
\begin{table}
\caption{Robustness experiment for demonstrating that co-motion can maintain its characteristics under different scenarios. All experiments are conducted on Deepfake~\cite{deepfake} with $N = 35$. Classification accuracy and area under curve (AUC) are reported respectively. }
\begin{center}
\begin{tabular}{l|c|c|c|c}
\hline
Setting / Dataset & Original &c23&c40& c23+noise \\ \hline
Binary classification & 97.80\% & 95.95\% & 91.60\% & 91.95\% \\ \hline
Anomaly detection & 98.57 & 96.14 & 93.76 & 92.60 \\ \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.5cm}
In addition to demonstrating the robustness, we also investigate whether the modeled co-motion patterns are generalizable, as recorded in Tab.~3. It turns out that co-motion patterns constructed on relatively high-quality forgery databases such as NeuralTextures~\cite{NeuralTexture} and Face2Face~\cite{F2F} can easily be generalized for classifying other low-quality databases, while the opposite results in inferior accuracy. This phenomenon is caused by the fact that videos forged by NeuralTextures are generally more consistent, so the inconsistency learned is more narrow and specific, while the types of inconsistency vary greatly in low-quality databases and can be hard to model.
\vspace{-0.5cm}
\begin{table}
\caption{Experiments for demonstrating generalizability of co-motion patterns. Same experiment setting was employed as in Tab. 1. }
\begin{center}
\begin{tabular}{l|c|c|c|c}
\hline Test on / Train on & Deepfakes & FaceSwap & Face2Face & NeuralTexture \\ \hline
Deepfakes & N/A & 92.15\% & 93.45\% & 95.85\% \\
FaceSwap & 84.25\% & N/A & 76.75\% & 84.95\% \\
Face2Face & 70.30\% & 64.85\% & N/A & 81.65\% \\
NeuralTexture & 76.20\% & 65.15\% & 77.85\% & N/A \\ \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-1cm}
\subsection{Abnormality Reasoning}
\label{sec:reasoning}
In this section, we explicitly interpret the implication of each co-motion pattern for an intuitive understanding. A co-motion example of real videos can be found in Fig.~6. As illustrated, the local motion at 51 facial landmarks is estimated as features, where the order of landmarks is deliberately kept identical in all places for better visual understanding. It is noteworthy that the order of landmarks does not affect the performance as long as it is aligned across experiments.
Consequently, each co-motion pattern describes the relationship of any pair of local motion features, where features from the same or a highly correlated facial component naturally have greater correlation. For instance, the two eyes generally move in the same direction, as the highlighted center area in Fig.~6 shows. Similarly, a weak yet stable correlation among the first 31 features is consistently observed in all real co-motion patterns, which conforms to the accordant movement of facial components in the upper and middle face area. We also observe a strong negative correlation, indicating opposite movements, between the upper lip and the lower lip. This is attributable to the dataset containing a large volume of videos of people talking, whereas in forged videos such negative correlation is undermined, usually because the videos are synthesized in a frame-by-frame manner and the temporal relationship is thus not well preserved. Moreover, the co-motion is normalized to the range $[0, 1]$ for visualization purposes, which weakens the visible difference between real and fake co-motion patterns; in the original scale the difference is more pronounced, as verified by the experiments.
\begin{figure}[t!]
\centering
\includegraphics[width=0.55\textwidth, height=0.48\textwidth]{eccv2020kit/Interpret.png}
\caption{An example of interpreting co-motion patterns. }
\label{fig:interpret}
\end{figure}
For an explicit comparison, we also average 1,000 $\rho$ matrices from each source to illustrate the distinction and, specifically, which motion patterns were not well learned, as shown in Fig.~7. Evidently, co-motion patterns from forged videos fail to model the negative correlation between the upper lip and the lower lip. Moreover, in Deepfake and FaceSwap, the positive correlation between homogeneous components (e.g., eyes and eyebrows) is also diluted, whereas in reality it would be difficult to make them move in an uncorrelated way. We also construct co-motion patterns on another set of real videos~\cite{Google_dataset} to illustrate the commonality of co-motion patterns across all real videos. Additionally, we show that, visually, the structure of the co-motion pattern converges quickly, as illustrated in Fig.~8, which supports our choice of building a second-order pattern, as it is less sensitive to intra-instance variation.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/real_cooccurrence.jpg}
{{\small Real videos}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/deepfakes_cooccurrence.jpg}
{{\small Deepfakes}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/faceswap_cooccurrence.jpg}
{{\small FaceSwap}}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/actor_cooccurrence.jpg}
{{\small Real videos from \cite{Google_dataset}}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/face2face_cooccurrence.jpg}
{{\small Face2Face}}
\label{fig:mean and std of net44}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/neuraltextures_cooccurrence.jpg}
{{\small NeuralTexture}}
\end{subfigure}
\caption{Averaged co-motion pattern from different sources. Two real co-motion patterns (leftmost column) collectively present component-wise motion consistency while forged videos fail to maintain that property. }
\label{fig:Cooccurrences}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.68\textwidth, height=0.3\textwidth]{eccv2020kit/Convergence.png}
\caption{Co-motion pattern comparison on the same video (original and deep-faked based on the original one). As $N$ increases, both co-motion patterns gradually converge to the same structure. }
\label{fig:framework}
\end{figure}
\section{Introduction}
Media forensics, which refers to judging the authenticity of given images/videos, detecting potentially manipulated regions and reasoning about its decisions, plays an important role in real life in preventing media data from being edited and utilized for malicious purposes, e.g., spreading fake news~\cite{FakeNews,WorldLeader}. Unlike traditional forgery methods (e.g., copy-move and splicing), which can falsify the original content at low cost but are also easily observable, the development of deep generative models such as the generative adversarial net (GAN)~\cite{GAN} makes the boundary between realness and forgery more blurred than ever, as deep models are capable of learning the distribution of real-world data remarkably well. In this paper, among all the forensic-related tasks, we focus on exposing forged videos produced by face swapping and manipulation applications~\cite{FastFaceSwap,DVP,F2F,FSGAN,MakeAFace,NeuralTexture}. These methods, while initially designed for entertainment purposes, have gradually become uncontrollable, in particular when the faces of celebrities with great social impact, such as Obama~\cite{obama}, can be misused at no cost, leading to pernicious influence.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\textwidth, height=0.51\textwidth]{eccv2020kit/clear_comparison.png}
\caption{Example of motion analysis results by our method. \textbf{Landmarks} with the same color are considered having analogous motion patterns, which are consistent with facial structure in real videos but not in deep-faked videos. We compactly model such patterns and utilize them to determine the authenticity of given videos.}
\label{fig:clear_comparison}
\end{figure}
Traditional forensic methods focus on detecting specific traces inevitably left during editing (e.g., inconsistency in re-sampling~\cite{Resampling}, shadowing~\cite{shadow}, reflection~\cite{Reflection}, compression quality~\cite{CompressionQuality} and noise patterns~\cite{Noise}), and thus fail to tackle the indistinguishable DNN-generated images/videos produced by the powerful generative ability of existing deep models.
Therefore, the demand for forensic approaches explicitly targeting deep-faked videos is increasing.
Existing deep forensic models can be readily categorized into three branches: real-vs-forged binary classification-based methods~\cite{XRay,TwoStep,RCNN,MesoNet}, approaches based on detecting anomalous image statistics~\cite{ColorComponent,FaceArtifict,PRNU,Unmasking,AttributeGAN}, and high-level information driven methods~\cite{headpose,exposelandmark,blinking}.
However, regardless of the kind of method, their success heavily relies on a high-quality, uncompressed and well-labeled forensic dataset to facilitate learning. Once the given data are compressed or in low resolution, their performance is inevitably affected. More importantly, these end-to-end deep forensic methods are completely unexplainable: no explicit reason can be provided to justify on what basis a real or fake decision is made.
To overcome the aforementioned issues, in this paper we propose a video forensic method based on motion features that explicitly targets deep-faked videos. Our method aims to model the conjoint patterns of local motion features from real videos, and consequently spot the abnormality of forged videos by comparing the extracted motion pattern against the real ones. To do so, we first estimate motion features at keypoints that are commonly shared across deep-faked videos. In order to enhance the generalizability of the obtained motion features as well as eliminate noise introduced by inaccurate estimation results, we divide the motion features into various groups, which are further reformed into a correlation matrix as a more compact frame-wise representation. Then a sequence of correlation matrices is calculated from each video, with each weighted by the grouping performance, to form the co-motion pattern, which describes the local motion consistency and correlation of the whole video. In general, co-motion patterns collected from real videos obey the movement pattern of facial structures and are homogeneous with each other regardless of variation in video content, whereas they become less consistent across fake videos.
To sum up, our contributions are four-fold: (1) We propose the co-motion pattern, a descriptor of consecutive image pairs that can be used to effectively describe local motion consistency and correlation. (2) The proposed co-motion pattern is entirely explainable, robust to video compression/pixel noise, and generalizes well. (3) We conduct experiments under both classification and anomaly detection settings, showing that the co-motion pattern is able to accurately reveal the motion-consistency level of given videos. (4) We also evaluate our method on datasets with different quality and forgery methods, with the intention of demonstrating the robustness and transferability of our method.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth, height=0.45\textwidth]{eccv2020kit/framework.png}
\caption{The pipeline of our proposed co-motion pattern extraction method. As illustrated, we firstly estimate the motion of corresponding keypoints, which are then to be grouped for analysis. On top of that, we construct co-motion pattern as a compact representation to describe the relationship between motion features. }
\label{fig:framework}
\end{figure}
\section{Related Work}
\vspace{-0.25cm}
\subsection{Face Forgery by Media Manipulation}
\vspace{-0.15cm}
First of all, we review relevant human face forgery methods. Traditionally, methods such as copy-move and splicing, if employed for face swapping tasks, can hardly produce convincing results due to the inconsistency in image quality~\cite{Resampling,quantization,jpeg_ghosts}, lighting changes~\cite{lighting,complex_lighting} and noise patterns~\cite{Noise,estimate_noise} between the tampered face region and other regions. With the rapid development of deep generative models~\cite{GAN}, the quality of generated images has improved significantly. The success of ProGAN~\cite{pggan} makes visually determining the authenticity of generated images quite challenging if one focuses only on the face region. Furthermore, the artifacts remaining in boundary regions, whose corresponding distributions in training datasets are relatively dispersed, are also progressively eliminated by \cite{StyleGANV1,StyleGANV2,glow,BigGAN}. Although these methods have demonstrated appealing generative capability, they do not focus on a certain identity but generate faces from random input.
Currently, the capability of deep neural networks has also been exploited for human-related tasks such as face swapping~\cite{deepfake,faceswap,FastFaceSwap,F2F,NeuralTexture,FSNET,FSGAN,DeformAE}, facial expression manipulation~\cite{MakeAFace,F2F,x2face,NFE} and facial attribute editing~\cite{NFE,AttGAN,DA_Face_M,SMIT,MulGAN}, mainly for entertainment purposes at the initial stage (samples of deep-faked face data are shown in Fig.~\ref{fig:deepfake_samples}). However, since face swapping methods in particular have already been misused for commercial purposes, corresponding countermeasures should be studied and devised before they cause irreparable adverse influence.
\vspace{-15pt}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth, height=0.45\textwidth]{eccv2020kit/fake_samples.png}
\caption{Samples to illustrate what ``Deepfake'' is. Top left~\cite{StyleGANV2}: high fidelity generated faces. Top right~\cite{jim}: face swapping. Bottom left~\cite{MakeAFace}: face expression manipulation, original image on top and expression manipulated on bottom. Bottom right~\cite{MulGAN}: face attribute editing, original images on top and edited on bottom. }
\label{fig:deepfake_samples}
\end{figure}
\vspace{-15pt}
\subsection{Deep-faked Manipulation Detection}
While media forensics is a long-standing field, countermeasures against deep-faked images and videos are scarce. As mentioned earlier, existing methods can be categorized into three genres: utilizing a deep neural network~\cite{XRay,FaceForensics,RCNN,MesoNet,TwoStep,OFCNN,Incremental,DetectF2F,OpenWorld}, exploiting unnatural low-level statistics, and detecting the abnormality of high-level information. In the first category, the task is usually considered a binary classification problem, where a classifier is constructed to learn the boundary between original and manipulated data via hand-crafted or deep features. As one of the earliest works in this branch, \cite{MesoNet} employs an Inception network~\cite{Inception} with proper architectural improvements to directly classify each original or edited frame. Later, in order to consider the correlation between frames, \cite{RCNN} constructed a recurrent convolutional neural network that learns from temporal sequences. Due to the variety of video content and the characteristics of neural networks, a sufficiently large dataset is required. To overcome this problem, \cite{OFCNN} attempted to use optical flow as input to train a neural network. While high classification accuracy is achieved, the features learned directly by neural networks are yet to be fully comprehended, so the decision of whether the input data has been manipulated cannot be appropriately elucidated.
Regarding the second category, \cite{Unmasking,PRNU,AttributeGAN,CameraFingerprint} all utilize the observation that current deep generated images can barely reproduce the natural noise carried by untampered images, and hence use the noise pattern for authentication. In \cite{ColorComponent}, the subtle difference in color components between original and manipulated images is used for classification. While effective, these methods are also exceedingly susceptible to the quality of the dataset. Our method lies in the third category and is constructed upon high-level information~\cite{headpose,exposelandmark}, which is generally more explainable and robust to the minute pixel changes introduced by compression or noise. Furthermore, as the co-motion pattern is derived from second-order statistics, it is more robust than \cite{headpose,exposelandmark} to instance-wise variation.
\section{Methodology}
In this section, we elaborate on the details of our proposed video forensic method based on co-motion pattern extraction from videos; the overall pipeline of our method is illustrated in Fig.~2. Firstly, we obtain aligned local motion features describing the movement of specific keypoints in the input videos (Sect.~\ref{sect:LME}). To eliminate instance-wise deviation, we then design high-order patterns over the extracted local motion features. Subsequently, we demonstrate how to construct co-motion patterns that describe the motion consistency of each video, as well as their usage, in Sect.~\ref{sect:CMP}.
\subsection{Local Motion Estimation}
\label{sect:LME}
The foundation of constructing co-motion patterns is extracting local motion features. Since each co-motion pattern is composed of multiple independent correlation matrices (explained in Sect.~\ref{sect:CMP}), we first expound on how to obtain local motion features from two consecutive frames.
Denoting a pixel of image $I$ with coordinates $(x, y)$ at time $t$ as $I(x, y, t)$, according to the brightness constancy assumption we have~\cite{HS,opticalflow}:
\begin{equation}
I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t)
\end{equation}
where $\Delta x, \Delta y$ and $\Delta t$ denote the displacements on $\mathbb{R}^3$ respectively. $\Delta t$ is usually 1 to denote two consecutive frames. This leads to the optical flow constraint:
\begin{equation}
\frac{\partial I}{\partial x} \Delta x + \frac{\partial I}{\partial y} \Delta y + \frac{\partial I}{\partial t} = 0
\end{equation}
However, such a hard constraint can make the motion estimation sensitive to even slight changes in brightness; therefore, the gradient constancy assumption was proposed~\cite{gradient,opticalflow}:
\begin{equation}
\nabla I(x, y, t) = \nabla I(x + \Delta x, y + \Delta y, t + 1)
\end{equation}
where
\begin{equation}
\nabla = (\partial x, \partial y)^\intercal
\end{equation}
Based on above constraints, the objective function can be formulated as:
\begin{equation}
\underset{\Delta x, \Delta y}{\min} E_{total}(\Delta x, \Delta y) = E_{brightness} + \alpha E_{smoothness}
\end{equation}
where:
\begin{equation}
\begin{split}
E_{brightness} = \iint & \psi(I(x, y, t) - I(x + \Delta x, y + \Delta y, t + 1)) ~ + \\
& \psi(\nabla I(x, y, t) - \nabla I(x + \Delta x, y + \Delta y, t + 1)) dxdy
\end{split}
\end{equation}
$\alpha$ denotes a weighting parameter, $\psi$ denotes a concave cost function, and the $E_{smoothness}$ penalization term is introduced to avoid overly large motion displacements:
\begin{equation}
E_{smoothness} = \iint \psi(|\nabla x|^2 + |\nabla y|^2) dxdy
\end{equation}
In our approach, we utilize Liu's~\cite{celiu} dense optical flow to estimate motion over frame pairs. However, while the inter-frame movement is estimable, it cannot be used directly as a motion feature because the content of each video varies considerably, which makes comparing the estimated motion of different videos unreasonable~\cite{OFCNN}. Moreover, the estimated motion cannot be pixel-wise accurate due to the influence of noise and non-linear displacements.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth, height=0.38\textwidth]{eccv2020kit/LME.png}
\caption{Illustration of local motion estimation step.}
\label{fig:lme}
\end{figure}
To overcome the above problems, we propose to narrow the region of interest by locating facial landmarks for comparison. By employing an arbitrary facial landmark detector $f_{D}$, we are able to obtain a set of spatial coordinates $L$ as:
\begin{equation}
f_D(I) = L_I = \{l^i_I | l_I^i \in \mathbb{R}^2, 1 \leq i \leq n \}
\end{equation}
so that the local motion features $M_I$ can be denoted as:
\begin{equation}
M_I = \{m_I^i | m_I^i = I_{\Delta x, \Delta y} \oplus \mathcal{N}(l_I^i \pm \hat{k}), l_I^i \in L_I\}
\end{equation}
representing the Gaussian-weighted average of estimated motion map $I_{\Delta x, \Delta y}$ centered on $(l_i^x, l_i^y)$ with stride $\hat{k}$. The Gaussian smoothing is introduced to further mitigate the negative impact by inaccurate estimation result. By doing so, we align the motion feature extracted from each video for equitable comparison. An intuitive illustration of this step is presented in Fig.~\ref{fig:lme}.
Due to the lack of sufficient motion in some $I_{\Delta x, \Delta y}$, we discard those with trivial magnitude by thresholding with a hyperparameter, whose choice is discussed in Sect.~\ref{sect:Exp}.
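As a concrete illustration, the following Python sketch computes such landmark-centered motion features from a precomputed dense flow field. It assumes a flow array of shape $H \times W \times 2$ and an $n \times 2$ array of landmark coordinates from an off-the-shelf detector; the window radius, Gaussian width and magnitude threshold are illustrative choices.
\begin{verbatim}
import numpy as np

def gaussian_kernel(k, sigma=None):
    """(2k+1)x(2k+1) Gaussian weights, normalized to sum to 1."""
    sigma = sigma or k / 2.0
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return w / w.sum()

def local_motion_features(flow, landmarks, k=4, mag_thresh=0.1):
    """flow: H x W x 2 dense optical flow between two consecutive frames.
    landmarks: n x 2 array of (x, y) facial landmark coordinates.
    Returns an n x 2 array of Gaussian-weighted mean motions around each
    landmark; rows with trivial magnitude are zeroed out."""
    H, W, _ = flow.shape
    weights = gaussian_kernel(k)
    feats = np.zeros((len(landmarks), 2))
    for i, (x, y) in enumerate(np.round(landmarks).astype(int)):
        # clamp the window so it stays inside the image
        x0, x1 = max(x - k, 0), min(x + k + 1, W)
        y0, y1 = max(y - k, 0), min(y + k + 1, H)
        patch = flow[y0:y1, x0:x1]                         # (h, w, 2)
        wgt = weights[y0 - y + k:y1 - y + k, x0 - x + k:x1 - x + k]
        wgt = wgt / wgt.sum()
        feats[i] = (patch * wgt[..., None]).sum(axis=(0, 1))
    mags = np.linalg.norm(feats, axis=1)
    feats[mags < mag_thresh] = 0.0                         # drop trivial motion
    return feats
\end{verbatim}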
\subsection{Co-motion Patterns}
\label{sect:CMP}
Relying merely on the local motion features obtained above would require an extremely large-scale dataset to cover as many scenarios as possible, which is redundant and costly. Based on the observation that a human face is an articulated structure, the intra-component correlation can also depict the motion in an efficient manner. Inspired by the co-occurrence feature~\cite{Cooccurrence}, which has been frequently employed in texture analysis, we propose to further compute second-order statistics from the extracted local motion features.
\subsubsection{Grouping Intra-Correlated Motion Features} \hfill \break
\noindent In this step, we group analogous $m_I^i \in M_I$ to estimate the articulated facial structure from motion features, since motion features collected from the same facial component are more likely to share consistent movement. Meanwhile, negative correlation is also represented: motion features with opposite directions (e.g., upper lip and lower lip) are assigned to disjoint groups.
As $m_I^i \in \mathbb{R}^2$ denotes motion along two orthogonal directions, we construct the affinity matrix $A_I$ over $M_I$ such that:
\begin{equation}
A_I^{i, j} = m_I^i \cdot m_I^j
\end{equation}
We choose the inner product over other metrics such as cosine similarity and Euclidean distance since we wish both to emphasize correlation instead of difference and to lighten the impact of noise within $M_I$. Specifically, using the inner product ensures that two highly correlated motions that both possess a certain magnitude are highlighted, while noise with trivial magnitude has relatively less effect. The normalized spectral clustering~\cite{spectral,tutorial} is then performed, where we calculate the degree matrix $D$ such that:
\begin{equation}
D_I^{i, j} =
\begin{cases}
\sum^n_{j} A_I^{i, j} & \text{if $i = j$}\\
0 & \text{if $i \neq j$}\\
\end{cases}
\end{equation}
and the normalized Laplacian matrix $\mathcal{L}$ as:
\begin{equation}
\mathcal{L} = (D_I)^{-\frac{1}{2}}(D_I - A_I)(D_I)^{-\frac{1}{2}}
\end{equation}
In order to split $M_I$ into $K$ disjoint groups, the first $K$ eigenvectors of $\mathcal{L}$, denoted as $\textbf{V} = \{\nu_k | k \in [1, K]\}$, are extracted to form the matrix $F \in \mathbb{R}^{n \times K}$. After normalizing each row of $F$ by its L2 norm, K-Means clustering is used to separate $P = \{p_i | p_i = F^i \in \mathbb{R}^{K}, i \in [1, n]\}$ into $K$ clusters $C_1, \ldots, C_K$, where $C_k$ collects the indices $i$ of the embedded points $p_i$ assigned to cluster $k$. However, since $K$ is not directly available in our case, we demonstrate how to determine the optimal $K$ in the next step.
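The grouping step can be sketched in Python with NumPy and scikit-learn as follows. Clipping negative inner products to zero is our own assumption (how negative affinities are handled is not specified above), and the eigenvectors associated with the smallest eigenvalues of $\mathcal{L}$ are taken as the spectral embedding.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def group_motion_features(M, K):
    """Group the n local motion features (rows of the n x 2 array M) into K
    clusters via inner-product affinity and normalized spectral clustering."""
    A = M @ M.T                                  # A[i, j] = <m_i, m_j>
    A = np.maximum(A, 0)                         # assumption: clip negative affinities
    d = A.sum(axis=1)
    d[d == 0] = 1e-12                            # guard against isolated features
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ (np.diag(d) - A) @ D_inv_sqrt   # normalized Laplacian
    _, eigvecs = np.linalg.eigh(L_sym)           # eigenvalues in ascending order
    F = eigvecs[:, :K]                           # first K eigenvectors
    F = F / np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=K, n_init=10).fit_predict(F)
\end{verbatim}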
\subsubsection{Constructing Co-motion Patterns} \hfill \break
As previously stated, determining a proper $K$ can also assist in describing the motion pattern more accurately. A straightforward approach is to iterate through all possible $K$ such that the Calinski-Harabasz index~\cite{CH} is maximized:
\begin{equation}
\operatorname*{arg\,max}_{K \in [2, n]} ~ f_{CH}(\{C_k | k \in [1, K]\}, K)
\end{equation}
where
\begin{equation}
f_{CH}(\{C_k | k \in [1, K]\}, K) = \frac{tr(\sum^K_y |C_y| (C_y^{\mu} - M_I^{\mu})(C_y^{\mu} - M_I^{\mu})^\intercal)}{tr(\sum^K_y \sum_{p_i \in C_y} (p_i - C_y^{\mu})(p_i - C_y^{\mu})^\intercal)} \times \frac{n - K}{K - 1}
\end{equation}
where $C_y^{\mu}$ is the centroid of $C_y$, $M_I^{\mu}$ is the center of all local motion features, and $tr$ denotes the trace of the corresponding matrix. With these ingredients, the motion correlation matrix $\rho_{I_t, I_{t+1}}$ of two consecutive frames $I_t$ and $I_{t+1}$ can be calculated as:
\begin{equation}
\rho_{I_t, I_{t+1}}^{i, j} =
\begin{cases}
1 & \text{if $\exists\, C_k$ such that $m_i \in C_k$ and $m_j \in C_k$}\\
0 & \text{otherwise}\\
\end{cases}
\end{equation}
and consequently, the co-motion pattern of sequence $S = \{I_1, ..., I_T\}$ is calculated as the weighted average of all correlation matrices:
\begin{equation}
f_{CP}(S) = \sum^{T-1}_{t=1} k_{I_t, I_{t+1}} \times f_{CH}(\{C_k | k \in [1, K]\}, k_{I_t, I_{t+1}}) \times \rho_{I_t, I_{t+1}}
\end{equation}
where the weighting procedure also reduces the impact of noise: the greater the $f_{CH}(\{C_k | k \in [1, K]\}, K)$, the more consistent the motions naturally are; simultaneously, a co-motion pattern constructed on noisy estimated local motion is scattered more sparsely and should therefore be weighted as less important.
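Putting the pieces together, a hedged sketch of the per-sequence construction could look as follows. It reuses \texttt{group\_motion\_features} from the previous sketch, scores candidate $K$ with scikit-learn's Calinski-Harabasz implementation (computed on the raw motion features rather than the spectral embedding, a simplification), assumes the same number of landmarks in every frame pair, and normalizes the weighted sum by the total weight for convenience; the search range for $K$ is an illustrative choice.
\begin{verbatim}
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def correlation_matrix(labels):
    """rho[i, j] = 1 iff motion features i and j fall in the same cluster."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def co_motion_pattern(frame_pair_features, k_range=range(2, 10)):
    """frame_pair_features: list of n x 2 arrays, one per consecutive frame
    pair (same n for all pairs). Returns the CH-weighted average of the
    per-pair correlation matrices."""
    pattern, total_weight = None, 0.0
    for M in frame_pair_features:
        best_score, best_labels = -np.inf, None
        for K in k_range:
            if K >= len(M):
                break
            labels = group_motion_features(M, K)   # spectral grouping, see above
            score = calinski_harabasz_score(M, labels)
            if score > best_score:
                best_score, best_labels = score, labels
        if best_labels is None:
            continue                                # too few features in this pair
        rho = correlation_matrix(best_labels)
        pattern = best_score * rho if pattern is None else pattern + best_score * rho
        total_weight += best_score
    return pattern / total_weight
\end{verbatim}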
\subsubsection{Usage of Co-motion Patterns} \hfill \break
The co-motion pattern can be utilized as a statistical feature for comparison purposes. When used for supervised classification, each co-motion pattern is normalized by its L1 norm:
\begin{equation}
\dot f_{CP}(S) = \frac{f_{CP}(S)}{\sum |f_{CP}(S)|}
\end{equation}
and $\dot f_{CP}(S)$ can be used as features for arbitrary objectives.
In order to illustrate that our co-motion patterns can effectively distinguish all forgery types while being modeled on real videos only, we also conduct anomaly detection experiments where a co-motion pattern of real videos is first built as a template. Then, co-motion patterns from real and forged databases are all compared against the template, and naturalness is determined by a threshold on the distance.
The Jensen–Shannon divergence is employed as the distance measure between any two co-motion patterns:
\begin{equation}
d_{KL}(f_{CP}(S_1), f_{CP}(S_2)) = \sum_i \sum_{j=1}^{i-1} f_{CP}(S_1)^{i, j} \log\left(\frac{f_{CP}(S_1)^{i, j}}{f_{CP}(S_2)^{i, j}}\right)
\end{equation}
\begin{equation}
d_{JS}(f_{CP}(S_1), f_{CP}(S_2)) = \frac{1}{2} d_{KL}(f_{CP}(S_1), \overline{f_{CP}}_{S_1, S_2}) + \frac{1}{2} d_{KL}(f_{CP}(S_2), \overline{f_{CP}}_{S_1, S_2})
\end{equation}
where $\overline{f_{CP}}_{S_1, S_2} = \frac{f_{CP}(S_1) + f_{CP}(S_2)}{2}$ and $S_1, S_2$ denote two sequences.
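A minimal sketch of this distance computation, assuming the two co-motion patterns are symmetric non-negative matrices of equal size and restricting the sums to the strictly lower-triangular entries as in the KL expression above:
\begin{verbatim}
import numpy as np

def js_divergence(cp1, cp2, eps=1e-12):
    """Jensen-Shannon divergence between two (L1-normalized) co-motion
    patterns, computed over their lower-triangular entries."""
    idx = np.tril_indices_from(cp1, k=-1)
    p, q = cp1[idx] + eps, cp2[idx] + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
\end{verbatim}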
\section{Introduction}
The tremendous performance of deep learning models has led to their widespread application in practice. However, these models can be manipulated by introducing minor perturbations {\cite{szegedy2013intriguing, goodfellow2014explaining, wang2020you, wang2020adversarial, zhang2022local}}. This process is called an adversarial attack. In the case of person re-identification, for a given query input $x$, a target model $f$ and a gallery, the attack is defined as,
\begin{align}
&\lVert f(\mathbf{x}+\boldsymbol{\delta}) - f(\mathbf{x}_g)\rVert_2 > \lVert f(\mathbf{x}+\boldsymbol{\delta}) - f(\bar{\mathbf{x}}_g)\rVert_2 \;\;\;\textit{s.t.}\; \lVert \boldsymbol{\delta} \rVert_p \leq \epsilon, \nonumber\\
&\mathbf{x}_g \notin \textit{topk}(\mathbf{x}+\boldsymbol{\delta}), ID(\mathbf{x}) = ID(\mathbf{x}_g) \neq ID(\bar{\mathbf{x}}_g) \nonumber
\end{align}
where $\mathbf{x}_g$ and $\bar{\mathbf{x}}_g$ are gallery samples belonging to different identities and $\boldsymbol{\delta}$ is the adversarial perturbation with an $l_p$ norm bound of $\epsilon$. {\textit{topk}($\cdot$)} refers to the top $k$ retrieved images for the given argument.
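For illustration, the following sketch checks this retrieval-based success criterion for a single query; the function and variable names are hypothetical, and the feature extractor \texttt{f} is assumed to map image batches to embedding vectors.
\begin{verbatim}
import torch

def attack_succeeds(f, x, delta, gallery, gallery_ids, query_id, k=10):
    """Illustrative check: the attack succeeds if no gallery image sharing
    the query identity appears among the top-k retrievals for the
    perturbed query."""
    with torch.no_grad():
        q = f((x + delta).unsqueeze(0))           # 1 x d adversarial query feature
        g = f(gallery)                            # N x d gallery features
    dists = torch.cdist(q, g).squeeze(0)          # L2 distances to the gallery
    topk_idx = dists.topk(k, largest=False).indices
    return all(gallery_ids[int(i)] != query_id for i in topk_idx)
\end{verbatim}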
Adversarial attacks have been extensively investigated in the classification setting \cite{akhtar2021advances} and have also been studied in other domains \cite{li2021concealed, li2021simple, jia20203d} in recent times. However, {to the best of our knowledge}, there are very few works which study these attacks in the person re-identification domain. In the following we briefly discuss some classical attacks in the classification setting. Szegedy \etal~\cite{szegedy2013intriguing} proposed the first work on the generation of adversarial samples for deep neural networks using L-BFGS. Goodfellow \etal~\cite{goodfellow2014explaining} proposed an efficient adversarial sample generation method using the fast gradient sign method (FGSM). Kurakin \etal~\cite{kurakin2016adversarial} proposed an iterative FGSM method. Other prominent works include \cite{madry2017towards,carlini2017towards,papernot2016limitations,dong2018boosting,croce2020reliable,wang2021feature}.
In person re-id {\cite{zhou2019omni,chang2018multi,li2019cross,yang2021pixel}}, both white-box and black-box attacks have been proposed in \cite{yang2021learning, ding2021beyond, wang2020transferable, li2021qair}. These attacks use a labeled source dataset and show that the attacks are transferable under cross-dataset, cross-model, or both settings. However, transferability of attacks in the challenging cross-dataset and cross-model setting is still an issue. In this work, we propose to use a mask and meta-learning for better transferability of attacks. We also investigate adversarial attacks in a completely new setting where the source dataset does not have any labels and the target model structure or parameters are unknown.
\section{Related Works}
In \cite{9226484}, the authors propose white-box and black-box attacks. The black-box attack only assumes that the victim model is unknown, while the dataset is available. \cite{wang2019advpattern} introduces physically realizable attacks in the white-box setting by generating adversarial clothing patterns. \cite{li2021qair} proposes a query-based attack wherein the images obtained by querying the victim model are used to form triplets for a triplet loss. \cite{bouniot2020vulnerability} proposes a white-box attack using a self metric attack, wherein the positive sample is obtained by adding noise to the given input and negative samples are obtained from other images. In \cite{yang2021learning}, the authors propose a meta-learning framework using a labeled source and an extra association dataset. This method generalizes well in the cross-dataset scenario. In \cite{ding2021beyond}, Ding~\etal~proposed to use a list-wise attack objective function along with model-agnostic regularization for better transferability. A GAN-based framework is proposed in \cite{wang2020transferable}. Here the authors generate adversarial noise and a mask by training the network using a triplet loss.
In this work we use a GAN to generate adversarial samples. In order to achieve better transferability of the attack across models, we suppress the pixels that generate large gradients. Suppressing these gradients allows the network to focus on other pixels. In this way, the network can focus on pixels that are not explicitly salient with respect to the model used for the attack. We further use meta learning \cite{finn2017model}, which also allows incorporation of an additional dataset to boost transferability. We refer to this attack as Meta Generative Attack (MeGA). Our work is closest in spirit to \cite{yang2021learning, wang2020transferable}; however, the mask generation and the application of meta learning under a GAN framework are quite distinct from these works.
\iffalse
\textbf{Adversarial Defense} Countering adversarial attacks, the goal of adversarial defense is to achieve the accuracy comparable to that of untargeted model. The defense methods either use adversarial examples during training or modify the network itself. Adversarial training is often considered as a first line of defense \cite{szegedy2013intriguing, goodfellow2014explaining, moosavi2016deepfool} and also demonstrates the strongest defense. Among other class of defenses which modify the network are defensive distillation \cite{papernot2016distillation}, gradient regularization \cite{ross2018improving}, biologically inspired models \cite{nayebi2017biologically, krotov2018dense}, convex ReLU relaxation \cite{wong2018provable}, image enhancement \cite{mustafa2019image}, image restoration \cite{zhao2021removing}.
\fi
\section{Methodology}
In this work we address both white-box and black-box attacks. We require the attack to be transferable across models and datasets. If we obtain the attack sample using a given model $f$, the attack is inherently tied to $f$ \cite{wang2021feature}. To prevent the attack from over-learning, we apply a mask so that the attack can focus on regions that are not highly salient for discrimination. This way the network can focus on less salient but still discriminative regions, thereby increasing the generalizability of the attack to other models. On the other hand, meta learning has been used effectively in adversarial attacks \cite{yuan2021meta, yang2021learning, feng2021meta} to obtain better transferability across datasets. However, meta learning has not been explored together with generative learning for attacks in the case of person re-identification (PRID). We adapt the MAML meta learning framework \cite{finn2017model} in our proposed method. While existing black-box attack works assume the presence of a labeled source dataset, we additionally present a more challenging setting wherein no labels are available during the attack.
\begin{figure}
\centering
\includegraphics[width = .45\textwidth]{prid_images/Copy_of_arch.png}
\caption{Model architecture. Mask $\mathbf{M}$ is generated using model $f$ and is used to mask the input $\mathbf{x}$. GAN is trained using a meta learning framework with an adversarial triplet loss and GAN loss.}
\label{fig:architecture}
\end{figure}
Our proposed model is illustrated in Figure \ref{fig:architecture}. In the white-box setting,
the generator $\mathcal{G}$ is trained using the generator loss, the adversarial triplet loss and the meta learning loss, while the discriminator $\mathcal{D}$ is trained with the classical binary cross-entropy discriminator loss. The mask is obtained via a self-supervised triplet loss. The network learns to generate adversarial images. While the GAN loss itself focuses on generating realistic samples, the adversarial triplet loss guides the network to generate samples that are closer to negative samples and farther away from positive samples.
\subsection{GAN training}
Given a clean sample $\mathbf{x}$, we use the generator $\mathcal{G}$ to create the adversarial sample $\mathbf{x}_{adv}$. The overall GAN loss is given by $\mathcal{L}_{GAN} = E_{\mathbf{x}}\log \mathcal{D}(\mathbf{x}) + E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$.
Here $\Pi(\cdot)$ denotes the projection onto the $l_{\infty}$ ball of radius $\epsilon$ around $\mathbf{x}$, and $\mathbf{x}_{adv} = \Pi(\mathcal{G}(\mathbf{x}))$. In order to generate adversarial samples, a deep mis-ranking loss is used \cite{wang2020transferable},
\begin{align}
\mathcal{L}_{adv-trip}(\mathbf{x}_{adv}^{a}, \mathbf{x}_{adv}^{n}, \mathbf{x}_{adv}^{p}) &= \max(\lVert \mathbf{x}_{adv}^{a} - \mathbf{x}_{adv}^n\rVert_2 \label{eq:adv-triplet} \\ \nonumber
&- \lVert \mathbf{x}_{adv}^{a} - \mathbf{x}_{adv}^p\rVert_2 + m,0)
\end{align}
where $m$ is the margin. {$\mathbf{x}_{adv}^{a}$ is the adversarial sample obtained from the anchor sample $\mathbf{x}^{a}$. Similarly, $\mathbf{x}_{adv}^{p}$ and $\mathbf{x}_{adv}^{n}$ are the adversarial samples obtained from the respective positive and negative samples $\mathbf{x}^{p}$ and $\mathbf{x}^{n}$.} This loss pushes the adversarial anchor closer to negative samples and farther away from positive samples. Thus the network learns to generate convincing adversarial samples.
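A minimal PyTorch sketch of the projection $\Pi(\cdot)$ and of the mis-ranking loss in Eq.~\ref{eq:adv-triplet} is given below. The loss is written on embedding vectors, i.e., it assumes the three adversarial samples have already been passed through the re-id model $f$; pixel values in $[0,1]$ and $\epsilon = 16/255$ are assumptions for the projection.
\begin{verbatim}
import torch
import torch.nn.functional as F

def project_linf(x_adv, x, eps=16.0 / 255.0):
    """Project x_adv onto the l_inf ball of radius eps around the clean
    image x, then clip to the assumed valid pixel range [0, 1]."""
    delta = torch.clamp(x_adv - x, min=-eps, max=eps)
    return torch.clamp(x + delta, 0.0, 1.0)

def adv_triplet_loss(feat_a, feat_n, feat_p, margin=1.0):
    """Mis-ranking loss: pull the adversarial anchor towards negatives and
    push it away from positives (the reverse of a standard triplet loss)."""
    d_an = F.pairwise_distance(feat_a, feat_n, p=2)
    d_ap = F.pairwise_distance(feat_a, feat_p, p=2)
    return F.relu(d_an - d_ap + margin).mean()
\end{verbatim}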
\subsection{Mask Generation}
An attack obtained using the given model $f$ alone leads to poor generalization to other networks. In order to achieve better transferability, we first compute the gradients with respect to the self-supervised triplet loss $\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p)$, where $\mathbf{x}^p$ is obtained by augmentation of $\mathbf{x}$ and $\mathbf{x}^n$ is the sample in the batch which lies at a maximum Euclidean distance from $\mathbf{x}$. Here, the large gradients are primarily responsible for loss convergence. Since this way of achieving convergence is clearly coupled with $f$, we mask the large gradients. Thus, the convergence is not entirely dependent on the large gradients and focuses on other smaller ones which can also potentially possess a discriminative nature. Thus the overfitting can be reduced by using the mask. To obtain the mask, we compute,
\begin{equation}
\mathbf{grad}_{adv-triplet} = \nabla_{\mathbf{x}}\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p)
\label{eq:grad}
\end{equation}
Note that we use the real samples in Eq.~\ref{eq:grad}.
The mask is given by $\mathbf{M} = sigmoid(\lvert \mathbf{grad}_{adv-triplet} \rvert)$, where $\lvert \cdot \rvert$ denotes absolute value. We mask $\mathbf{x}$ before feeding as an input to the generator $\mathcal{G}$. The masked input is given as $\mathbf{x} = \mathbf{x}\odot (1-\mathbf{M})$, where $\odot$ denotes Hadamard product.
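A hedged PyTorch sketch of this mask computation follows; \texttt{model} stands for the surrogate re-id model $f$ returning embeddings, \texttt{x\_pos} is an augmented view of \texttt{x}, and \texttt{x\_neg} is the batch sample farthest from \texttt{x} in Euclidean distance, as described above.
\begin{verbatim}
import torch

def gradient_mask(model, x, x_pos, x_neg, margin=1.0):
    """Mask M = sigmoid(|grad|) of the self-supervised triplet loss w.r.t.
    the clean input x; returns the masked input x * (1 - M) and M."""
    x = x.clone().detach().requires_grad_(True)
    fa, fp, fn = model(x), model(x_pos), model(x_neg)
    d_an = torch.norm(fa - fn, p=2, dim=1)
    d_ap = torch.norm(fa - fp, p=2, dim=1)
    loss = torch.clamp(d_an - d_ap + margin, min=0).mean()
    grad, = torch.autograd.grad(loss, x)
    M = torch.sigmoid(grad.abs())           # continuous mask in (0.5, 1)
    return x.detach() * (1.0 - M), M
\end{verbatim}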
{Masking techniques have also been explored in \cite{parascandolo2020learning, shahtalebi2021sand}, where the idea is to learn the model such that it does not overfit to the training distribution. Our masking technique is motivated by the idea that an adversarial example should be transferable across different re-id models. Our technique is distinct and can be applied to an individual sample, whereas the masking technique in \cite{parascandolo2020learning, shahtalebi2021sand} seeks agreement among the gradients obtained from all the samples of a batch.
The technique in \cite{parascandolo2020learning, shahtalebi2021sand} also suffers from the drawback of hyperparameter tuning. Further, the masking technique of \cite{parascandolo2020learning} is boolean while ours is continuous.}
\subsection{Meta Learning}
Meta optimization allows learning from multiple datasets for different tasks while generalizing well on a given task. One of the popular meta learning approaches, MAML \cite{finn2017model}, applies two update steps. The first update happens in an inner loop with a meta-train set, while the second update happens in an outer loop with a meta-test set. In our case, we perform the inner loop update on the discriminator and generator parameters using the meta-train set, and the outer loop update is performed on the parameters of the generator using a meta-test set.
\begin{algorithm}[h]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Datasets $\mathcal{T}$ and $\mathcal{A}$, model $f$}
\Output{Generator network $\mathcal{G}$ parameters $\boldsymbol{\theta}_g$}
\BlankLine
\While{not converge}{
\For{samples in $\mathcal{T}$}{
\tcc*[h]{Obtain the mask}\\
$\mathbf{M}$ $\leftarrow$ $\sigma$($\lvert \nabla_{\mathbf{x}}{\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p) } \rvert$)\\
\tcc*[h]{Meta train update using $\mathcal{T}$}\\
$\boldsymbol{\theta}_d \leftarrow \argmax_{\boldsymbol{\theta}_d} E_{\mathbf{x}}\log \mathcal{D}(\mathbf{x}) + E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$ \\
$\boldsymbol{\theta}_g \leftarrow \argmin_{\boldsymbol{\theta}_g} \mathcal{L}_{\mathcal{G}}^{\mathcal{T}} + \lambda \mathcal{L}_{adv-trip}^{\mathcal{T}}(\mathbf{x}_{adv}^a,\mathbf{x}_{adv}^n,\mathbf{x}_{adv}^p)$\\
$\boldsymbol{\delta} = \mathbf{x} - \Pi(G(\mathbf{x}))$\\
\tcc*[h]{Meta test loss using $\mathcal{A}$}\\
Sample triplets from meta-test set $\mathcal{A}$ and compute $\mathcal{L} = \mathcal{L}_{adv-trip}^{\mathcal{A}}(\mathbf{x}^a - \boldsymbol{\delta},\mathbf{x}^n,\mathbf{x}^p)$\\
}
\tcc*[h]{Meta test update}\\
$\boldsymbol{\theta}_g \leftarrow \argmin_{\boldsymbol{\theta}_g} \lambda \mathcal{L}$\\
}
\caption{{Training for MeGA}}\label{algo_disjdecomp}
\end{algorithm}
More formally, given a network $\mathcal{D}$ parametrized by $\boldsymbol{\theta}_d$ and $\mathcal{G}$ parametrized by $\boldsymbol{\theta}_g$, we perform the meta-training phase to obtain the parameters $\boldsymbol{\theta}_d$ and $\boldsymbol{\theta}_g$. The update steps are given in Algorithm \ref{algo_disjdecomp}.
We also obtain the adversarial perturbation as, $\boldsymbol{\delta} = \mathbf{x} - \Pi(G(\mathbf{x}))$.
We then apply the meta-testing update using the additional meta-test dataset ${\mathcal{A}}$. In Algorithm \ref{algo_disjdecomp},
$\mathcal{L}_{\mathcal{G}}^{\mathcal{T}} = E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$. We distinguish the datasets using superscripts $\mathcal{T}$ for the meta-train set and $\mathcal{A}$ for the meta-test set. $\mathcal{L}_{adv-trip}^{\mathcal{A}}$ draws its samples $\mathbf{x}$ from $\mathcal{A}$. At the inference stage, we only use $\mathcal{G}$ to generate the adversarial sample.
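The following PyTorch-style sketch illustrates one outer iteration of Algorithm~\ref{algo_disjdecomp} under our reading of it. Optimizer handling, matching batch shapes between $\mathcal{T}$ and $\mathcal{A}$, and applying the meta-test update per batch (rather than once after the inner loop) are simplifications; \texttt{gradient\_mask}, \texttt{project\_linf} and \texttt{adv\_triplet\_loss} refer to the earlier sketches.
\begin{verbatim}
import torch

def mega_outer_iteration(G, D, f, opt_g, opt_d, meta_train_loader,
                         meta_test_iter, lam=0.01, eps_log=1e-8):
    """One outer iteration of the meta-trained GAN attack (illustrative)."""
    for x, x_pos, x_neg in meta_train_loader:               # triplets from T
        x_masked, _ = gradient_mask(f, x, x_pos, x_neg)     # mask from surrogate f
        x_adv = project_linf(G(x_masked), x)                 # Pi(G(x))

        # meta-train update of D (standard GAN discriminator loss)
        loss_d = -(torch.log(D(x) + eps_log).mean()
                   + torch.log(1 - D(x_adv.detach()) + eps_log).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # meta-train update of G: GAN term + adversarial mis-ranking term
        x_adv = project_linf(G(x_masked), x)
        loss_g = (torch.log(1 - D(x_adv) + eps_log).mean()
                  + lam * adv_triplet_loss(f(x_adv), f(x_neg), f(x_pos)))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        # meta-test update of G: the perturbation delta keeps the gradient
        # path back to G and is applied to a triplet drawn from A
        delta = x - project_linf(G(x_masked), x)
        xa, xn, xp = next(meta_test_iter)                    # same batch shape assumed
        loss_meta = lam * adv_triplet_loss(f(xa - delta), f(xn), f(xp))
        opt_g.zero_grad(); loss_meta.backward(); opt_g.step()
\end{verbatim}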
\subsection{Training in absence of labels}
A deep mis-ranking loss can be used \cite{wang2020transferable} when labels are available for $\mathcal{T}$. Here, we present the case where no labels are available. In the absence of labels, and inspired by the unsupervised contrastive loss \cite{wang2021understanding}, we generate a positive sample $\mathbf{x}_{adv}^p$ by applying augmentation to the given sample $\mathbf{x}_{adv}^a$. The negative sample $\mathbf{x}_{adv}^n$ is generated using a batch-hard negative sampling strategy, {that is, we consider all samples except the augmented version of $\mathbf{x}_{adv}^a$ as negative samples and choose the one which is closest to $\mathbf{x}_{adv}^a$}. We then use {Eq. \ref{eq:adv-triplet}} to obtain the adversarial triplet loss.
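A small sketch of this label-free triplet construction is given below; the distance space used for the batch-hard choice is not specified above, so computing it in pixel space is an assumption, and \texttt{augment} stands for any standard image augmentation.
\begin{verbatim}
import torch

def unlabeled_triplets(x, augment):
    """Build (anchor, positive, negative) triplets without labels: the
    positive is an augmented view of each sample, the negative is the
    closest other sample in the batch (batch-hard)."""
    x_pos = augment(x)
    flat = x.flatten(1)
    dist = torch.cdist(flat, flat)            # pairwise L2 distances (pixel space)
    dist.fill_diagonal_(float('inf'))         # exclude the sample itself
    idx_neg = dist.argmin(dim=1)
    return x, x_pos, x[idx_neg]
\end{verbatim}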
\section{Experimental Results}
\subsection{Implementation Details} We implemented the proposed method in the PyTorch framework. The GAN architecture is similar to that of the GAN used in \cite{xiao2018generating, isola2017image}. We use the models from Model Zoo \cite{modelzoo} - OSNet \cite{zhou2019omni}, MLFN \cite{chang2018multi}, HACNN \cite{li2018harmonious}, ResNet-50 and ResNet-50-FC512. We also use AlignedReID \cite{zhang2017alignedreid, AlignedReID}, LightMBN \cite{herzog2021lightweight}, and PCB \cite{sun2018beyond, PCB}.
We use an Adam optimizer with learning rate $10^{-5}$, $\beta_1 = 0.5$ and $\beta_2 = 0.999$, and train the model for 40 epochs. We set $m=1$, {$\lambda = 0.01$}, and $\epsilon = 16$. In order to stabilize GAN training, we apply label flipping with 5\% flipped labels. We first present the ablation for the mask and meta learning.
\subsection{Effect of mask $\mathbf{M}$}
We find that when we use the mask with Resnet50 and test on different models such as MLFN \cite{chang2018multi} and HACNN \cite{li2018harmonious}, there is a substantial gain in performance, as shown in Table \ref{tab:resnet50_mask}. In terms of R-1 accuracy, introducing the mask gives a boost of 42.10\% and 4.8\% for MLFN and HACNN respectively. This indicates that the mask provides better transferability. Further, when we evaluate on Resnet50 itself, there is only a minor change in performance, which could be because the mask is learnt using Resnet50 itself.
\begin{table}[H]
\caption{Trained on Market-1501 \cite{zheng2015scalable}. Setting Market-1501 $\rightarrow$ Market-1501. $l$ indicates Market-1501 labels are used for training. $\mathbf{M}$ indicates the incorporation of mask. 'Before' indicates accuracy on clean samples.}
\label{tab:resnet50_mask}
\centering
{
\begin{tabular}{c|c c | c c | c c }
\hline
Model &\multicolumn{2}{c|}{Resnet50} &\multicolumn{2}{c|}{MLFN} &\multicolumn{2}{c}{HACNN} \\
& mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
Before & 70.4& 87.9 & 74.3 &90.1 & 75.6& 90.9\\
$l$ & {0.66} & {0.41} & 3.95 &3.23 & 32.57& 42.01 \\
{$l+\text{AND}$}&{0.56} & {0.35} & 5.39 & 4.55 & 35.13 &44.20\\
{$l+\text{SAND}$}&\textbf{0.51} & \textbf{0.33} & 6.01 & 4.89 & 37.50 &45.11\\
$l+\mathbf{M}$ &0.69 & 0.50 &\textbf{2.80} & \textbf{1.87} & \textbf{31.73} & \textbf{39.99} \\
\hline
\end{tabular}
}
\end{table}
\subsection{Effect of meta learning}
We demonstrate the effect of meta learning in Table \ref{tab:resnet50_meta}. In the cross-dataset (Resnet50) as well as cross-dataset cross-model (MLFN) setting, we observe that the introduction of meta learning gives a significant performance boost. In terms of R-1 accuracy, there is a boost of 69.87\% and 69.29\% respectively for Resnet50 and MLFN. We further observe that Resnet50 does not transfer well towards HACNN. This could be due to two reasons. First, Resnet50 is a basic model compared to other, superior PRID models. Second, HACNN is built on Inception units \cite{szegedy2017inception}.
\begin{table}[H]
\caption{Trained on Market-1501 using MSMT-17 \cite{wei2018person} as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID \cite{zheng2017unlabeled}. $\mathcal{A}$ indicates incorporation of meta learning.}
\label{tab:resnet50_meta}
\centering
{
\begin{tabular}{c|c c | c c | c c }
\hline
{Model} &\multicolumn{2}{c|}{Resnet50} &\multicolumn{2}{c|}{MLFN} &\multicolumn{2}{c}{HACNN} \\
& mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
Before & 58.9 & 78.3 & 63.2& 81.1 & 63.2&80.1 \\
$l$ & 17.96 & 24.86 & 18.25& 24.10 & \textbf{42.75} &\textbf{58.48} \\
$l+\mathcal{A}$ &\textbf{5.80} & \textbf{7.49} & \textbf{6.15} & \textbf{7.4} & 43.12& 58.97\\
\hline
\end{tabular}
}
\end{table}
\subsection{Adversarial attack performance}
We first present the results for the cross-model attack in Table \ref{tab:aligned_source_market}. We use the AlignedReID model, Market-1501 \cite{zheng2015scalable} as training set and MSMT-17 \cite{wei2018person} as meta-test set. The results are reported for Market-1501 and DukeMTMC-reID \cite{zheng2017unlabeled}. In the case of Market-1501, it is clearly evident that the proposed method achieves strong transferability. We can see that incorporating the meta-test set reduces the mAP and R-1 results to less than half of those obtained when only labels are used. For instance, the mAP and R-1 of AlignedReID go down from 7.00\% and 6.38\% to 3.51\% and 2.82\% respectively. This is consistently observed for all three models. Further, the combined usage of mask and meta learning ($l+\mathbf{M}+\mathcal{A}$), denoted as MeGA, achieves the best results in the cross-model case of PCB and HACNN. The respective R-1 improvements are 10.00\% and 9.10\%. Thus our method is extremely effective in generating adversarial samples.
\begin{table}[H]
\caption{AlignedReID trained on Market-1501 with MSMT-17 as meta test set. M is Market-1501 and D is DukeMTMC-reID. MeGA denotes $l+\mathbf{M}+\mathcal{A}$.}
\label{tab:aligned_source_market}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{c|c| c c | c c |c c }
\hline
& {Model} &\multicolumn{2}{c|}{AlignedReID} &\multicolumn{2}{c|}{PCB} &\multicolumn{2}{c}{HACNN} \\
& & mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
M $\rightarrow$ M & Before & 77.56 & 91.18 & 78.54 & 92.87 & 75.6&90.9 \\
\cline{2-8}
&$l$ & 7.00 & 6.38 & 16.46 & 29.69 & 16.39 & 20.16\\
\cline{2-8}
&$l$ + $\mathbf{M}$ & 6.62& 5.93 & 15.96 & 28.94 & 16.01 & 19.47\\ \cline{2-8}
&$l+\mathcal{A}$ & \textbf{3.51} & \textbf{2.82} & 8.07 & 13.86 & 5.44& 5.28 \\
\cline{2-8}
&MeGA& 5.50 & 5.07 & \textbf{7.39} &\textbf{12.47} & \textbf{4.85} & \textbf{4.80} \\
\hline
M $\rightarrow$ D& $l$ & 16.04 & 21.14 & 13.35 & 15.66 & 15.94 & 21.85 \\
\cline{2-8}
&$l+\mathbf{M}$ & 16.23 & 21.72 & 13.70 & 15.97 & 16.43 & 22.17 \\
\cline{2-8}
&$l+\mathcal{A}$ & \textbf{4.69} & \textbf{5.70} & \textbf{11.10} & \textbf{12.88} & 5.40 & 6.55\\
\cline{2-8}
&MeGA & 7.70 & 9.47 & 11.81 & 14.04& \textbf{4.73} & \textbf{5.40} \\
\hline
\end{tabular}
}
\end{table}
In the case of Market-1501 to DukeMTMC-reID, we observe that simply applying meta learning ($l+\mathcal{A}$) generalizes very well. In the case of AlignedReID, the mAP and R-1 of 4.69\% and 5.70\% respectively are significantly lower compared to the results obtained via the $l$ or $l+\mathbf{M}$ settings. The combined setting of mask and meta learning yields better results for HACNN compared to AlignedReID and PCB. This may be because the learning of the mask is still tied to the training set and thus may result in overfitting.
\iffalse
\begin{table}[H]
\caption{AlignedReID trained on Market with MSMT-17 as meta test set. Results are reported for DukeMTMC-reID. Cross dataset and model setting. }
\label{tab:aligned_test_duke}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{c| c c c| c c c|c c c}
\hline
\hline
{Model} &\multicolumn{3}{c|}{AlignedReID} &\multicolumn{3}{c|}{PCB} &\multicolumn{3}{c}{HACNN} \\
& mAP &R-1 &R-10 & mAP &R-1 &R-10& mAP &R-1 &R-10 \\
\hline
$l+\mathbf{M}$ & 16.23 & 21.72 & 37.79 & 13.70 & 15.97 & 36.13 & 16.43 & 22.17 & 35.77 \\
\hline
$l+\mathcal{A}$ & 4.69 & 5.70 & 12.11 & 11.10 & 12.88& 29.30 & 5.40 & 6.55 & 14.13 \\
\hline
MeGA & 7.70 & 9.47 & 19.16 & 11.81 & 14.04 &31.23 & 4.73 & 5.40 & 11.57 \\
\hline
\end{tabular}
}
\end{table}
\fi
In Table \ref{tab:market-msmt_meta_duke} we discuss the results for the cross-dataset and cross-model case against more models. Here also we can see that both AlignedReID and PCB lead to strong attacks against other models on a different dataset.
In Table \ref{tab:aligned_msmt}, we present the results for MSMT-17. Here, the model is trained using AlignedReID and PCB on Market-1501, with DukeMTMC-reID as meta-test set. When trained and tested using AlignedReID, the R-1 accuracy drops from 67.6\% on clean samples to 17.69\%. On the other hand, when trained using PCB and tested on AlignedReID, the performance drops to 16.70\%. This shows that our attack is very effective on large-scale datasets such as MSMT-17.
\tabcolsep=4pt
\begin{table*}[tb]
\caption{AlignedReID and PCB trained on Market with MSMT-17 as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID.}
\label{tab:market-msmt_meta_duke}
\centering
{
\begin{tabular}{c| c c c |c c c | c c c | c c c | c c c| c c c}
\hline
{Model} &\multicolumn{3}{c|}{OSNet} & \multicolumn{3}{c|}{{LightMBN}} & \multicolumn{3}{c|}{ResNet50} & \multicolumn{3}{c|}{MLFN} & \multicolumn{3}{c|}{ResNet50FC512} & \multicolumn{3}{c}{HACNN} \\
& mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 \\
\hline
Before & 70.2& 87.0 & - & 73.4 & 87.9 & - & 58.9 & 78.3& - & 63.2& 81.1 &-& 64.0 & 81.0& -& 63.2 & 80.1& - \\\hline
AlignedReID & 15.31 & 22.30 & 35.00 & 16.24 & 24.13 &39.65 & 5.17 & 6.64 & 13.77 & 12.28& 16.38 & 29.39 &6.97 & 9.69&19.38 & 4.77 & 5.61 & 11.98\\
\hline
PCB & 12.27 & 14.45 & 27.49 & 12.88 & 15.70 & 28.54& 7.14 & 8.55 & 20.01 & 11.95 & 16.54 & 30.92 & 9.45 & 11.46 & 23.90 & 3.97 & 4.66 & 10.00 \\
\hline
\end{tabular}
}
\end{table*}
\begin{table}[H]
\caption{Trained on Market-1501 using DukeMTMC-reID as meta test set. Setting Market-1501 $\rightarrow$ MSMT-17.}
\label{tab:aligned_msmt}
\centering
{
\begin{tabular}{c| c c c }
\hline
{Model} &\multicolumn{3}{c}{AlignedReID} \\
& mAP &R-1 &R-10 \\
\hline
MeGA (AlignedReID)& 9.37 & 17.69 & 33.42 \\
\hline
MeGA (PCB) & 8.82 & 16.70 & 31.98\\
\hline
\end{tabular}
}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width = 0.8cm,height = .9cm, cfbox=red 1pt 1pt]{prid_images/original_img.png} \hspace{4mm}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_5.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_5_seq_11.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_7.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_10.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_20.png}\\
\includegraphics[width = 0.8cm,height = .9cm, cfbox=blue 1pt 1pt]{prid_images/market-mask.png} \hspace{4mm}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_5_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_5_query_11.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_7_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_10_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_20_query_4.jpeg}
\caption{{Left column: Red and blue box show the given image from Market-1501 and its mask ($1-M$) respectively.
Right column:} Attacked (top) and clean (bottom) images from MSMT-17}
\label{fig:subjective}
\end{figure}
\subsection{Comparison with SOTA models}
In Table \ref{tab:comparison_aligned_TCIAA} we present the comparison with TCIAA \cite{wang2020transferable}, UAP \cite{li2019universal} and Meta-attack \cite{yang2021learning}. We observe that our method outperforms TCIAA by a huge margin. We can also see that when the mis-ranking loss is naively applied in the case of TCIAA$^\dagger$ \cite{yang2021learning}, the model's performance degrades. Our attack has better performance compared to both TCIAA and Meta-attack.
\begin{table}[H]
\caption{AlignedReID trained on Market with MSMT-17 as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID. $^\dagger$ uses PersonX \cite{sun2019dissecting} as extra dataset. $^*$ uses PersonX for meta learning. }
\label{tab:comparison_aligned_TCIAA}
\centering
{
\begin{tabular}{c| c c c }
\hline
{Model} &\multicolumn{3}{c}{AlignedReID} \\
& mAP &R-1 &R-10 \\
\hline
Before & 67.81 &80.50 & 93.18 \\
\hline
TCIAA \cite{wang2020transferable}&14.2 & 17.7 & 32.6 \\
{MeGA$^*$ (Ours)} & {11.34} & {12.81} & {24.11} \\
MeGA (Ours) & \textbf{7.70} & \textbf{9.47} & \textbf{19.16} \\
\hline
& \multicolumn{3}{c}{PCB} \\
Before & 69.94 &84.47 & - \\
\hline
TCIAA \cite{wang2020transferable} & 31.2 & 45.4 & - \\
TCIAA$^\dagger$ \cite{wang2020transferable} & 38.0 & 51.4 & - \\
UAP \cite{li2019universal} & 29.0 & 41.9 & - \\
Meta-attack$^*$ ($\epsilon = 8$) \cite{yang2021learning} &26.9 & 39.9 & \\
\hline
{MeGA$^*$ ($\epsilon = 8$) (Ours)} & {22.91} & {31.70} & - \\
MeGA ($\epsilon = 8$) (Ours) & \textbf{18.01} & \textbf{21.85} & 44.29 \\
\hline
\end{tabular}
}
\end{table}
\iffalse
\begin{table}[H]
\caption{PCB-P4. Test on Duke. MSMT used for meta learning. $^*$ uses PersonX for meta learning.}
\label{tab:pcb_duke}
\centering
{
\begin{tabular}{c| c c c | }
\hline
\hline
\multirow{Model} &\multicolumn{3}{c|}{PCB} \\
& mAP &R-1 &R-10 \\
\hline
Before & 69.94 &84.47 & - \\
\hline
TCIAA \cite{wang2020transferable} & 31.2 & 45.4 & - \\
TCIAA$^*$ \cite{wang2020transferable} & 38.0 & 51.4 & - \\
UAP \cite{li2019universal} & 29.0 & 41.9 & - \\
Meta-attack$^*$ ($\epsilon = 8$) \cite{yang2021learning} &26.9 & 39.9 & \\
\hline
MeGA ($\epsilon = 8$) & 18.01 & 21.85 & 44.29 \\
\hline
\end{tabular}
}
\end{table}
\fi
\iffalse
\begin{table}[!htb]
\caption{Test on market. MSMT-17 used for meta learning. PCB Same dataset and same model}
\label{tab:meta_market}
\centering
{
\begin{tabular}{c| c c c | }
\hline
\hline
\multirow{Model} &\multicolumn{3}{c|}{PCB} \\
& mAP &R-1 &R-10 \\
\hline
Before & 78.54 & 92.87 & - \\
\hline
\hline
w/ label + mask & & & \\
\hline
w/ label + mask + meta & 4.26 & 6.20 & 13.12 \\
\hline
\end{tabular}
}
\end{table}
\begin{table}[H]
\caption{Test on market. MSMT-17 used for meta learning. Same dataset and cross model. Train on aligned and test on PCB and HACNN}
\label{tab:meta_market}
\centering
{
\begin{tabular}{c| c c c | }
\hline
{Model} &\multicolumn{3}{c|}{PCB} \\
& mAP &R-1 &R-10 \\
\hline
w/ label + mask & & & \\
\hline
w/ label + mask + meta + PCB& 7.39 & 12.47 & 24.04 \\
\hline
w/ label + mask + meta + HACNN & 4.85 & 4.80 & 11.69 \\
\end{tabular}
}
\end{table}
\fi
\subsection{Subjective Evaluation}
We show example images obtained by our algorithm in Figure \ref{fig:subjective} and the top-5 retrieved results in Figure \ref{fig:retrieved_results} for the OSNet model. We can see that in the case of clean samples the top-3 retrieved images match the query ID; however, none of the retrieved images match the query ID in the presence of our attack.
\begin{figure}[h]
\centering
\includegraphics[width = .9cm,height = 1.1cm, cfbox=blue 1pt 1pt]{retrieved_images/query_top000_name_0458_c1s6_032271_00.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top001_name_0458_c4s6_032891_03.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top002_name_0458_c5s3_081437_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top003_name_0458_c5s3_081637_05.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/clean_top004_name_0001_c1s2_037091_02.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/clean_top005_name_0001_c1s6_011741_02.jpg}\\
\includegraphics[width = .9cm,height = 1.1cm]{retrieved_images/emptimage.png}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top001_name_0431_c5s1_105373_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top002_name_0431_c2s1_104821_01.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top003_name_0431_c2s1_104746_02.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top004_name_0000_c3s1_081467_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top005_name_0431_c5s1_105323_03.jpg}
\caption{Query image marked with a blue border. Top 5 {retrieved} images from OSNet for Market-1501 (top). Green boxes are correct matches and red ones are incorrect. Retrieved images after attacking the query sample (bottom).}
\label{fig:retrieved_results}
\end{figure}
\subsection{Attack using unlabelled source}
In this section we discuss the attack when the source dataset $\mathcal{T}$ is unlabeled and neither the victim model nor the dataset used for training the victim model is available. This is a very challenging scenario as supervised models cannot be used for the attack. Towards this, we use unsupervised models trained on Market-1501 and MSMT-17 from \cite{ge2020self}. In Table \ref{tab:train_msmt_test_market}, we present results for training on MSMT-17 and testing on Market-1501. We observe that IBN R50 obtains a mAP and R-1 accuracy of 40.7\% and 52.34\% when both labels and the mask are not used. When the mask is incorporated, there is a substantial boost of 3.82\% in mAP and 4.81\% in R-1 accuracy in the case of OSNet. These gains are even higher for MLFN and HACNN.
In the case of Market-1501 to MSMT-17 in Table \ref{tab:market-msmt}, we see that the attack using only the mask performs reasonably well compared to the attacks using labels or both labels and the mask. Due to the comparatively small size of Market-1501, even the attacks using labels are not very efficient.
\begin{table}[H]
\caption{MSMT-17 $\rightarrow$ Market-1501. R50 denotes Resnet50.}
\label{tab:train_msmt_test_market}
\centering
{
\begin{tabular}{c| c c | c c| c c}
\hline
{Model} &\multicolumn{2}{c|}{OSNet} & \multicolumn{2}{c|}{MLFN} & \multicolumn{2}{c}{HACNN} \\
& mAP &R-1 & mAP &R-1 & mAP &R-1 \\
\hline
Before &82.6 & 94.2 & 74.3 & 90.1 & 75.6 & 90.9 \\
\hline
$l$ (R50) & 30.50 & 39.45 & 26.37 & 38.03 & 31.15 & 39.34\\
$l+\mathbf{M}$ (R50) &24.50 &33.07 & 21.76 & 32.18 & 18.81&23.66 \\
$\mathbf{M}$ (R50) & 36.5 &47.56 & 34.92& 52.61 &31.15 &39.34 \\
\hline
\hline
IBN R50 & 40.7 & 52.34 & 40.62 & 61.46 & 35.44 & 44.84 \\
\hline
$\mathbf{M}$ (IBN R50) & 36.88 & 47.53 & 35.01 & 52.79 & 30.98& 38.98 \\
\hline
\end{tabular}
}
\end{table}
\begin{table}[H]
\caption{ Market-1501 $\rightarrow$ MSMT-17.}
\label{tab:market-msmt}
\centering
{
\begin{tabular}{c| c c | c c| c c}
\hline
{Model} &\multicolumn{2}{c|}{OSNet} & \multicolumn{2}{c|}{MLFN} & \multicolumn{2}{c}{HACNN} \\
& mAP &R-1 & mAP &R-1 & mAP &R-1 \\
\hline
Before & 43.8 & 74.9 & 37.2 & 66.4 & 37.2 &64.7\\
$l$ (R50) & 31.78 & 60.43 & 25.17 & 49.33 & 28.9&54.91\\
$l+\mathbf{M}$ (R50) &29.04 &56.11 & 22.02 & 43.57 &28.26 &53.53 \\
\hline
$\mathbf{M}$ (R50) & 35.16 & 66.28 &29.16 & 56.65 &29.69& 57.81 \\
\hline
\end{tabular}
}
\end{table}
\section{Conclusion}
We present a generative adversarial attack method using mask and meta-learning techniques. The mask allows better transferability across different networks, whereas meta learning allows better generalizability. We present elaborate results under various settings. Our ablation also shows the importance of the mask and meta-learning. Extensive experiments on Market-1501, MSMT-17 and DukeMTMC-reID show the efficacy of the proposed method.
\bibliographystyle{IEEEtran}
\section{Introduction}
While the first traffic signals were controlled completely in open loop, various approaches have been taken to adjust the green light allocation based on the current traffic situation, e.g., SCOOT~\cite{robertson1991optimizing}, UTOPIA~\cite{mauro1990utopia} and SCATS~\cite{sims1980sydney}. Learning-based approaches have also been taken, e.g.,~\cite{JIN20175301}.
However, these approaches lack formal stability, optimality, and robustness guarantees. In~\cite{nilsson2015entropy, nilsson2017generalized}, a decentralized feedback controller for traffic control was proposed, referred to as the Generalized Proportional Allocation (GPA) controller, which has both stability and maximal throughput guarantees. In those papers, an averaged control action for traffic signals in continuous time is given. Since the controller has several desired properties, it is motivated to investigate whether this controller performs well in a micro-simulator with more realistic traffic dynamics. First of all, under the assumption that the controller can measure the full queue lengths at each junction, the averaged controller is throughput optimal from a theoretical perspective. With this, we mean that when the traffic dynamics are modeled as a simple system of point queues, there exists no controller that can handle larger constant exogenous inflows to a network than this controller. This property of throughput-optimality also means that there are formal guarantees that the controller will not create gridlock situations in the network. As exemplified in~\cite{varaiya2013max}, feedback controllers that perform well for a single isolated junction may cause gridlock situations in a network setting.
At the same time, this controller requires very little information about the network topology and the traffic flow propagation. All the information the controller needs to determine the phase activation in a junction is the queue lengths on the incoming lanes to the junction and the static set of phases. These requirements on information make the controller fully distributed, i.e., to compute the control action in one junction, no information is required about the state in the other junctions.
The proposed traffic signal controller also has the property that it adjusts the cycle lengths depending on the demand. The fact that during higher demands the cycle lengths should be longer, to waste less service time due to phase shifts, has been suggested previously for open loop traffic signal control, see e.g.,~\cite{roess2011traffic}.
Another feedback control strategy for traffic signal control is the MaxPressure controller~\cite{Varaiya:13, varaiya2013max}. The MaxPressure controller utilizes the same idea as the BackPressure controller, proposed for communication networks in~\cite{tassiulas1992stability}. While the BackPressure controller controls both the routing (to which nodes packets should proceed after receiving service) and the scheduling (which subset of queues should be served), the MaxPressure controller only controls the latter, i.e., the phase activation but not the routing. More recently, due to the rapid development of autonomous vehicles, it has been proposed in~\cite{zaidi2018backpressure} to utilize the routing control from the BackPressure controller in traffic networks as well. The MaxPressure controller is also throughput optimal, but it requires information about the turning ratios at each junction, i.e., how the vehicles (on average) propagate from one junction to the neighboring junctions. Although various techniques for estimating those turning ratios have been proposed, for example~\cite{coogan2017traffic}, with more and more drivers or autonomous vehicles doing their path planning through some routing service, it is reasonable to believe that the turning ratios can change in an unpredictable way when a disturbance occurs in the traffic network.
If the traffic signal controller has information about the turning ratios, other control strategies are possible as well, for instance MPC-like control as proposed in~\cite{hao2018modelI, hao2018modelII, grandinetti2018distributed} and robust control as proposed in~\cite{bianchin2018network}.
In~\cite{nilsson2018} we presented the first discretization and validation results of the GPA in a microscopic traffic simulator. Although the results were promising, the validations were only performed on an artificial network and only compared with a fixed-time traffic signal controller. Moreover, the GPA was only discretized in a way such that the full cycle is activated. In this paper, we extend the results in~\cite{nilsson2018} by showing another discretization that does not have to utilize the full cycle, and we also perform new validations. The new validations both compare the GPA to the MaxPressure controller on an artificial network (the reason for choosing an artificial network will be explained later), and validate the GPA controller in a realistic scenario, namely for the city of Luxembourg during a whole day.
The outline of the paper is as follows: In Section~\ref{sec:problem} we present the model we are using for traffic signals, together with a problem formulation of the traffic signal control problem. In Section~\ref{sec:controllers} we present two different discretizations of the GPA that we are using in this study, and also give a brief description of the MaxPressure controller. In Section~\ref{sec:comparision} we compare the GPA controller with the MaxPressure controller on an artificial Manhattan-like grid, and in Section~\ref{sec:lust} we investigate how the GPA controller performs in a realistic traffic scenario. The paper is concluded with some ideas for further research.
\subsection{Notation}
We let $\mathbb{R}_+$ denote the non-negative reals. For finite sets $\mathcal A, \mathcal B$, we let $\mathbb{R}_+^{\mathcal A}$ denote the non-negative vectors indexed by the elements of $\mathcal A$, and $\mathbb{R}_+^{\mathcal A \times \mathcal B}$ the non-negative matrices indexed by the elements of $\mathcal A$ and $\mathcal B$.
\section{Model and Problem Formulation}\label{sec:problem}
In this section, we describe the model for traffic signals to be used throughout the paper together with the associated control problem.
We consider an arterial traffic network with signalized junctions. Let $\mathcal J$ denote the set of signalized junctions. For a junction $j \in \mathcal J$, we let $\mathcal L^{(j)}$ be the set of incoming lanes, on which the vehicles can queue up. The set of all signalized lanes in the whole network will be denoted by $\mathcal L = \cup_{j \in \mathcal J} \mathcal L^{(j)}$. For a lane $l \in \mathcal L^{(j)}$, the queue-length at time $t$ --measured in the number of vehicles-- is denoted by $x_l(t)$.
Each junction has a predefined set of \emph{phases} $\mathcal P^{(j)}$ of size $n_{p_j}$. For simplicity, we assume that phases $p_i \in \mathcal P^{(j)}$ are indexed by $i = 1, \ldots, n_{p_j}$. A phase $p \in \mathcal P^{(j)}$ is a subset of incoming lanes to the junction $j$ that can receive green light simultaneously. Throughout the paper, we will assume that for each lane $l \in \mathcal L$, there exists only one junction $j \in \mathcal J$ and at least one phase $p \in \mathcal P^{(j)}$ such that $l \in p$.
The phases are usually constructed such that the vehicles' paths in a junction do not cross each other, in order to avoid collisions.
Examples of this will be shown later in this paper. After a phase has been activated, it is common to signal to the drivers that the traffic light is turning red and to give time for vehicles that are in the middle of the junction to leave it before the next phase is activated. Such time is usually referred to as clearance time. Throughout the paper we shall refer to those phases only containing red and yellow traffic lights as \emph{clearance phases} (in contrast to phases, which model when lanes receive a green light). We will assume that each phase activation is followed by a clearance phase activation. While we will let the phase activation time vary, we will make the quite natural assumption that the clearance phases have to be activated for a fixed time.
For a given junction $j \in \mathcal J$, the set of phases can be described through a phase matrix $P^{(j)}$, where
$$P_{il}=\left\{\begin{array}{ll}1&\text{ if lane }l\text{ belongs to the }i\text{-th phase}\\ 0&\text{ otherwise\,.}\end{array}\right.$$
While the phase matrix does not contain the clearance phases, to each phase $p \in \mathcal P^{(j)}$ we will associate a clearance phase, denoted $p'$. We denote the set of real phases and their corresponding clearance phases by $\bar{\mathcal P}^{(j)}$.
The controller's task in a signalized junction is to define a \emph{signal program}, $\mathcal T^{(j)} = \{ (p, t_\text{end} ) \in \bar{\mathcal P}^{(j)} \times \mathbb{R}_+ \}$, where each phase $p$ is activated until its end time $t_\text{end}$. When $t = t_\text{end}$, the phase $p'$ with the smallest end time $t'_\text{end} > t$ among the pairs $(p', t'_\text{end}) \in \mathcal T^{(j)}$ is activated next. Formally, we can define the function $c^{(j)}(t)$ that gives the phase that is activated at time $t$ as follows
\begin{align*}
c^{(j)} (t) = \{ & p : (p, t_\text{end}) \in { \mathcal T}^{(j)} \mid \\ & t_\text{end} > t \text{ and } t_\text{end} \leq t'_\text{end} \textrm{ for all } (p', t'_\text{end}) \in { \mathcal T}^{(j)} \} \, .
\end{align*}
In other words, $c^{(j)}(t)$ returns the phase whose end time is the smallest one greater than the current time $t$.
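For illustration, the signal program can be represented as a list of (phase, end time) pairs and the lookup $c^{(j)}(t)$ implemented directly from its definition; the phase names below are placeholders.
\begin{verbatim}
def active_phase(program, t):
    """Return the phase active at time t for a signal program given as a
    list of (phase, t_end) pairs: the phase whose end time is the smallest
    one strictly greater than t (None if the program has ended)."""
    candidates = [(t_end, phase) for phase, t_end in program if t_end > t]
    return min(candidates)[1] if candidates else None

# Illustrative program: two phases of 25 s, each followed by a 5 s clearance
program = [("p1", 25), ("p1_clear", 30), ("p2", 55), ("p2_clear", 60)]
assert active_phase(program, 10) == "p1"
assert active_phase(program, 27) == "p1_clear"
\end{verbatim}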
\medskip
\begin{example} \label{ex:phasesandprogram}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[scale=0.5]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\draw [->, thick] (-1, -0.5) to [bend right] (0.3, 1);
\draw [->, thick] (-1, -0.5) to (1, -0.5);
\draw [->, thick] (-1, -0.5) to [bend left] (-0.7, -1);
\draw [->, thick] (1, 0.5) to [bend left] (0.7, 1);
\draw [->, thick] (1, 0.5) to (-1, 0.5);
\draw [->, thick] (1, 0.5) to [bend right] (-0.3, -1);
\node (l1) at (-1.5, -0.5) {$l_1$};
\node (l2) at (0.5, -1.5) {$l_2$};
\node (l3) at (1.5, 0.5) {$l_3$};
\node (l4) at (-0.5, 1.5) {$l_4$};
\end{scope}
\begin{scope}[scale=0.5, shift={(7, 0)}]
\begin{scope}[rotate=90]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\draw [->, thick] (-1, -0.5) to [bend right] (0.3, 1);
\draw [->, thick] (-1, -0.5) to (1, -0.5);
\draw [->, thick] (-1, -0.5) to [bend left] (-0.7, -1);
\draw [->, thick] (1, 0.5) to [bend left] (0.7, 1);
\draw [->, thick] (1, 0.5) to (-1, 0.5);
\draw [->, thick] (1, 0.5) to [bend right] (-0.3, -1);
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\node (l1) at (-1.5, -0.5) {$l_1$};
\node (l2) at (0.5, -1.5) {$l_2$};
\node (l3) at (1.5, 0.5) {$l_3$};
\node (l4) at (-0.5, 1.5) {$l_4$};
\end{scope}
\begin{scope}[scale=0.5]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\end{tikzpicture}
\caption{The phases for the junction in Example~\ref{ex:phasesandprogram}. This junction has four incoming lanes and two phases, $p_1 = \{l_1, l_3\}$ and $p_2 = \{l_2, l_4\}$. Hence there is no specific lane for turning left.}
\label{fig:phasesexamplejunc}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.2]
\draw[->] (0, 0) -- (6.5, 0) node[right] {$t$};
\draw[-] (0, 0.1) -- (0, -0.1) node[below] {$0$} ;
\draw[-] (2.5, 0.1) -- (2.5, -0.1) node[below] {$25$} ;
\draw[-] (3, 0.1) -- (3, -0.1) node[below] {$30$} ;
\draw[-] (5.5, 0.1) -- (5.5, -0.1) node[below] {$55$} ;
\draw[-] (6, 0.1) -- (6, -0.1) node[below] {$60$};
\node (c) at (0, 0.4) {$c(t)$};
\node (p1) at (1.25, 0.4) {$p_1$};
\node (p1p) at (2.75, 0.4) {$ p_1'$};
\node (p2) at (4.25, 0.4) {$p_2$};
\node (p2p) at (5.75, 0.4) {$ p_2'$};
\begin{scope}[scale=0.20, shift={(6, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[mygreen] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[mygreen] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myred] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myred] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(13.75, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myyellow] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myyellow] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myred] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myred] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(21.25, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myred] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myred] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[mygreen] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[mygreen] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(28.75, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myred] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myred] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myyellow] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myyellow] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(6, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(13.75, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(21.25, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(28.75, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\end{tikzpicture}
\caption{Example of a signal program for the junction in Example~\ref{ex:phasesandprogram}. In this example the signal program is $\mathcal T = \{ (p_1, 25), (p_1', 30), (p_2, 55), (p_2', 60)\}$.}
\label{fig:signaltiming}
\end{figure}
Consider the junction in Fig.~\ref{fig:phasesexamplejunc} with the incoming lanes numbered as in the figure. In this case the drivers turning left have to solve the collision avoidance by themselves. The phase matrix is
$$P = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} \, .$$
An example of a signal program is shown in Fig.~\ref{fig:signaltiming}. Here the program is $\mathcal T = \{ (p_1, 25), (p_1', 30), (p_2, 55), (p_2', 60)\}$, which means that both phases are activated for $25$ seconds each, and the clearance phases are activated for $5$ seconds each.
\end{example}
\medskip
Moreover, we let
$$T^{(j)} = \max\{t_\text{end} \mid (p, t_\text{end}) \in {\mathcal T}^{(j)} \}$$
denote the time when the signal program for junction $j$ ends, and hence a new signal timing program has to be determined.
\section{Feedback Controllers}\label{sec:controllers}
In this section, we present three different traffic signal controllers that all determine the signal program. The first two are discretizations of the GPA controller, where the first one makes sure that all the clearance phases are activated during one cycle, and the second one only activates the clearance phases if their corresponding phase has been activated. The third controller is the MaxPressure controller.
All three controllers are feedback-based, i.e., when one signal program has reached its end, the current queue lengths are used to determine the upcoming signal program. Moreover, the GPA controllers are fully distributed, in the sense that to determine the signal program in one junction, the controller only needs information about the queue lengths on the incoming lanes of that junction. The MaxPressure controller is also distributed in the sense that it does not require network-wide information, but it requires queue length information from the neighboring junctions as well.
For all of the controllers presented in this section, we assume, for simplicity of presentation, that after a phase has been activated, a clearance phase has to be activated for a fixed amount of time $T_w > 0$ that is independent of which phase has just been activated.
\subsection{GPA with Full Clearance Cycles} \label{sec:GPAfull}
For this controller, we assume that all the clearance phases have to be activated for each cycle. When $t = T^{(j)}$, a new signal program is computed by solving the following convex optimization problem:
\begin{equation}\label{eq:gpa}
\begin{aligned}
\optmax{\begin{matrix} \hspace{0.2em} \nu\in\mathbb{R}_+^{n_{p_j}} \\ w\in\mathbb{R}_+ \end{matrix}} & \sum_{l \in \mathcal L^{(j)}} x_l(t) \log\left( (P^T\nu)_l \right) + \kappa \log(w) \, , \\
\text{subject to}\quad & \sum_{1 \leq i \leq n_{p_j}} \nu_i + w = 1 \,, \\
& w \geq \bar{w} \, .
\end{aligned}
\end{equation}
In the optimization problem above, $\kappa > 0$ and $\bar{w} \geq 0$ are tuning parameters for the controller, and their interpretation will be discussed later.
The vector $\nu$ in the solution of the optimization problem above determines the fraction of the cycle time during which each phase should be activated, with element $i$ of $\nu$ giving the fraction for phase $p_i$. The variable $w$ tells how large a fraction of the cycle time should be allocated to the clearance phases. Observe that as long as the queue lengths are finite, $w$ will be strictly greater than zero. Since we assume that each clearance phase has to be activated for a fixed amount of time $T_w > 0$, the total cycle length $T_\text{cyc}$ for the upcoming cycle can be computed by
$$T_\text{cyc} = \frac{n_{p_j} T_w}{w} \, .$$
With the knowledge of the full-cycle length, the signal program for the upcoming cycle can be computed according to Algorithm~\ref{algo:gpafull}.
Although the optimization problem can be solved in real time using convex solvers, it can also be solved analytically in special cases. One such case is when the phases are orthogonal, i.e., every incoming lane belongs to exactly one phase. If the phases are orthogonal, then $P^T \mathbbm{1} = \mathbbm{1}$. In the case of orthogonal phases and $\bar{w} = 0$, the solution to the optimization problem in~\eqref{eq:gpa} is given by
\begin{equation} \label{eq:gpaorthogonal}
\begin{aligned}
\nu_i (x(t)) &= \frac{\sum_{l \in \mathcal L^{(j)}}P_{il}x_l(t)}{\kappa+\sum_{l \in \mathcal L^{(j)}} x_l(t)}\,,\qquad i=1,\ldots,n_{p_j}\,, \\
w(x(t)) &= \frac{\kappa}{\kappa+\sum_{l \in \mathcal L^{(j)}} x_l(t)} \,.
\end{aligned}
\end{equation}
From the expression for $w$ above, a direct expression for the total cycle length can be obtained:
\begin{equation*}
\displaystyle T_\text{cyc} = T_w n_{p_j} +\frac{T_w n_{p_j}}{\kappa}{\sum_{l \in \mathcal L^{(j)}} x_l(t)} \, .\label{eq:cycletime}
\end{equation*}
From the expressions above we can observe a few things. First, the fraction of the cycle during which each phase is activated is proportional to the queue lengths in that phase, which explains why we call this control strategy generalized proportional allocation. Moreover, we get an interpretation of the tuning parameter $\kappa$: it determines how the cycle length $T_\text{cyc}$ scales with the current queue lengths. If $\kappa$ is small, even small queue lengths will cause long cycles, while if $\kappa$ is large the cycles will be short even for large queues. Hence, a too small $\kappa$ may give too long cycles, which can result in lanes getting more green light than needed, so that the controller ends up giving green light to empty lanes while vehicles in other lanes are waiting for service. On the other hand, a too large $\kappa$ may make the cycles so short that the fraction of the cycle during which each phase is activated is too short for the drivers to react to.
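
To make the closed-form expressions above concrete, the following minimal Python sketch (our own illustration, not the implementation used in the simulations; the junction data are hypothetical) evaluates~\eqref{eq:gpaorthogonal} and the resulting cycle length for an isolated junction with orthogonal phases.
\begin{verbatim}
import numpy as np

def gpa_orthogonal(x, P, kappa, T_w):
    """Closed-form GPA split for orthogonal phases (bar_w = 0).

    x     : queue lengths on the incoming lanes
    P     : phase matrix, P[i, l] = 1 if lane l belongs to phase i
    kappa : tuning parameter
    T_w   : clearance-phase duration [s]
    """
    total = x.sum()
    nu = P @ x / (kappa + total)      # fraction of the cycle per phase
    w = kappa / (kappa + total)       # fraction left for clearance phases
    T_cyc = P.shape[0] * T_w / w      # total cycle length
    return nu, w, T_cyc

# Hypothetical two-phase junction with four incoming lanes.
P = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1]])
x = np.array([12.0, 3.0, 8.0, 5.0])   # queued vehicles per lane
nu, w, T_cyc = gpa_orthogonal(x, P, kappa=10.0, T_w=5.0)
print(nu, w, T_cyc)
\end{verbatim}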
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{GPA with Full Clearance Cycles}\label{algo:gpafull}
\DontPrintSemicolon
\KwData{Current time $t$, local queue lengths $x^{(j)}(t)$, phase matrix $P^{(j)}$, clearance time $T_w$, tuning parameters $\kappa, \bar w$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
$(\nu, w)$ $\leftarrow$ Solution to~\eqref{eq:gpa} given $x^{(j)}(t), P^{(j)}, \kappa, \bar w$\;
$T_\text{cyc} \leftarrow n_{p_j} \cdot T_w / w$ \;
$t_\text{end} \leftarrow t$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
$t_\text{end} \leftarrow t_\text{end} + \nu_i \cdot T_\text{cyc}$ \;
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t_\text{end})$ \Comment*[r]{Add phase $p_i$}
$t_\text{end} \leftarrow t_\text{end} + T_w$ \;
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t_\text{end})$ \Comment*[r]{Add clearance phase $p_i'$}
}
\end{algorithm}
\end{figure}
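
As a complement to Algorithm~\ref{algo:gpafull}, the sketch below (again only an illustration, with hypothetical values for the split) builds the signal program as a list of pairs of phase and end time from a given $(\nu, w)$.
\begin{verbatim}
def full_cycle_program(t, nu, w, T_w):
    """Build one cycle of the signal program (sketch of Algorithm 1).

    Returns a list of (phase name, t_end) pairs, alternating every
    phase p_i with its clearance phase p_i'.
    """
    T_cyc = len(nu) * T_w / w
    program, t_end = [], t
    for i, nu_i in enumerate(nu, start=1):
        t_end += nu_i * T_cyc
        program.append(("p%d" % i, t_end))       # phase p_i
        t_end += T_w
        program.append(("p%d'" % i, t_end))      # clearance phase p_i'
    return program

# Hypothetical split for a two-phase junction (cf. the previous sketch).
nu, w = [20 / 38, 8 / 38], 10 / 38
print(full_cycle_program(0.0, nu, w, T_w=5.0))
\end{verbatim}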
\begin{remark}
In~\cite{nilsson2017generalized} we showed that the averaged continuous-time GPA controller stabilizes the network, and hence keeps the queue lengths bounded. Moreover, this averaged version is throughput optimal, which means that no controller can handle more exogenous inflow to the network than this controller.
\end{remark}
However, when the controller is discretized, the following example shows that an upper bound on the cycle length, i.e., $\bar{w} > 0$, is required to guarantee stability even for an isolated junction.
\begin{example}\label{ex:unstableunboundedcycle}
Consider a junction with two incoming lanes with unit flow capacity, both having their own phase, and let the exogenous inflows be $\lambda_1 = \lambda_2 = \lambda$, $T_w = 1$, $\bar w = 0$, $x_1(0) = A > 0$, and $x_2(0) = 0$. The control signals and the cycle time for the first iteration are then given by
\begin{align*}
u_1(x(0)) &= \frac{A}{A+\kappa} \, , \\
u_2(x(0)) &= 0 \, , \\
T(x(0)) &= \frac{A+\kappa}{\kappa}.
\end{align*}
Observe that the cycle time $T(x(0))$ is strictly increasing with $A$. After one full service cycle, i.e., at $t_1 = T(x(0))$, the queue lengths are
\begin{align*}
x_1(t_1) &= A + T(x(0)) \left(\lambda - \frac{A}{A+\kappa} \right)= \overbrace{A + \lambda \frac{A+\kappa}{\kappa} - \frac{A}{\kappa}}^{f(A)} \, , \\
x_2(t_1) &= T(x(0)) \lambda = \lambda \left( \frac{A+ \kappa}{\kappa} \right).
\end{align*}
If $x_1(t_1) = 0$, then due to symmetry, the analysis of the system can be repeated in the same way with a new initial condition. To make sure that one queue always empties during the service cycle, it must hold that $f(A) \leq 0$. Moreover, to make sure that the other queue grows, it must also hold that $x_2(t_1) > A$, which can be equivalently expressed as
\begin{align*}
A \kappa + \lambda(A + \kappa) - A &\leq 0 \, , \\
A \kappa - \lambda(A+\kappa) &< 0 \, .
\end{align*}
The choice of $\lambda = \kappa = 0.1$ and $A= 1$ is one set of parameters satisfying the constraints above, and will hence make the queue lengths and cycle times grow unboundedly. How queue lengths and cycle times evolve in this case is shown in Fig.~\ref{fig:unstableunboundedcycle}.
\end{example}
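
The recursion in the example can also be checked numerically. The short sketch below (illustration only) iterates the update for the parameters $\lambda = \kappa = 0.1$, $T_w = 1$ and $A = 1$, under the assumption, verified above, that the served queue empties during each cycle, and prints how the cycle time and the queue on the idle lane grow.
\begin{verbatim}
# Worst-case recursion from the example: at the start of each cycle one
# queue holds A vehicles and the other is empty; after the cycle the roles
# swap and the new initial queue is lambda * (A + kappa) / kappa.
lam, kappa, T_w = 0.1, 0.1, 1.0
A = 1.0
for cycle in range(1, 11):
    T_cyc = (A + kappa) / kappa * T_w   # cycle time chosen by the controller
    A = lam * (A + kappa) / kappa       # queue built up on the idle lane
    print("cycle %2d: T_cyc = %5.1f s, next initial queue = %4.2f"
          % (cycle, T_cyc, A))
\end{verbatim}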
\begin{figure}
\centering
\input{tikzpictures/exampleblowup.tikz}
\caption{How the traffic volumes evolve in time together with the cycle times for the system in Example~\ref{ex:unstableunboundedcycle}. We can observe that the cycle length increases for each cycle.}
\label{fig:unstableunboundedcycle}
\end{figure}
\medskip
Imposing an upper bound on the cycle length, and hence a lower bound on $w$, will then shrink the throughput region. An upper bound on the cycle length may occur naturally, due to the fact that the sensors cover a limited area and hence the measurements will saturate. However, we will later observe in the simulations that $\bar{w} > 0$ may improve the performance of the controller in a realistic scenario, even when saturation of the queue length measurements is possible.
\subsection{GPA with Shortened Cycles}\label{sec:GPAshorted}
One possible drawback of the controller in Section~\ref{sec:GPAfull} is that it has to activate all the clearance phases in one cycle. This property implies that if the junction is empty when the signal program is computed, it will take $n_{p_j} T_w$ seconds until a new signal program is computed. Motivated by this, we also present a version of the GPA where a clearance phase is only activated if its corresponding phase has been activated. If we let $n_{p_j}'$ denote the number of phases that will be activated during the upcoming cycle, the total cycle time is given by
$$T_\text{cyc} = \frac{n_{p_j}' T_w}{w} \, .$$
How to compute the signal program in this case is shown in Algorithm~\ref{algo:gpashorted}.
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{GPA with Shortened Cycles}\label{algo:gpashorted}
\DontPrintSemicolon
\KwData{Current time $t$, local queue lengths $x^{(j)}(t)$, phase matrix $P^{(j)}$, clearance time $T_w$, tuning parameters $\kappa, \bar w$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
$(\nu, w)$ $\leftarrow$ Solution to~\eqref{eq:gpa} given $x^{(j)}(t), P^{(j)}, \kappa, \bar w$\;
\Comment*[l]{Compute the number of phases to be activated}
$n_{p_j}' \leftarrow 0$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
\If{$\nu_i > 0$}{
$n_{p_j} ' \leftarrow n_{p_j}' + 1$ \;
}
}
\uIf{$n_{p_j} ' > 0$}{
\Comment*[l]{If vehicles are present on some phases, activate those}
$T_\text{cyc} \leftarrow n'_{p_j} \cdot T_w / w$ \;
$t_\text{end} \leftarrow t$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
\If{$\nu_i > 0$} {
$t_\text{end} \leftarrow t_\text{end} + \nu_i \cdot T_\text{cyc}$ \;
\Comment*[l]{Add phase $p_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t_\text{end})$ \;
$t_\text{end} \leftarrow t_\text{end} + T_w$ \;
\Comment*[l]{Add clearance phase $p'_i$ }
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t_\text{end})$
}}
}
\Else{
\Comment*[l]{If no vehicles are present, hold a clearance phase for one time unit}
$\mathcal T^{(j)} \leftarrow (p'_1, t+1)$
}
\end{algorithm}
\end{figure}
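
Relative to the previous sketch, the only change in Algorithm~\ref{algo:gpashorted} is that phases with $\nu_i = 0$, and their clearance phases, are skipped when the program is built; a hypothetical variant along those lines is sketched below.
\begin{verbatim}
def shortened_cycle_program(t, nu, w, T_w, eps=1e-9):
    """Signal program that skips phases with (numerically) zero split."""
    active = [i for i, nu_i in enumerate(nu, start=1) if nu_i > eps]
    if not active:
        return [("p1'", t + 1.0)]   # no vehicles: hold a clearance phase
    T_cyc = len(active) * T_w / w
    program, t_end = [], t
    for i in active:
        t_end += nu[i - 1] * T_cyc
        program.append(("p%d" % i, t_end))       # phase p_i
        t_end += T_w
        program.append(("p%d'" % i, t_end))      # clearance phase p_i'
    return program
\end{verbatim}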
\subsection{MaxPressure}
As mentioned in the introduction, the MaxPressure controller is another throughput-optimal feedback controller for traffic signals. The controller computes the difference between the queue lengths and their downstream queue lengths in each phase to determine each phase's pressure. It then activates the phase with the most pressure for a fixed time interval. To compute the pressure, the controller needs information about where the outflow from every queue will proceed. To model this, we introduce the routing matrix $R \in \mathbb{R}_+^{\mathcal E \times \mathcal E}$, whose element $R_{ij}$ gives the fraction of vehicles that will proceed from lane $i$ in the current junction to lane $j$ in a downstream junction.
With the knowledge of the routing matrix, and under the assumption that the flow rates are the same for all phases, the pressure $w_i$ for each phase $p_i \in \mathcal P^{(j)}$ can then be computed as
$$w_i = \sum_{l \in p_i} \biggl( x_l(t) - \sum_k R_{lk} x_k(t) \biggr) \, .$$
The phase that should be activated is then any phase in the set $ \argmax_i w_i \,.$
Apart from the routing matrix, the MaxPressure controller has one tuning parameter, the phase duration $d > 0$. This parameter determines how long a phase is activated, and hence how long it takes until the pressures are resampled and a new phase-activation decision is made.
How to compute the signal program with the MaxPressure controller is shown in Algorithm~\ref{algo:maxpressure}.
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H] \DontPrintSemicolon
\caption{MaxPressure}\label{algo:maxpressure}
\KwData{Current time $t$, local queue lengths $x(t)$, phase matrix $P^{(j)}$, routing matrix $R$, phase duration $d$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
$w_i \leftarrow 0$ \;
\For{$l \in \mathcal L^{(j)}$} {
\If{$l \in p_i^{(j)}$} {
$w_i \leftarrow w_i + x_l(t) - \sum_{k} R_{lk} x_k(t)$
}
}
}
$i \leftarrow \argmax_i w_i$ \;
\Comment*[l]{Add phase $p_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t + d)$ \;
\Comment*[l]{Add clearance phase $p'_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t+ d + T_w)$ \;
\end{algorithm}
\end{figure}
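
For illustration, a compact version of the pressure computation and phase selection in Algorithm~\ref{algo:maxpressure} can be written as follows (a sketch under the assumption that $x$, $P$ and $R$ are indexed over the same set of lanes; the example junction is hypothetical).
\begin{verbatim}
import numpy as np

def max_pressure_program(t, x, P, R, d, T_w):
    """MaxPressure sketch: activate the phase with the largest pressure.

    x : queue lengths, P : phase matrix of this junction,
    R : routing matrix with R[l, k] the fraction of lane-l vehicles
        proceeding to lane k, d : phase duration [s], T_w : clearance time.
    """
    pressures = P @ (x - R @ x)   # w_i = sum_{l in p_i} (x_l - sum_k R_lk x_k)
    i = int(np.argmax(pressures)) + 1
    return [("p%d" % i, t + d), ("p%d'" % i, t + d + T_w)]

# Hypothetical junction: two phases over four lanes, empty downstream lanes.
P = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
R = np.zeros((4, 4))
print(max_pressure_program(0.0, np.array([12.0, 3.0, 8.0, 5.0]),
                           P, R, d=10.0, T_w=5.0))
\end{verbatim}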
\section{Comparison Between GPA and MaxPressure} \label{sec:comparision}
\begin{figure}
\centering
\input{tikzpictures/manhattangrid.tikz}
\caption{The Manhattan-like network used in the comparison between GPA and MaxPressure. }
\label{fig:network}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\input{tikzpictures/2by2junction.tikz}
&
\input{tikzpictures/2by3junction.tikz} \\
2 by 2 junction & 2 by 3 junction
\\
& \\
\input{tikzpictures/3by2junction.tikz}
&
\input{tikzpictures/3by3junction.tikz} \\
3 by 2 junction & 3 by 3 junction
\end{tabular}
\caption{The four different types of junctions present in the Manhattan grid, together with their phases.} \label{fig:junction}
\end{figure}
\subsection{Simulation setting}
To compare the proposed controller and the MaxPressure controller, we simulate both controllers on an artificial Manhattan-like grid with artificial demand.
The simulator we are using is the open-source microscopic simulator SUMO~\cite{SUMO2012}, which simulates every single vehicle's behavior in the traffic network.
A schematic drawing of the network is shown in Fig.~\ref{fig:network}. In a setting like this, we can experiment with the turning ratios and provide the MaxPressure controller with both correct and incorrect turning ratios. This allows us to investigate the robustness properties of both controllers.
The Manhattan grid in Fig.~\ref{fig:network} has ten bidirectional north--south streets (indexed A to J) and ten bidirectional east--west streets (indexed 1 to 10). All streets with an odd number or indexed by the letters A, C, E, G or I consist of one lane in each direction, while the others consist of two lanes in each direction. The speed limit on each lane is 50 km/h. The distance between adjacent junctions is three hundred meters. Fifty meters before each junction, every street has an additional lane, reserved for vehicles that want to turn left. Due to the varying number of lanes, four different junction topologies exist, all shown in Fig.~\ref{fig:junction}, together with the set of possible phases. Each junction is equipped with sensors on the incoming lanes that can measure the number of vehicles queuing up to fifty meters from the junction. The sensors measure the queue lengths by the number of stopped vehicles.
Since the scenario is artificial, we can generate demand with prescribed turning ratios and hence let the MaxPressure controller run in an ideal setting. For the demand generation, we assume that at each junction a vehicle will turn left with probability $0.2$, go straight with probability $0.6$, and turn right with probability $0.2$. We assume that all vehicles depart from lanes connected to the boundary of the network, and all vehicles end their trips when they have reached the boundary of the network. In other words, no vehicles depart or arrive inside the grid. We study the controllers' performance for three different demands, where the demand is determined by the probability that a vehicle departs from each boundary lane each second. We denote this probability $\delta$; the three demands correspond to $\delta = 0.05$, $\delta = 0.1$ and $\delta = 0.15$. We generate vehicles for $3600$ seconds and then simulate until all vehicles have left the network.
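
The demand described above can be generated along the following lines; this is a simplified sketch with hypothetical helper names, not the actual route-file generation used for the SUMO scenario.
\begin{verbatim}
import random

def generate_departures(boundary_lanes, delta, horizon=3600, seed=0):
    """Bernoulli departures: every second, each boundary lane spawns a
    vehicle with probability delta during the generation horizon."""
    rng = random.Random(seed)
    departures = []   # list of (departure time [s], origin lane)
    for t in range(horizon):
        for lane in boundary_lanes:
            if rng.random() < delta:
                departures.append((t, lane))
    return departures

def sample_turn(rng):
    """Turning decision at a junction: left 0.2, straight 0.6, right 0.2."""
    u = rng.random()
    return "left" if u < 0.2 else ("straight" if u < 0.8 else "right")
\end{verbatim}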
We also compare the results for the GPA controller and the MaxPressure controller with a standard fixed time (FT) controller and a proportional fair (PF) controller, i.e., the GPA controller with full clearance cycles but with $\kappa = 0$ and a prescribed fixed cycle length. For the fixed time controller, the phases which contain a straight movement are activated for $30$ seconds and phases only containing left or right turn movements are activated for $15$ seconds. The clearance time for each phase is still set to $5$ seconds. This means that the cycle length for each of the four types of junctions will be $110$ seconds. This is also the fixed cycle time we are using for the proportional fair controller.
\subsection{GPA Results}
Since the phases in this scenario are all orthogonal, the expressions in~\eqref{eq:gpaorthogonal} can be used to solve the optimization problem in~\eqref{eq:gpa}. The tuning parameter $\bar{w}$ is set to $\bar{w} = 0$ for all simulations. In Table~\ref{tab:gpamanhattan} we show how the total travel time varies for the GPA controller with shortened cycles for different values of $\kappa$. For the demand $\delta = 0.15$ and $\kappa = 1$ a gridlock situation occurs, probably due to the fact that vehicle queues spill back into upstream junctions. We can see that $\kappa = 10$ seems to be a good choice for $\delta = 0.10$ and $\delta = 0.15$, while a higher $\kappa$ slightly improves the total travel time for the lowest demand investigated. Letting $\kappa = 10$ has been shown to be reasonable for other demand scenarios in the same network setting, as observed in~\cite{nilsson2018}. How the total queue length varies with time for $\kappa = 5$ and $\kappa = 10$ is shown in Fig.~\ref{fig:gpamanhattan}.
\begin{table}
\centering
\caption{GPA with Shortened Cycles -- Manhattan Scenario}
\label{tab:gpamanhattan}
\begin{tabular}{rcc}
$\kappa$ & $\delta$ & Total Travel Time [h] \\ \hline \hline
$1$ & $0.05$ & $1398$ \\
$5$ & $0.05$ & \phantom{0}$715$ \\
$10$ & $0.05$ & \phantom{0}$699$ \\
$15$ & $0.05$ & \phantom{0}$696$ \\
$20$ & $0.05$ & \phantom{0}$690$ \\
$1$ & $0.10$ & $7636$ \\
$5$ & $0.10$ & $1898$ \\
$10$ & $0.10$ & $1992$ \\
$15$ & $0.10$ & $2263$ \\
$20$ & $0.10$ & $2495$ \\
$1$ & $0.15$ & $+\infty$ \\
$5$ & $0.15$ & $5134$ \\
$10$ & $0.15$ & $4498$ \\
$15$ & $0.15$ & $5140$ \\
$20$ & $0.15$ & $6050$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6000, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.15.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.05.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.10.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.15.csv};
\legend{GPA $\kappa=10 \, \delta = 0.05$, GPA $\kappa=10 \, \delta = 0.10$, GPA $\kappa=10 \, \delta = 0.15$ , GPA $\kappa=5 \, \delta = 0.05$, GPA $\kappa=5 \, \delta = 0.10$, GPA $\kappa=5 \, \delta = 0.15$ }
\end{axis}
\end{tikzpicture}
\caption{How the queue length varies with time when the GPA with shortened cycles is used in the Manhattan grid. The GPA is tested with two different values of $\kappa=5,10$ for the three demand scenarios $\delta = 0.05, 0.10, 0.15$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:gpamanhattan}
\end{figure}
\subsection{MaxPressure Results}
The MaxPressure controller decides its control action based not only on the queue lengths on the incoming lanes, but also on those on the downstream lanes. It is not always clear in which downstream lane a vehicle will end up after leaving the junction. If a vehicle can choose between several lanes that are all valid for its path, the vehicle's lane choice will be determined during the simulation and depends upon how many other vehicles are occupying the possible lanes. Because of this, we assume that if a vehicle can choose between several lanes, it will try to join the shortest one. To exemplify how the turning ratios are estimated in those situations, assume that the overall probability that a vehicle turns right is $0.2$ and that it goes straight is $0.6$. If a vehicle going straight can choose between lanes $l_1$ and $l_2$, but $l_2$ is also used by vehicles turning right, then the probability that the vehicle going straight will queue up in lane $l_1$ is assumed to be $0.4$ and the probability that it will queue up in lane $l_2$ is estimated to be $0.2$.
To also investigate the MaxPressure controller's robustness with respect to the routing information, we perform simulations both when the controller has the correct information about the turning probabilities, i.e., that a vehicle turns right with probability $0.2$, continues straight with probability $0.6$ and turns left with probability $0.2$, and when it has incorrect information. In the latter case, the controller instead assumes that with probability $0.6$ the vehicle will turn right, with probability $0.3$ the vehicle will proceed straight and with probability $0.1$ the vehicle will turn left. In the simulations, we consider three different phase durations, $d=10$ seconds, $d=20$ seconds and $d=30$ seconds.
How the total queue length varies over time for the different demands is shown in Fig.~\ref{fig:mpmanhattand0.05}, Fig.~\ref{fig:mpmanhattand0.10}, and Fig.~\ref{fig:mpmanhattand0.15}. The total travel times, both when the MaxPressure controller operates with the correct turning ratios and with the wrong ones, are shown in Table~\ref{tab:mpmanhattan}. From these results, we can conclude that the shortest phase duration, $d = 10$, is the most efficient for all demands. This is probably because, with a longer phase duration, the activation time becomes larger than the time it takes to empty the measurable part of the queue. Another interesting observation is that if the MaxPressure controller has wrong information about the turning ratios, its performance does not decrease significantly.
\begin{table}
\centering
\caption{MaxPressure -- Manhattan Scenario (TTT: total travel time, TR: turning ratios)}
\label{tab:mpmanhattan}
\begin{tabular}{cccc}
$d$ & $\delta$ & TTT correct TR [h] & TTT incorrect TR [h] \\ \hline \hline
$10$ & $0.05$ & 858 & 856\\
$20$ & $0.05$ & 1 079 & 1 102 \\
$30$ & $0.05$ & 1 172 & 1 193 \\
$10$ & $0.10$ & 1 865 & 1 864 \\
$20$ & $0.10$ & 2 254 & 2 312 \\
$30$ & $0.10$ & 2 690 & 2 718 \\
$10$ & $0.15$ & 3 511 & 3 488 \\
$20$ & $0.15$ & 3 992 & 4 102 \\
$30$ & $0.15$ & 5 579 & 5 590 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, , legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.05.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.05.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.05.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.05.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.05.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and wrong turning ratios (dashed). The demand is $\delta = 0.05$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:mpmanhattand0.05}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.10.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.10.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.10.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.10.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.10.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and wrong turning ratios (dashed). The demand is $\delta = 0.10$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:mpmanhattand0.10}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.15.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.15.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.15.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.15.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.15.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.15.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and wrong turning ratios (dashed). The demand is $\delta = 0.15$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:mpmanhattand0.15}
\end{figure}
\subsection{Summary of the Comparison}
To better observe the difference between the GPA and MaxPressure controllers, we have plotted the total queue length for the GPA controller with $\kappa = 5$ and $\kappa = 10$, and for the best MaxPressure configuration with $d = 10$. The results are shown in Fig.~\ref{fig:comparisonl0.05}, Fig.~\ref{fig:comparisonl0.10} and Fig.~\ref{fig:comparisonl0.15}. In the figures we have also included, for reference, the total queue lengths for the fixed time controller and the proportional fair controller. The total travel times for those controllers are given in Table~\ref{tab:fixedmanhattan}. When the demand is $\delta = 0.15$, a gridlock situation occurs with the proportional fair controller, just as happened with the GPA controller with $\kappa = 1$. From the simulations, we can conclude that, for this scenario, during high demands the MaxPressure controller performs better than the GPA controller, while during low demands the GPA performs better. One explanation for this could be that during low demands, adapting the cycle length is critical, while during high demands, when almost all the sensors are covered, it is more important to keep the queues balanced between the current and downstream lanes. The proportional fair controller, which does not adapt its cycle length, always performs the worst, and in most of the cases the fixed time controller performs second worst. It is only for the demand $\delta = 0.15$, and during the draining phase, that the fixed time controller performs better than the GPA controller.
\begin{table}
\centering
\caption{Fixed Time and Proportional Fair Control - Manhattan Scenario}
\label{tab:fixedmanhattan}
\begin{tabular}{ccc}
Controller & $\delta$ & Total Travel Time [h] \\ \hline \hline
FT & $0.05$ & $1201$ \\
FT & $0.10$ & $2555$ \\
FT & $0.15$ & $4642$ \\
PF & $0.05$ & $1694$ \\
PF & $0.10$ & $4165$ \\
PF & $0.15$ & $+\infty$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, , legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.05.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.05.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.05.csv};
\addplot[mark=none, color=black, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf_fixed_l0.05.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time, PF}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.05$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:comparisonl0.05}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5500, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, , legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.10.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.10.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.10.csv};
\addplot[mark=none, color=black, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf_fixed_l0.10.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time, PF}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.10$. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:comparisonl0.10}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6500, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, , legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.15.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.15.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.15.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.15.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.15$. Since the proportional fair controller (PF) creates a gridlock, it is not included in the comparison. To improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:comparisonl0.15}
\end{figure}
\section{LuST scenario} \label{sec:lust}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{luxnetwork.png}
\caption{The traffic network of Luxembourg city}
\label{fig:lust}
\end{figure}
To test the proposed controller in a realistic scenario, we make use of the Luxembourg SUMO Traffic (LuST) scenario presented in~\cite{codeca2017luxembourg}\footnote{The scenario files are obtained from \url{https://github.com/lcodeca/LuSTScenario/tree/v2.0}}. The scenario models the city center of Luxembourg during a full day, and the authors of~\cite{codeca2017luxembourg} have made several adjustments based on given population data when creating the scenario, to make it as realistic as possible.
The LuST network is shown in Fig.~\ref{fig:lust}. At each of the $199$ signalized junctions, we have added a lane-area detector to each incoming lane. The length of each detector is $100$ meters, or the full lane length if the lane is shorter than $100$ meters. These sensors are added to give the controller real-time information about the queue lengths at each junction.
As input to the system, we are using the Dynamic User Assignment demand data. In this dataset, the drivers try to take their shortest path (with respect to time) between their current position and destination. It is assumed that $70$ percent of the vehicles can recompute their shortest path while driving, and will do so every fifth minute. This rerouting possibility is introduced in order to model the fact that more and more drivers are using online navigation with real-time traffic state information, and will hence get updates about the optimal route choice.
In the LuST scenario, the phases are constructed in a more complex way and are not always orthogonal. For non-orthogonal phases, it is not always the case that all lanes receive a yellow light when a clearance phase is activated: if a lane receives a green light in the next phase as well, it will keep its green light during the clearance phase. This property makes it more difficult to shorten the cycle, and for that reason we choose to implement the controller which activates all the clearance phases in each cycle, i.e., the controller given in Section~\ref{sec:GPAfull}.
As mentioned, the phases in the LuST scenario are not orthogonal at every junction. Hence we have to solve the convex optimization problem in~\eqref{eq:gpa} to compute the phase activation. The computation is done using the solver CVXPY\footnote{\url{https://cvxpy.org}} in Python. Although the controller can be implemented in a distributed manner, the simulations in this paper are performed on a single computer. Despite the size of the network, and the fact that the communication via TraCI between the controller written in Python and SUMO slows down the simulations significantly, the simulations still run about $2.5$ times faster than real time. This shows that there is no problem with running this controller in a real-time setting.
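
A minimal CVXPY formulation of~\eqref{eq:gpa} for a single junction is sketched below; the function and variable names are ours, and the junction data in the usage example are hypothetical.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def solve_gpa(x, P, kappa, w_bar):
    """Solve the GPA problem for one junction.

    x : queue lengths on the incoming lanes,
    P : phase matrix (n_p x n_lanes).
    Returns the phase fractions nu and the clearance fraction w.
    """
    n_p = P.shape[0]
    nu = cp.Variable(n_p, nonneg=True)
    w = cp.Variable(nonneg=True)
    objective = cp.Maximize(cp.sum(cp.multiply(x, cp.log(P.T @ nu)))
                            + kappa * cp.log(w))
    constraints = [cp.sum(nu) + w == 1, w >= w_bar]
    cp.Problem(objective, constraints).solve()
    return nu.value, w.value

# Hypothetical non-orthogonal junction: lane 2 is shared by both phases.
P = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
nu, w = solve_gpa(np.array([4.0, 2.0, 6.0]), P, kappa=10.0, w_bar=0.1)
\end{verbatim}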
Since the demand is high during the peak hours in the scenario, gridlock situations occur. Those kinds of situations are unavoidable since there will be conflicts in the car-following model. To make the simulation continue to run, SUMO has a teleporting option that is utilized in the original LuST scenario. The original LuST scenario is configured such that if a vehicle has been stuck for more than $10$ minutes, it will teleport along its route until there is free space. It is therefore important, when we evaluate the control strategies, that we keep track of the number of teleports, to make sure that a control strategy does not create a significantly larger number of gridlocks compared to the original fixed time controller. In Table~\ref{tab:lust} the number of teleports is reported for each controller. It is also reported how many of those teleports are caused directly by traffic jams, but one should keep in mind that, e.g., a gridlock caused by two vehicles wanting to swap lanes is often a consequence of congestion.
The total travel time and the number of teleports for different choices of tuning parameters are shown in Table~\ref{tab:lust}. For the fixed time controller, we keep the standard fixed time plan provided with the LuST scenario. How the queue lengths vary with time for different $\bar w$ is shown in Fig.~\ref{fig:lustkappa5} for $\kappa =5$ and in Fig.~\ref{fig:lustkappa10} for $\kappa = 10$.
From the results, we can see that any controller with $\kappa = 10$ and $\bar{w}$ within the range of investigation will improve the traffic situation. However, the controller that yields the overall shortest total travel time is the one with $\kappa = 5$ and $\bar{w} = 0.40$. This result suggests that tuning the GPA only with respect to $\kappa$, while keeping $\bar{w} = 0$, may not lead to the best performance with respect to total travel time, although it gives higher theoretical throughput.
\begin{table}
\caption{Comparison of the different control strategies}
\label{tab:lust}
\centering
\begin{tabular}{lcccc}
& $\kappa$ & $\bar{w}$ & Teleports (jam) & Total Travel Time [h] \\ \hline \hline
GPA & $10$ & $0$ & 76 (6) & 49 791 \\
GPA & $10$ & $0.05$ & 65 (1) & 49 708 \\
GPA & $10$ & $0.10$ & 37 (0) & 49 519 \\
GPA & $10$ & $0.15$ & 57 (19) & 49 408 \\
GPA & $10$ & $0.20$ & 50 (10) & 49 380 \\
GPA & $10$ & $0.25$ & 35 (0) & 49 265\\
GPA & $10$ & $0.30$ & 30 (0) & 48 930\\
GPA & $10$ & $0.35$ & 25 (1) & 48 922\\
GPA & $10$ & $0.40$ & 51 (0) & 48 932 \\
GPA & $10$ & $0.45$ & 49 (5) & 49 076 \\
GPA & $10$ & $0.50$ & 42 (15) & 49 383 \\
GPA & $5$ & $0$ & 668 (76) & 57 249 \\
GPA & $5$ & $0.05$ & 234 (62) & 54 870 \\
GPA & $5$ & $0.10$ & 68 (10) & 52 038 \\
GPA & $5$ & $0.15$ & 47 (9) & 50 696 \\
GPA & $5$ & $0.20$ & 50 (6) & 49 904 \\
GPA & $5$ & $0.25$ & 41 (3) & 49 454 \\
GPA & $5$ & $0.30$ & 23 (0) & 48 964 \\
GPA & $5$ & $0.35$ & 30 (1) & 48 643 \\
GPA & $5$ & $0.40$ & 35 (5) & 48 445 \\
GPA & $5$ & $0.45$ & 39 (1) & 48 503 \\
GPA & $5$ & $0.50$ & 42 (10) & 48 772 \\
Fixed time & -- & -- & 122 (80) & 54 103\\ \hline
\end{tabular}
\end{table}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=8cm, ylabel={Total Queue Length [m] }, xlabel={Time}, legend pos=north west, xmin=0, xmax=24.00, xtick={0, 4, 8, 12, 16, 20, 24},
x filter/.code={\pgfmathparse{#1/3600+0}},
xticklabel={
\pgfmathsetmacro\hours{floor(\tick)}%
\pgfmathsetmacro\minutes{(\tick-\hours)*0.6}%
\pgfmathprintnumber{\hours}:\pgfmathprintnumber[fixed, fixed zerofill, skip 0.=true, dec sep={}]{\minutes}%
},
legend columns=2, legend style={at={(0.5,-0.25)},anchor=north}
]
\addplot[mark=none, color=mycolor1] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.0.csv};
\addplot[mark=none, color=mycolor2] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.1.csv};
\addplot[mark=none, color=mycolor3] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.2.csv};
\addplot[mark=none, color=mycolor4] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.30.csv};
\addplot[mark=none] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.40.csv};
\addplot[mark=none, color=black, dotted] table [x index=0, y index=1]{plotdata/csv/queue_static.csv};
\legend{GPA $\bar{w} = 0$, GPA $\bar{w} = 0.1$, GPA $\bar{w} = 0.2$, GPA $\bar{w} = 0.3$, GPA $\bar{w} = 0.4$, Fixed Time }
\end{axis}
\end{tikzpicture}
\caption{How the queue lengths vary with time when the traffic lights in the LuST scenario are controlled with the GPA controller and the standard fixed time controller. For the GPA controller, the parameter $\kappa = 5$ and different values of $\bar{w}$ are tested. In order to improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:lustkappa5}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=8cm, ylabel={Total Queue Length [m] }, xlabel={Time}, legend pos=north west, xmin=0, xmax=24.00, xtick={0, 4, 8, 12, 16, 20, 24},
x filter/.code={\pgfmathparse{#1/3600+0}},
xticklabel={
\pgfmathsetmacro\hours{floor(\tick)}%
\pgfmathsetmacro\minutes{(\tick-\hours)*0.6}%
\pgfmathprintnumber{\hours}:\pgfmathprintnumber[fixed, fixed zerofill, skip 0.=true, dec sep={}]{\minutes}%
},
legend columns=2, legend style={at={(0.5,-0.25)},anchor=north}
]
\addplot[mark=none, color=mycolor1] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.0.csv};
\addplot[mark=none, color=mycolor2] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.1.csv};
\addplot[mark=none, color=mycolor3] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.2.csv};
\addplot[mark=none, color=mycolor4] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.30.csv};
\addplot[mark=none] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.40.csv};
\addplot[mark=none, color=black, dotted] table [x index=0, y index=1]{plotdata/csv/queue_static.csv};
\legend{GPA $\bar{w} = 0$, GPA $\bar{w} = 0.1$, GPA $\bar{w} = 0.2$, GPA $\bar{w} = 0.3$, GPA $\bar{w} = 0.4$, Fixed Time }
\end{axis}
\end{tikzpicture}
\caption{How the queue lengths vary with time when the traffic lights in the LuST scenario are controlled with the GPA controller and the standard fixed time controller. For the GPA controller, the parameter $\kappa = 10$ and different values of $\bar{w}$ are tested. In order to improve the readability of the results, the queue lengths are averaged over $300$-second intervals.}
\label{fig:lustkappa10}
\end{figure}
\section{Conclusions}
In this paper, we have discussed implementation aspects of the Generalized Proportional Allocation (GPA) controller. The controller's performance was compared to that of the MaxPressure controller both on an artificial Manhattan-like grid and in a realistic scenario. It was shown that the GPA controller performs better than the MaxPressure controller when the demand is low, while MaxPressure performs better during high demand. These observations hold true even if the MaxPressure controller does not have correct information about the turning ratios at each junction.
While information about the turning ratios and the queue lengths at neighboring junctions is needed for the MaxPressure controller, the GPA controller does not require any such information. This makes the GPA controller easier to implement in a real scenario, where the downstream junctions may not be signalized or equipped with sensors. We showed that it is possible to implement the GPA controller in a realistic scenario covering the city of Luxembourg and that it improves the traffic situation compared to a standard fixed time controller.
In all simulations, we have used the same tuning parameters for all junctions in the LuST scenario, while the fixed time controller differs between junction settings. Hence the GPA controller's performance could be improved even further by tuning the parameters specifically for each junction. Ideally, this should be done with some auto-tuning solution, but it may also be worthwhile to take static parameters into account, such as the sensor lengths. This is a topic for future research.
\bibliographystyle{ieeetr}%
\section{INTRODUCTION}
It is generally supposed that the high energy emission from blazars --
ie BL Lac objects and quasars which display some evidence of relativistic
jets -- arises from Compton scattering of low energy seed photons.
However the evidence for this supposition is quite weak.
There has been remarkably little progress, despite a great deal
of observational effort, in determining the details of the high energy
emission models. Various possibilities exist, all of which require
that the scattering particles are the relativistic electrons in the
jet. The most popular hypothesis is the Synchrotron Self-Compton
(SSC) model in which the seed photons are the synchrotron photons from
the jet, up-scattered by their parent electrons. Alternatively the
seed photons may arise externally to the jet (the External Compton,
EC, process) or, in a combination of the two models, photons from the
jet may be mirrored back to the jet (the Mirror Compton, MC, model)
from a gas cloud before scattering up to high energies. The various
models make slightly different predictions about the lags between the
seed and Compton-scattered variations, and about the relative
amplitudes of the two components and so, in principle, the models can
be distinguished (eg see Ghisellini and Maraschi 1996 and Marscher 1996
for summaries of the predictions of the various models). Much
observational effort has therefore been devoted to attempting to find
correlated variability in the high and low energy bands.
\begin{figure*}
\begin{center}
\leavevmode
\epsfxsize 0.8\hsize
\epsffile{xkmmmm.ps}
\end{center}
\caption{X-ray, infrared and millimetre lightcurves.
The X-ray counts are the total from 3 PCUs of the PCA.
The 1.3mm data are from the JCMT (filled circles), with some points from
OVRO (open squares). The 3mm data are all from OVRO. }
\label{fig:lcurves}
\end{figure*}
In the SSC model, it has generally been expected that, as the peak of
the synchrotron photon number spectrum lies in the mm band for most
radio-selected blazars, the mm would provide the bulk of the seed
photons and so would be well correlated with the X-ray emission.
However in the case of 3C273, one of the brightest blazars, extensive
searches have been carried out for a connection between the X-ray and
millimetre bands on both daily (M$\rm^{c}$Hardy\, 1993) and monthly (Courvoisier
\it et al.~\rm 1990; M$\rm^{c}$Hardy\, 1996) timescales but no correlation has been
found. The SSC model may, however, be saved if the flaring synchrotron
component is self-absorbed at wavelengths longer than $\sim1$ mm. We
therefore undertook a search for a correlation between the X-ray
and infrared emission in 3C273; previous observations (eg Courvoisier
\it et al.~\rm 1990; Robson \it et al.~\rm 1993) have confirmed that infrared flares in
3C273 are due to variations in a synchrotron component. In the past,
large amplitude infrared flares have been seen only rarely in 3C273
(eg Courvoisier \it et al.~\rm 1990; Robson \it et al.~\rm 1993), partially because of
limited sampling which usually could not detect flares with overall
timescales $\sim$week. Nonetheless the previous sampling was
sufficient to show that such flare activity is not a continual
occurrence. It may be relevant that the present observations,
during which large amplitude infrared variability was detected,
were made during a period when the millimetre flux from 3C273 was very
high.
Here we present what we believe is the
best sampled observation of correlated variability between the
synchrotron and Compton-scattered wavebands in any blazar. The
observations cover not just one flaring event, which could be due to
chance, unrelated, flaring in the two wavebands, but two large
variations. The observations, including the X-ray, infrared and
millimetre lightcurves, and cross-correlation of the X-ray and other
waveband lightcurves, are described in Section 2. The origin of the
X-ray seed photons is discussed in Section 3, the implications of the
observations are discussed in Section 4 and the overall
conclusions are given in Section 5.
\section{OBSERVATIONS}
\subsection{X-ray Observations}
During the 6 week period from 22 December 1996 to 5 February 1997,
X-ray observations were carried out twice a day by RXTE and nightly near
infrared service observations were made at the United Kingdom Infrared
Telescope (UKIRT).
The X-ray observations were made with the large area (0.7 m$^{2}$)
Proportional Counter Array (PCA) on RXTE (Bradt, Rothschild
and Swank 1993). Each observation lasted for
$\sim1$ksec. The PCA is a non-imaging device with a field of view of
FWHM $\sim1^\circ$ and so the background count rate was calculated
using the RXTE q6 background model. Standard selection
criteria were applied to reject data of particularly high background
contamination.
3C273 is detectable in each observation in the energy range 3-20 keV
and its spectrum is well fitted by a simple power law. As with other
PCA spectra (eg The Crab --see
http://lheawww.gsfc.nasa.gov/users/keith/pcarmf.html) the measured
energy index, $\alpha$=0.7, is 0.1 steeper than measured by previous
experiments, eg GINGA (Turner \it et al.~\rm 1990). The X-ray spectra, and
spectral variability during the present observations are discussed in
detail by Lawson \it et al.~\rm (in preparation). The average count rate of 45 counts
s$^{-1}$ (3-20 keV) (the total for 3 of the proportional counter units,
PCUs, of the PCA) corresponds to a flux of $1.5 \times 10^{-10}$ ergs
cm$^{-2}$ s$^{-1}$ (2-10 keV).
In figure~\ref{fig:lcurves} we present the count rate in the 3-20 keV
band. We see two large X-ray flares. The first flare begins on
approximately 1 January 1997, reaches a peak on 4 January and returns
to its pre-flare level on 10 January. The flare is quite smooth. The
second flare begins on 22 January and lasts until approximately 1
February. The initial rise is faster than that of the first flare, and
the overall shape indicates a superposition of a number of smaller
flares. X-ray spectral variations are seen during the flares (Lawson
\it et al.~\rm in preparation), showing that changes in the Doppler factor of the jet
cannot, alone, explain the observed variability.
\subsection{Infrared and Millimetre Observations}
In figure~\ref{fig:lcurves} we show 1.3 and 3 mm observations from the
James Clerk Maxwell Telescope (JCMT - see Robson \it et al.~\rm 1993 for
reduction details) and from the Owens Valley Radio Observatory
(OVRO); the latter data were obtained from the calibration
database. There is no evidence of flares of comparable amplitude to
those in the X-ray lightcurve, but the sampling is poorer and the
errors are larger.
We also show the K-band lightcurve derived from service observations
at the United Kingdom Infrared Telescope (UKIRT) from 1 January until
3 February 1997. The observations were made with the infrared imaging
camera IRCAM3 with typical exposures of 3 minutes. The observations
were made in a standard mosaic manner and the data were also reduced
in a standard manner. There are some gaps due to poor weather but
increases in the infrared flux at the same time as the X-ray flares
can be seen clearly. The average K error is $\sim1$mJy (ie 1 per
cent). Approximately half of the error comes from the Poisson noise
and the rest comes from calibration uncertainties.
\subsection{X-ray/Infrared Cross-Correlation}
We have cross-correlated the X-ray lightcurves with the millimetre and
K-band lightcurves using the Edelson and Krolik (1988) discrete
cross-correlation algorithm as coded by Bruce Peterson (private communication).
As found previously there is no correlation of the
X-ray emission with the millimetre emission but there is a very strong
correlation with the infrared emission (figure~\ref{fig:xcor}) with
correlation coefficient close to unity. The cross-correlation peaks
close to zero days lag but is asymmetric. Although we can rule out the
infrared lagging the X-rays by more than about one day, a lag of the
infrared by the X-rays by up to 5 days is possible.
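
For reference, a bare-bones version of the discrete correlation function (our own simplified Python sketch, which ignores the error-weighting of the full Edelson and Krolik prescription and is not the code actually used here) is
\begin{verbatim}
import numpy as np

def discrete_correlation(t_a, a, t_b, b, lag_bins):
    """Simplified discrete correlation function for unevenly sampled data.

    t_a, a : times and fluxes of the first lightcurve (eg X-ray)
    t_b, b : times and fluxes of the second lightcurve (eg K band)
    lag_bins : bin edges for the lag (t_b - t_a)
    """
    ua = (a - a.mean()) / a.std()
    ub = (b - b.mean()) / b.std()
    lags = t_b[None, :] - t_a[:, None]      # all pairwise lags
    udcf = ua[:, None] * ub[None, :]        # unbinned correlations
    dcf = np.empty(len(lag_bins) - 1)
    for i in range(len(lag_bins) - 1):
        mask = (lags >= lag_bins[i]) & (lags < lag_bins[i + 1])
        dcf[i] = udcf[mask].mean() if mask.any() else np.nan
    return dcf
\end{verbatim}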
The observations presented here are the first to show a definite
correlation in 3C273 between the X-ray emission and that of any
potential seed photons.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{xir_0.5.ps}
\end{center}
\caption{Cross-correlation of the 3-20 keV X-ray lightcurve and
K-band lightcurves shown in figure~\ref{fig:lcurves}.
}
\label{fig:xcor}
\end{figure}
\section{THE ORIGIN OF THE X-RAY SEED PHOTONS}
An important question is whether the infrared photons are actually the
seed photons for the X-ray emission or whether they are simply tracers
of a more extended spectral continuum, with the X-rays arising from
scattering of another part of the continuum. Robson \it et al.~\rm (1993) state
that in 3C273 the onset and peak of flares occur more or less
simultaneously (ie lags of $<1$ day) from K-band to 1.1 mm.
Therefore although we have not adequately monitored at wavelengths
longer than 2.2$\mu$, we assume that the whole IR to mm continuum does
rise simultaneously.
We have therefore calculated the Compton scattered spectrum resulting
from the scattering of individual decades of seed photon energies,
from the infrared to millimetre bands. The seed photons are taken from
a typical photon distribution and are scattered by a typical electron
distribution. The resulting scattered spectra are shown in
figure~\ref{fig:scatter} and details of the photon and electron
distributions are given in the caption to figure~\ref{fig:scatter}.
It is assumed that the emission region is optically thin which, in
blazars, is true for the large majority of frequencies discussed in
figure~\ref{fig:scatter}. Note that although the electron and input
photon spectra are self-consistent as regards the SSC mechanism, the
result is general and applies to scattering of seed photons produced
by any mechanism. At the highest Compton scattered energies, ie GeV,
only the highest energy seed photons below the break in the photon
distribution (ie near infrared) are important. However at medium
energy X-rays we get approximately equal contributions from each
decade of seed photons. Thus scattered infrared photons probably
contribute about 20 per cent of the medium energy X-ray flux and the sum of
the scattered X-ray emission from lower energy seed photons exceeds
that from the infrared alone. These ratios can be altered slightly by
different choices of seed photon and electron spectral index, but the
general result is robust.
If the infrared is indeed a tracer of the seed photon continuum, we
can extrapolate to find the expected variability in the millimetre
band. The peak and minimum observed K fluxes during our observations
are 124 and 93 mJy respectively, ie a range of 31 mJy, although we
note that we do not have K observations at either the peak or minimum
of the X-ray lightcurves and so the true range of K-band variability
may be somewhat more. If the spectral index, $\alpha$, of the seed
spectrum is 0.75 (as reported by Robson \it et al.~\rm and Stevens \it et al.~\rm 1998)
we would then expect a rise of $\sim$3.7 Jy at 1.3 mm, which we cannot
rule out in the present observations and which would not have been
easy to detect in previous, less well sampled, monitoring
observations, explaining the lack of success of previous searches for
millimetre/X-ray correlations. At 3mm the predicted variability
amplitude would be 7 Jy. Robson \it et al.~\rm state that the 3mm rises lag
1mm rises by about 6 days, and 3mm decays are substantially longer,
which would all make them easier to detect, given our sampling
pattern. However, with the exception of the very last datapoint at
day 44, no deviations of more than 5 Jy from the mean level are
detected. The implication is that $\alpha \leq 0.75$ or that the
flaring component is self absorbed by 3mm. If the flaring component
has $\alpha=1.2$ as derived for the 1983 flare by Marscher and Gear
(1985), that component would have to be self absorbed by 1.3mm.
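
Spelling out the extrapolation used above: for a component with
$F_{\nu} \propto \nu^{-\alpha}$, an observed infrared variation
$\Delta F_{K}$ implies a variation at frequency $\nu$ of
\[ \Delta F_{\nu} = \Delta F_{K} \left( \frac{\nu_{K}}{\nu} \right)^{\alpha}, \]
so that, with $\Delta F_{K}=31$ mJy, $\alpha=0.75$ and
$\nu_{K}/\nu \simeq 590$ at 1.3 mm, one recovers the $\sim$3.7 Jy rise
quoted above.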
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{compton.ps}
\end{center}
\caption{Compton scattered spectrum resulting from the scattering of
a seed photon spectrum stretching from $\nu=10^{8}$ to $10^{20}$ Hz.
At low frequencies the spectral index, $\alpha$, where flux,
$F, \propto \nu^{-\alpha}$, is 0.75 and, above a break frequency
of $10^{14}$ Hz, $\alpha = 1.5$. The electron number energy spectrum,
$N(\gamma) \propto \gamma^{-m}$, where $\gamma$ is the Lorentz factor
of the individual electrons, stretches from $\gamma=10$ to $10^{7}$,
with a slope, $m$, at low energies of 2.5 and $m=4.0$ above
$\gamma=10^{4}$. The proper Klein-Nishina cross section is used.
No bulk relativistic motion is included.
The thick line represents the total scattered spectrum. The
other lines represent the result of scattering seed photons with
only one decade of energy.
Note that, in the medium energy X-ray band (4 keV= $10^{18}$ Hz), seed
photons from all decades from cm to near infrared contribute equally
to the scattered flux, with each contributing about 20 per cent.}
\label{fig:scatter}
\end{figure}
\section{DISCUSSION}
There are two major observational constraints on the X-ray emission
mechanism: the relative amplitudes of the synchrotron and Compton
scattered components, and the time lag between them. Here we
attempt to constrain these parameters by modelling the X-ray lightcurve.
\subsection{Modelling the X-ray lightcurve}
If the X-ray emission is physically related to the infrared emission,
then we can parameterise the relationship by:
\[ X_{predicted}(t)= A \, (K_{flux}(t-\delta t) - K_{quiescent})^{N}
\, + \, X_{quiescent} \]
$K_{quiescent}$ is a non-varying K-band component. Robson \it et al.~\rm (1993)
show that such a component, steady on a timescale of years, is
probably contributed by warm dust in the broad line clouds,
heated to the point of evaporation.
Following Robson \it et al.~\rm we fix $K_{quiescent}=50$mJy.
$K_{flux}(t-\delta t)$ is the total observed K-band flux at time
$t-\delta t$ and $X_{predicted}(t)$ is then the predicted total X-ray
flux at time $t$. $X_{quiescent}$ is the part of the X-ray flux which
does not come from the flaring region. The variable $\delta t$ is
included to allow for lags between the X-ray and infrared variations.
Initially we set $\delta t = 0$ but, in section 4.2, we consider the
implications of allowing $\delta t$ to vary.
$A$ is the constant of proportionality
(containing information about the electron
density, magnetic field and the various flux conversion constants)
and $N$ contains information about the emission mechanism. For
example if the X-rays arise from variations in electron density then
we expect $N=2$ in the SSC and MC processes, but in the EC model $N=1$.
We have therefore performed a $\chi^{2}$ fit, using a standard
Levenberg-Marquardt minimisation routine, comparing the predicted
X-ray flux with the observed flux, in order to determine the three
unknowns, $A$, $X_{quiescent}$ and $N$. The errors on the predicted
X-ray flux are derived from the observed errors on the infrared flux.
The present infrared lightcurve is not well enough sampled to
determine all 3 parameters independently but, if $X_{quiescent}$ could
be determined precisely from other observations, then we could
determine $N$ to $\pm0.2$. Here $N$ varies from 0.5 for
$X_{quiescent}=0$ to 1.0 for $X_{quiescent}=23$ and 2.0 for
$X_{quiescent}=35$. The minimum observed value of the total X-ray
count rate during the present observations was 35 count s$^{-1}$.
Hence as some part of those 35 count s$^{-1}$ almost certainly comes
from X-ray components which are not associated with the flaring
activity, eg a Seyfert-like nucleus or other parts of the jet, then
the maximum allowed value of $N$ is probably just below 2. Typical
RXTE count rates outside of major flaring periods are in the range
20-25 counts s$^{-1}$ and fluxes observed by previous satellites (eg
see Turner \it et al.~\rm 1990) correspond to the same flux range. If that
count rate represents the true value of $X_{quiescent}$, then $N$ is
probably nearer unity, favouring EC models, or SSC or MC models in
which variations in the magnetic field strength play an important part
in flux variations.
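A minimal sketch of such a fit (illustrative only, and not the script used
for our analysis: synthetic data stand in for the observed lightcurves, and
the generating parameters are the best-fit values quoted later in the
caption of figure~\ref{fig:xpred}) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
K = rng.uniform(93.0, 124.0, size=40)          # stand-in K-band fluxes (mJy)
A0, N0, Xq0 = 0.47, 0.98, 22.1                 # generating parameters
X = A0 * (K - 50.0) ** N0 + Xq0 + rng.normal(0.0, 0.5, K.size)

def model(K, A, N, Xq, Kq=50.0):
    # X_predicted = A (K_flux - K_quiescent)^N + X_quiescent,
    # with K_quiescent fixed at 50 mJy as above.
    return A * (K - Kq) ** N + Xq

popt, pcov = curve_fit(model, K, X, p0=[0.5, 1.0, 20.0],
                       sigma=np.full(K.size, 0.5))
print(popt, np.sqrt(np.diag(pcov)))
# With K spanning only 93-124 mJy, A, N and X_quiescent come out strongly
# correlated, which is the degeneracy discussed in the text.
\end{verbatim}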
\subsection{Implications of lightcurve modelling for lags}
Comparison of the best-fit predicted and observed X-ray fluxes reveals
that, in the first flare, the predicted fluxes exceed the observed
fluxes on the rise and the reverse is true on the fall. A better fit,
at least for the first flare, occurs if the predicted lightcurve is
delayed by about a day (in other words, the observed IR leads the
X-rays). We therefore introduced a variety of time shifts, $\delta
t$ above,
into the IR lightcurve, and also separately considered the first
and second flares, and refitted. We applied simple linear
interpolation to estimate the IR flux at the exact (shifted) time of
the X-ray observations. The results are shown in
figure~\ref{fig:lagschi}.
When considering all of the IR data, we obtain a plot (top panel of
figure~\ref{fig:lagschi}) which is rather similar to the
cross-correlation plot (figure~\ref{fig:xcor}), which is not too
surprising as the analysis techniques are similar, although the
modelling in principle allows us to quantify the goodness of fit.
We are cautious of overinterpreting the above datasets and so we
prefer to plot figure~\ref{fig:lagschi} in terms of raw $\chi^{2}$
rather than probabilities which might be taken too literally. As in
many analyses where the errors are small, slight (real) differences in
data streams lead to low probabilities of agreement even though
overall agreement is very good. Here a minor variation in either
X-rays or IR from a region not associated with the flare could
provide that small difference. However the change in relative
goodness of fit can be easily seen from the $\chi^{2}$ plots.
When we consider separately the IR data from the first flare
(ie the 11 data points up to day 20 of 1997), or from the second flare
(the remaining 5 data points) we obtain much better fits. We find
that the first flare is best fitted if the IR leads the X-rays by
about 0.75 days. We are again cautious in ascribing exact errors
to the lag but changes of $\delta \chi^{2}$ of 6.4, corresponding
to 40 per cent confidence, occur in the first flare at 0.25 days from
the minimum value. A lag of the X-rays by the IR by less
than 0.25 days is ruled out at the 99.97 per cent confidence level.
The more limited data of the second flare is, however,
best fitted by simultaneous IR and X-ray variations.
Again with caution, we note that Lawson \it et al.~\rm (in preparation) find
different X-ray spectral behaviour between the two flares. In the
first flare the spectrum hardens at the flare onset but, at the peak,
the spectrum is softer than the time averaged spectrum; in the second
flare the hardness tracks the flux quite closely with the hardest
emission corresponding to the peak flux. Thus there do appear to be
differences between the two flares. However whether the observed
differences are due to differences in, for example the physical
parameters of the emitting region (eg density, magnetic field
strength), the strength of any exciting shock, or the geometry of the
emitting regions, is not yet clear but is an interesting subject
for future investigations.
Although not really intended for such analysis, blind application of
Keith Horne's Maximum Entropy Echo Mapping software to the whole
dataset also leads to the conclusion that the IR leads the X-rays by
0.75 days (Horne, private communication).
As an example we show, in figure~\ref{fig:xpred}, the observed X-ray
lightcurve and the predicted lightcurve, based on parameters derived
from fitting to just the first flare with the IR leading by 0.75 days
(the best fit). We see that such a lag does not fit the second flare
well. In particular the predicted X-ray fluxes for the second flare
all lie above the observed fluxes by about 4 counts s$^{-1}$ and the
predicted fluxes now slightly lag (by about half a day) the observed
fluxes. One possible explanation of the excess is that $X_{quiescent}$
is lower during the second flare. From our long term weekly
monitoring (in preparation) we note that the two flares shown here are
actually superposed on a slowly decreasing trend of the correct slope
to explain the excess. Inclusion of such a trend into our fitting
procedure does produce a slightly better fit for the overall dataset,
but the different lags between the first and second flare still
prevent a good overall fit from being obtained. We therefore favour
the explanation that the long term lightcurve is actually made
up of a number of short timescale (week) flares, superposed on a more
slowly varying (months) `quiescent' component, rather than proposing
that the lightcurve is made up entirely of short flares, with no
underlying `quiescent' component.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{lagschi.ps}
\end{center}
\caption{
Results of comparing the observed X-ray lightcurve with that predicted
from the infrared variations, with all parameters allowed to remain
free apart from the X-ray/infrared lag. The numbers of degrees of
freedom are 13 (both flares), 8 (first flare) and 2 (second flare).
Note that it is impossible
to obtain a good fit to both X-ray flares simultaneously but acceptable
fits can be obtained to each flare individually. However the lags are
different for the two flares with the X-rays lagging the infrared by
$\sim0.75$ days in the first flare but the X-rays and infrared being
approximately simultaneous in the second flare.
}
\label{fig:lagschi}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{xpredict.ps}
\end{center}
\caption{Observed X-ray lightcurve (histogram) and the best fit
predicted X-ray flux (filled squares) based on the parameters derived
from fitting the infrared observations to the first flare (11 data points)
only. The best-fit parameters are $A=0.47$, $N=0.98$ and $X_{quiescent}=22.1$.
Following from figure~\ref{fig:lagschi} the lead of the
IR over the observed X-rays is fixed at 0.75 days,
the best-fit value for the first flare.
The observed X-ray errorbars (see figure~\ref{fig:lcurves})
are not repeated here to avoid cluttering the diagram.
Note how the predicted X-ray fluxes for the second `flare' are then
systematically overestimated and also slightly lag the observed
X-ray fluxes.}
\label{fig:xpred}
\end{figure}
\subsection{The X-ray Emission Mechanism}
The similarity between the present infrared variations and previous ones
in which the whole IR to mm continuum varied
together (Robson \it et al.~\rm 1993), and the lack of any other likely source
of rapidly variable infrared radiation, means that the varying
component of the infrared flux is almost certainly synchrotron
radiation from the jet. The very strong correlation between the X-ray
and infrared lightcurves shows that the same electrons which produce
the infrared synchrotron emission must also produce the scattered
X-ray emission. The original version of the EC
model (Dermer and Schlickeiser 1993)
in which the high energy variations are
caused by variations in the external seed photons is thus ruled out.
The next version, in which the electrons in the jet which produce the
infrared synchrotron emission also scatter an all-pervading ambient
nuclear photon field (Sikora, Begelman and Rees 1994)
is also ruled out, at least for the first flare,
as we would then expect exactly simultaneous X-ray and
infrared variations.
The remaining possible emission mechanisms are the SSC process,
which must occur at some level, and the MC process. In the SSC process
we expect, for moderate variations such as those observed here where
the emission region probably remains optically thin, that the X-ray
flares will lag the IR flares (in the source frame) by approximately
the light travel time across the radius of the emission region. The lag is
because most photons will not be scattered where they
are produced but will typically travel the radius of the emission
region before being scattered. In this model we can therefore deduce
the radius if we know the bulk Lorentz factor of the jet.
In the MC model the low energy photons also lead the high energy
photons, in this case by approximately the light travel time between
the emission region in the jet and the cloud.
If the cloud forms part of the broad line region we
might reasonably expect lags of order days.
The EC model is ruled out by the IR/X-ray lag but both the SSC and MC
models are consistent with the lag. The parameter $N$ is not yet well
defined but the present indications are that it is closer to 1 than to
2, which, for the SSC and MC models, implies that changes in magnetic
field strength are at least partially responsible for the observed
variations. The MC Compton scattered flux has a higher dependence on
the bulk Lorentz factor of the jet than does the SSC mechanism, but
that factor is very hard to measure.
\section{CONCLUSIONS}
We have demonstrated, for the first time in 3C273, a strong relationship
between the X-ray emission and the emission in any lower frequency band.
We have shown that the IR and X-ray emission in 3C273 are very
strongly correlated. By means of a simple calculation we have shown that
each decade of the synchrotron spectrum from the cm to IR bands probably
contributes equally (at about 20 per cent per decade) to the Compton scattered
X-ray flux. Overall the lag between the IR and X-ray bands is very small
but, in at least the first flare, the IR
leads the X-ray emission by $\sim0.75\pm0.25$ days.
This lag rules out the EC model but is consistent with either the
SSC or MC model.
We have attempted to measure the parameter $N$ which determines the
relationship between the seed photon and Compton
scattered flux. The present data do not greatly constrain $N$ although
they indicate that 2 is the absolute upper limit and that a lower
value is probable. In terms of the SSC or MC models the implication is
that changes in the magnetic field strength are responsible for
at least part of the observed variations and, for $N=1$, could
be responsible for all of the variations.
Because of their intrinsic similarity, the SSC and MC models
are hard to distinguish. However if it were possible to measure
IR/X-ray lags for a number of flares, of similar amplitude, in the
same source, then in the SSC model one would expect broadly similar
lags in each case, assuming that the emission comes from physically
similar emission regions. However in the MC model the reflecting
clouds will probably be at a variety of different distances and
so the lags should be different in each case.
We may also examine variations in optical and UV emission line
strength. If synchrotron radiation from the jet is irradiating
surrounding clouds (MC process), then we would expect the resultant
recombination line radiation to vary with similar amplitude to, and
simultaneously with, the synchrotron emission. However in
the SSC process we would expect no change in emission line
strength.
Further X-ray/IR observations with $\sim$few hour time resolution
are required to refine the lag found here and to determine whether
the lag is different in different flares.
\\
{\bf Acknowledgements} We are very pleased to thank the management and
operational staff of both RXTE and UKIRT for their cooperation in
scheduling and carrying out these observations.
We thank Keith Horne for running our data through his MEMECHO software.
IM$\rm ^{c}$H thanks PPARC for grant support
and APM was
supported in part by NASA Astrophysical Theory Grant NAG5-3839.
\subsection*{Abstract.}
``Dual composition'', a new method of constructing energy-preserving
discretizations of conservative PDEs, is introduced. It extends
the summation-by-parts approach to arbitrary differential operators
and conserved quantities. Links to pseudospectral, Galerkin,
antialiasing, and Hamiltonian methods are discussed.
\medskip
\subsection*{1. Introduction}
For all $u,v\in C^1([-1,1])$,
$$
\int_{-1}^1 v \partial_x w\, dx =
-\int_{-1}^1 w \partial_x v\, dx + [vw]_{-1}^1,
$$
so the operator $\partial_x$ is skew-adjoint on $\{v\in C^1([-1,1]):
v(\pm1)=0\}$ with respect to the $L^2$ inner product $\ip{}{}$. Take $n$
points $x_i$, a real function $v(x)$, and
estimate $v'(x_i)$ from the values $v_i := v(x_i)$. In vector
notation, ${\mathbf v}' = D {\mathbf v}$, where $D$ is a differentiation matrix.
Suppose that
the differentiation matrix has the form $D = S^{-1}A$, in which $S$
induces a discrete approximation
$$\ip{{\mathbf v}}{{\mathbf w}}_S := {\mathbf v}^{\mathrm T} S {\mathbf w}\approx \int vw\,dx=\ip{v}{w},$$
of the inner product. Then
\begin{equation}
\label{byparts}
\ip{{\mathbf v}}{D{\mathbf w}}_S + \ip{D{\mathbf v}}{{\mathbf w}}_S = {\mathbf v}^{\mathrm T} S S^{-1} A {\mathbf w} + {\mathbf v}^{\mathrm T} A^{\mathrm T}
S^{-\mathrm T} S {\mathbf w} = {\mathbf v}^{\mathrm T}(A+A^{\mathrm T}){\mathbf w},
\end{equation}
which is zero if $A$ is antisymmetric
(so that $D$ is skew-adjoint with respect to $\ip{\,}{}_S$),
or equals $[vw]_{-1}^1$ if $x_1=-1$, $x_n=1$, and
$A+A^{\mathrm T}$ is zero except for $A_{nn}=-A_{11}=\frac{1}{2}$.
Eq. (\ref{byparts}) is known as a ``summation by parts'' formula;
it affects the energy flux of methods built from $D$.
More generally, preserving structural features such as skew-adjointness
leads to natural and robust methods.
Although factorizations $D=S^{-1}A$ are ubiquitous in finite element
methods, they have been less studied elsewhere. They were introduced
for finite difference methods in \cite{kr-sc} (see \cite{olsson} for
more recent developments) and for spectral methods in \cite{ca-go}, in which
the connection between spectral collocation and Galerkin methods was used
to explain the skew-adjoint structure of some differentiation matrices.
Let ${\operator H}(u)$ be a continuum conserved quantity, the {\em energy.}
We consider PDEs
\begin{equation}
\label{eq:hamilt_pde}
\dot u = {\operator D}(u)\frac{\delta\H}{\delta u}
\mbox{,}
\end{equation}
and corresponding ``linear-gradient'' spatial discretizations
\cite{mclachlan2,mclachlan1,mqr:prl}, ODEs of
the form
\begin{equation}
\label{eq:lin_grad}
\dot {\mathbf u} = L({\mathbf u}) \nabla H({\mathbf u})
\end{equation}
with appropriate discretizations of $u$, ${\operator D}$, ${\operator H}$, and
$\delta/\delta u$. For a PDE of the form (\ref{eq:hamilt_pde}), if
${\operator D}(u)$ is formally skew-adjoint, then $d{\operator H}/dt$ depends only on the
total energy flux through the boundary; if this flux
is zero, ${\operator H}$ is an integral. Analogously, if
(\ref{eq:lin_grad}) holds, then
$\dot H = \frac{1}{2}(\nabla H)^{\mathrm T} (L+L^{\mathrm T}) \nabla H$,
so that $H$ cannot increase if the symmetric part of $L$ is
negative definite, and $H$ is an integral if $L$ is antisymmetric.
Conversely, all systems with an integral can be written in
``skew-gradient'' form ((\ref{eq:lin_grad}) with $L$ antisymmetric)
\cite{mqr:prl}.
Hamiltonian systems are naturally in the form
(\ref{eq:hamilt_pde}) and provide examples.
This paper summarizes \cite{mc-ro}, which contains
proofs and further examples.
\subsection*{2. Discretizing conservative PDEs}
In (\ref{eq:hamilt_pde}), we want to allow constant operators such as
${\operator D}=\partial_x^n$ and ${\operator D} = \left(
\begin{smallmatrix}0 & 1 \\ -1 & 0\\ \end{smallmatrix}
\right)$, and nonconstant ones such as
${\operator D}(u) = u\partial_x + \partial_x u$.
These differ in the class of functions and boundary conditions which make
them skew-adjoint, which suggests Defn. 1 below.
Let (${\functionspace F},\ip{}{})$ be an inner product space.
We use two subspaces ${\functionspace F}_0$ and ${\functionspace F}_1$ which can be infinite dimensional
(in defining a PDE) or finite dimensional (in defining a discretization).
We write $\{f_j\}$ for a basis of
${\functionspace F}_0$, $\{g_j\}$ for a basis of ${\functionspace F}_1$, and expand $u=u_j f_j$,
collecting the coefficients $(u_j)$ into a vector ${\mathbf u}$.
A cardinal basis is one in which $f_j(x_i) = \delta_{ij}$, so that
$u_j = u(x_j)$.
\begin{definition}
A linear operator
$${\operator D}: {\functionspace F}_0\times {\functionspace F}_1 \to {\functionspace F},\quad {\operator D}(u)v\mapsto w\mbox{,}$$
is {\em formally skew-adjoint} if there is a functional $b(u,v,w)$,
depending only on the boundary values of $u$, $v$, and $w$ and their
derivatives up to a finite order, such that
$$
\ip{v}{{\operator D}(u)w} = -\ip{w}{{\operator D}(u)v}+b(u,v,w)\quad \forall\, u\in {\functionspace F}_0
,\ \forall\, v,w\in {\functionspace F}_1 .
$$
${\functionspace F}_1$ is called a {\em domain of interior skewness} of ${\operator D}$.
If $b(u,v,w) = 0$ $\forall\,u\in{\functionspace F}_0$, $\forall\,v,w\in{\functionspace F}_1$,
${\functionspace F}_1$ is called a {\em domain of skewness} of ${\operator D}$,
and we say that ${\operator D}$ is skew-adjoint.
\end{definition}
\begin{example}\rm Let ${\functionspace F}^{\rm pp}(n,r) = \{u\in C^r([-1,1]):u|_{[x_i,x_{i+1}]}
\in {\functionspace P}_n\}$ be the piecewise polynomials of degree $n$ with $r$ derivatives.
For ${\operator D}=\partial_x$,
${\functionspace F}^{\rm pp}(n,r)$, $n,\ r\ge 0$, is a domain of interior
skewness, i.e., continuity suffices,
and $\{u\in{\functionspace F}^{\rm pp}(n,r):u(\pm 1)=0\}$ is a domain of skewness.
\end{example}
\begin{example}\rm
With ${\operator D}(u) = 2(u\partial_x + \partial_x u) + \partial_{xxx}$, we have
$$
\ip{v}{{\operator D}(u)w}+\ip{w}{{\operator D}(u)v} = [w_{xx}v - w_x v_x + w v_{xx} + 2 uvw],$$
so suitable domains of interior skewness are ${\functionspace F}_0 = {\functionspace F}^{\rm
pp}(1,0)$, ${\functionspace F}_1={\functionspace F}^{\rm pp}(3,2)$, i.e., more smoothness is required
from $v$ and $w$ than from $u$.
A boundary condition which makes ${\operator D}(u)$ skew is $\{v:
v(\pm 1)=0,\ v_x(1)=v_x(-1) \}$.
\end{example}
\begin{definition} ${\functionspace F}_0$ is
{\em natural for ${\operator H}$} if $\forall u \in {\functionspace F}_0$ there exists
$\frac{\delta {\operator H}}{\delta u}\in{\functionspace F}$ such that
\[
\lim_{\varepsilon\rightarrow 0}
\frac{ {\operator H}(u+\varepsilon v) - {\operator H}(u) }{ \varepsilon }
= \ip{v}{\frac{\delta {\operator H}}{\delta u}}
\quad \forall\, v\in{\functionspace F}
\mbox{.}
\]
\end{definition}
The naturality of ${\functionspace F}_0$ often follows from the vanishing of the
boundary terms, if any, which appear in the first variation of ${\operator H}$,
together with mild smoothness assumptions.
We use appropriate
spaces ${\functionspace F}_0$ and ${\functionspace F}_1$ to generate spectral, pseudospectral, and
finite element discretizations which have discrete energy
$H:={\operator H}|_{{\functionspace F}_0}$ as a conserved quantity. The discretization of the
differential operator ${\operator D}$ is a linear operator $\overline{\operator D}
:{\functionspace F}_1\to{\functionspace F}_0$, and the discretization of the variational derivative
$\frac{\delta\H}{\delta u}$ is $\overline{\frac{\delta\H}{\delta u}}\in{\functionspace F}_1$.
Each of $\overline {\operator D}$ and $\overline{\frac{\delta\H}{\delta u}}$ is a weighted residual
approximation \cite{finlayson}, but each uses spaces of
weight functions different from its space of trial functions.
\begin{definition}
$S$ is the matrix of $\ip{}{}|_{{\functionspace F}_0\times{\functionspace F}_1}$, i.e.
$S_{ij} := \ip{f_i}{g_j}$.
$A(u)$ is the matrix of the linear operator ${\operator A}:(v,w)\mapsto\ip{v}{{\operator D}(u)w}$,
i.e. $A_{ij}(u) := \ip{g_i}{{\operator D}(u)g_j}$.
\end{definition}
\begin{proposition}
Let ${\functionspace F}_0$ be natural for ${\operator H}$ and let $S$ be nonsingular. Then for
every $u\in{\functionspace F}_0$ there is a unique element
$\overline{\frac{\delta {\operator H}}{\delta u}}\in{\functionspace F}_1$ such that
\[
\ip{w}{\overline{\frac{\delta {\operator H}}{\delta u}}} =
\ip{w}{\frac{\delta {\operator H}}{\delta u}} \quad \forall\, w\in{\functionspace F}_0
\mbox{.}
\]
Its coordinate representation is $S^{-1}\nabla H$ where $H({\mathbf u}):={\operator H}(u_i f_i)$.
\end{proposition}
\begin{proposition}
\label{prop:D}
Let $S$ be nonsingular. For every $v\in{\functionspace F}_1$, there exists a
unique element $\overline{\D}v\in{\functionspace F}_0$ satisfying
\[
\ip{\overline{\D}v}{w} = \ip{{\operator D} v}{w} \quad \forall\, w\in{\functionspace F}_1 \mbox{.}
\]
The map $v\mapsto\overline{\D}v$ is linear, with matrix representation $D:=S^{-\mathrm T} A$.
\end{proposition}
\begin{definition}
$\overline{{\operator D}}\overline{\frac{\delta{\operator H}}{\delta u}}:{\functionspace F}_0\to{\functionspace F}_0$
is the {\em dual composition discretization} of
${\operator D}\frac{\delta{\operator H}}{\delta u}$.
\end{definition}
Its matrix representation is $S^{-\mathrm T} A S^{-1} \nabla H$.
The name ``dual composition'' comes from the dual roles played
by ${\functionspace F}_0$ and ${\functionspace F}_1$ in defining $\overline{{\operator D}}$
and $\overline{\frac{\delta\H}{\delta u}}$
which is necessary so that their composition has the required
linear-gradient structure.
Implementation and accuracy of
dual composition and Galerkin discretizations are similar. Because
they coincide in simple cases, such methods are widely used already.
\begin{proposition}
If ${\functionspace F}_1$ is a domain of skewness, the matrix $S^{-\mathrm T} A S^{-1}$
is antisymmetric, and the system of ODEs
\begin{equation}
\label{eq:disc}
\dot{\mathbf u}
=
S^{-\mathrm T} A S^{-1} \nabla H
\end{equation}
has $H$ as an integral. If, in addition, ${\operator D}$ is constant---i.e.,
does not depend on $u$---then the system (\ref{eq:disc}) is Hamiltonian.
\end{proposition}
The dual composition method also yields
discretizations of linear differential operators ${\operator D}$ (by taking
${\operator H}=\frac{1}{2}\ip{u}{u}$), and discretizations of variational
derivatives (by taking ${\operator D}=1$).
It also applies to formally {\em self}-adjoint
${\operator D}$'s and to mixed (e.g. advection-diffusion) operators, where
preserving symmetry gives control of the energy.
The composition of two weighted residual discretizations is not
necessarily itself of weighted residual type. The simplest case is
when ${\functionspace F}_0={\functionspace F}_1$ and we compare the dual composition to the
{\em Galerkin discretization}, a weighted
residual discretization of ${\operator D} \frac{\delta {\operator H}}{\delta u}$ with
trial functions and weights both in ${\functionspace F}_0$. They are the same when
projecting $\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$, applying ${\operator D}$, and
again projecting to ${\functionspace F}_0$, is equivalent to directly projecting
${\operator D}\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$.
For brevity, we assume ${\functionspace F}_0={\functionspace F}_1$ for the rest of Section 2.
\begin{proposition}
\label{prop:galerkin}
$\overline{{\operator D}}\overline{\frac{\delta\H}{\delta u}}
$ is the Galerkin approximation of
${\operator D} \frac{\delta\H}{\delta u}$ if and only if
$ {\operator D} \big( \overline{\frac{\delta\H}{\delta u}} - \frac{\delta\H}{\delta u} \big) \perp {\functionspace F}_0.$
This occurs if
(i) ${\operator D}({\functionspace F}_0^\perp)\perp{\functionspace F}_0$, or
(ii) $\overline{\operator D}$ is exact and applying ${\operator D}$ and orthogonal
projection to ${\functionspace F}_0$ commute, or
(iii) $\overline{\frac{\delta {\operator H}}{\delta u}}$ is exact,
i.e., $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$.
\end{proposition}
Fourier spectral methods with ${\operator D}=\partial_x^n$ satisfy (ii), since
then ${\functionspace F}$ has an orthogonal
basis of eigenfunctions ${\mathrm e}^{ijx}$ of ${\operator D}$, and differentiating
and projecting (dropping the high modes) commute. This is illustrated
later for the KdV equation.
The most obvious situation in which $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$ is when
${\operator H}=\frac{1}{2}\ip{u}{u}$, since then
$\frac{\delta\H}{\delta u}=u\in{\functionspace F}_0$ and ${\operator D}\frac{\delta {\operator H}}{\delta u}={\operator D} u$,
and the discretization of ${\operator D}$ is obviously the Galerkin one!
When the functions $f_j$ are nonlocal, $D$ is often called the
spectral differentiation matrix. The link to standard pseudospectral
methods is that some Galerkin methods are pseudospectral.
\begin{proposition}
\label{prop:pseudo}
If ${\operator D}({\functionspace F}_1)\subseteq{\functionspace F}_1$, then $\overline{{\operator D}}v={\operator D} v$,
i.e., the Galerkin approximation of the derivative is exact.
If, further, $\{f_j\}$ is a cardinal basis,
then $D$ is the standard pseudospectral differentiation matrix,
i.e. $D_{ij} = {\operator D} f_j(x_i)$.
\end{proposition}
We want to emphasize that although $A$, $S$, and $D$ depend on the basis,
$\overline{\operator D}$ depends only on ${\functionspace F}_0$ and ${\functionspace F}_1$, i.e., it is
basis and grid independent.
In the factorization $D=S^{-\mathrm T} A$, the (anti)symmetry of $A$ and $S$ is basis
independent, unlike that of $D$. These points are well known in
finite elements, less so in pseudospectral methods.
\begin{example}[\bf Fourier differentiation\rm]\rm
Let ${\functionspace F}_1$ be the trigonometric polynomials of degree $n$, which is
closed under differentiation (so that Prop. \ref{prop:pseudo} applies),
and is a domain of skewness of ${\operator D}=\partial_x$. In any basis, $A$ is
antisymmetric. Furthermore, the two popular bases, $\{\sin(j x)\}_{j=1}^n\cup
\{\cos(j x)\}_{j=0}^n$, and the cardinal basis on equally-spaced grid
points, are both orthogonal, so that $S=\alpha I$ and $D=S^{-1}A$ is
antisymmetric in both cases.
\end{example}
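This is easily confirmed numerically; a minimal check (illustrative only),
building $D$ column by column with the FFT on the equally spaced grid
$x_m=2\pi m/N$ with an odd number of points, is:
\begin{verbatim}
import numpy as np

N = 9                                    # odd: no Nyquist mode
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers
E = np.eye(N)                            # cardinal basis vectors as columns
D = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(E, axis=0), axis=0))
print(np.allclose(D, -D.T))              # True; hence A = alpha D is antisymmetric
\end{verbatim}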
\begin{example}[\bf Polynomial differentiation\rm]\rm
\label{sec:cheb}
${\functionspace F}_1={\functionspace P}_n([-1,1])$ is a domain of interior skewness which is
closed under ${\operator D}=\partial_x$, so pseudospectral differentiation
factors as $D=S^{-1}A$ in any basis. For a cardinal
basis which includes $x_0=-1$, $x_n=1$, we have $(A+A^{\mathrm T})_{ij}=-1$
for $i=j=0$, $1$ for $i=j=n$, and 0 otherwise, making obvious
the influence of the boundary.
For the Chebyshev points $x_i = -\cos(i
\pi/n)$, $i=0,\dots,n$, $A$ can be evaluated first in a basis
$\left\{ T_i \right\}$ of Chebyshev polynomials:
one finds
$A_{ij}^{\rm cheb} = 2 j^2/(j^2-i^2)$ for $i-j$ odd, and
$S_{ij}^{\rm cheb} = -2(i^2+j^2-1)/
[((i+j)^2-1)((i-j)^2-1)]$ for $i-j$ even, with other entries 0.
Changing to a cardinal basis by
$F_{ij} = T_j(x_i) = \cos(i j \pi/n)$, a
discrete cosine transform, gives $A=F^{-1} A^{\rm cheb} F^{-\mathrm T}$.
For example, with $n=3$
(so that $(x_0,x_1,x_2,x_3)=(-1,-\frac{1}{2}, \frac{1}{2},1)$), we have
$$ D =
{\scriptstyle \frac{1}{6}}
\left(
\begin{smallmatrix}
-19 & 24 & -8 & 3 \\
-6 & 2 & 6 & -2 \\
2 & -6 & -2 & 6 \\
-3 & 8 & -24 & 19 \\
\end{smallmatrix}
\right)
= S^{-\mathrm T} A =
{\scriptstyle \frac{1}{512}}
\left(
\begin{smallmatrix}
4096 & -304 & 496 & -1024\\
-304 & 811 & -259 & 496\\
496 & -259 & 811 & -304\\
-1024 & 496 & -304 & 4096\\
\end{smallmatrix}
\right)
{\scriptstyle \frac{1}{270}}
\left(
\begin{smallmatrix}
-135 & 184 & -72 & 23 \\
-184 & 0 & 256 & -72\\
72 & -256 & 0 & 184 \\
-23 & 72 & -184 & 135
\end{smallmatrix}
\right).
$$
$S$ and $A$ may be more amenable to study than $D$ itself.
All their eigenvalues are very well-behaved; none are spurious. The
eigenvalues of $A$ are all imaginary and, as $n\to\infty$, uniformly
fill $[-i\pi,i\pi]$ (with a single zero eigenvalue corresponding
to the Casimir of $\partial_x$).
The eigenvalues of $S$ closely approximate the
quadrature weights of the Chebyshev grid.
\end{example}
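The factorization $D=S^{-\mathrm T}A$ above is easy to verify directly;
a minimal numerical sketch (computing $S$ and $A$ in the cardinal basis by
Gauss--Legendre quadrature, which is exact for these integrands; $S$ is
symmetric, so $S^{-\mathrm T}=S^{-1}$) is:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 3
x = -np.cos(np.arange(n + 1) * np.pi / n)        # (-1, -1/2, 1/2, 1)
basis = [np.polyfit(x, np.eye(n + 1)[j], n) for j in range(n + 1)]
f  = lambda j, t: np.polyval(basis[j], t)        # cardinal function f_j
df = lambda j, t: np.polyval(np.polyder(basis[j]), t)

D = np.array([[df(j, xi) for j in range(n + 1)] for xi in x])  # D_ij = f_j'(x_i)

q, w = leggauss(2 * n + 2)
S = np.array([[np.sum(w * f(i, q) * f(j, q)) for j in range(n + 1)]
              for i in range(n + 1)])            # S_ij = int f_i f_j dx
A = np.array([[np.sum(w * f(i, q) * df(j, q)) for j in range(n + 1)]
              for i in range(n + 1)])            # A_ij = int f_i f_j' dx

print(np.round(A + A.T, 12))                     # only corner entries -1 and +1
print(np.allclose(np.linalg.solve(S, A), D))     # D = S^{-1} A
print(np.round(6 * D).astype(int))               # the 4x4 matrix displayed above
\end{verbatim}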
For ${\operator D}\ne\partial_x$, $\overline{{\operator D}}$ may be quite expensive
and no longer pseudospectral. (There is in general no
$S$ with respect to which the pseudospectral approximation of
${\operator D} v$ is skew-adjoint.) However, $\overline{{\operator D}}v$ can
be computed quickly if fast transforms between cardinal and
orthonormal bases exist. We evaluate ${\operator D} v$ exactly for
$v\in{\functionspace F}_1$ and then project $S$-orthogonally to ${\functionspace F}_1$.
\begin{example}[\bf Fast Fourier Galerkin method\rm]\rm
\label{fastfourier}
Let ${\operator D}(u)$ be linear in $u$, for example, ${\operator D}(u) = u\partial_x
+ \partial_x u$. Let $u,\ v\in{\functionspace F}_1$, the trigonometric
polynomials of degree $n$. Then ${\operator D}(u)v$ is
a trigonometric polynomial of degree $2n$, the first $n$ modes of
which can be evaluated exactly using antialiasing and Fourier
pseudospectral differentiation. The approximation whose error
is orthogonal to ${\functionspace F}_1$ is just these first $n$ modes, because $S=I$
in the spectral basis. That is, the antialiased
pseudospectral method is here identical to the Galerkin method, and hence
skew-adjoint. Antialiasing makes pseudospectral methods conservative.
This is the case of the linear ${\operator D}$'s of the Euler fluid equations.
\end{example}
\begin{example}[\bf Fast Chebyshev Galerkin method\rm]\rm
Let ${\operator D}(u)$ be linear in $u$ and let $u,\ v\in{\functionspace F}_1={\functionspace P}_n$.
With respect to the cardinal basis on the Chebyshev grid with $n+1$
points, $\overline{{\operator D}}(u)v$ can be computed in time ${\mathcal O}(n \log n)$ as follows:
(i)
Using an FFT, express $u$ and $v$ as Chebyshev polynomial
series of degree $n$;
(ii) Pad with zeros to get Chebyshev polynomial series of formal
degree $2n$;
(iii) Transform back to a Chebyshev grid with $2n+1$ points;
(iv) Compute the pseudospectral approximation of ${\operator D}(u)v$ on the
denser grid. Being a polynomial of degree $\le 2n$, the
corresponding Chebyshev polynomial series is exact;
(v) Convert ${\operator D}(u)v$ to a Legendre polynomial series using a fast
transform \cite{al-ro};
(vi) Take the first $n+1$ terms. This produces
$\overline{{\operator D}}(u)v$, because the Legendre
polynomials are orthogonal.
(vii) Convert to a Chebyshev polynomial series with $n+1$ terms
using a fast transform;
(viii) Evaluate at the points of the original Chebyshev grid using an FFT.
\end{example}
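A direct transcription of these steps, purely for illustration (it uses
NumPy's Chebyshev--Legendre conversions through the power basis in place of
the fast transform of \cite{al-ro}, and is therefore adequate only for
modest $n$), is:
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as Cheb, legendre as Leg

n = 8
xs = -np.cos(np.arange(n + 1) * np.pi / n)            # original Chebyshev grid
xd = -np.cos(np.arange(2 * n + 1) * np.pi / (2 * n))  # denser grid, 2n+1 points
u = np.exp(-xs ** 2)                  # sample nodal data (purely illustrative)
v = np.sin(np.pi * xs) + xs

# (i)-(iii): Chebyshev series through the data, evaluated on the denser grid.
cu, cv = Cheb.chebfit(xs, u, n), Cheb.chebfit(xs, v, n)
ud, vd = Cheb.chebval(xd, cu), Cheb.chebval(xd, cv)

# (iv): pseudospectral evaluation of D(u)v = u v_x + (uv)_x on the denser grid;
# the result is a polynomial of degree <= 2n, so its interpolant there is exact.
Duv = ud * Cheb.chebval(xd, Cheb.chebder(cv)) \
      + Cheb.chebval(xd, Cheb.chebder(Cheb.chebfit(xd, ud * vd, 2 * n)))
cw = Cheb.chebfit(xd, Duv, 2 * n)

# (v)-(vi): convert to a Legendre series and truncate: the L2-orthogonal
# projection onto P_n, i.e. the Galerkin approximation of D(u)v.
lw = Leg.poly2leg(Cheb.cheb2poly(cw))[:n + 1]

# (vii)-(viii): back to a Chebyshev series and to the original grid.
Dbar_uv = Cheb.chebval(xs, Cheb.poly2cheb(Leg.leg2poly(lw)))

# Cross-check the truncated coefficients against a direct L2 projection
# computed with Gauss-Legendre quadrature.
q, wq = Leg.leggauss(2 * n + 2)
fq = Cheb.chebval(q, cw)
direct = [(2 * j + 1) / 2 * np.sum(wq * fq * Leg.legval(q, [0] * j + [1]))
          for j in range(n + 1)]
print(np.allclose(lw, direct))   # truncation = orthogonal projection
print(Dbar_uv)                   # nodal values of the Galerkin approximation
\end{verbatim}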
\subsection*{3. Examples of the dual composition method}
\begin{example}[\bf The KdV equation\rm]\rm
$ \dot u + 6 u u_x + u_{xxx}=0$ with
periodic boundary conditions has features which can be used to illustrate
various properties of the dual composition method. Consider two of its
Hamiltonian forms,
$$
\dot u = {\operator D}_1\frac{\delta\H_1}{\delta u}\mbox{, } {\operator D}_1 =
\partial_x\mbox{, } {\operator H}_1 = \int\big( -u^3+\frac{1}{2} u_x^2\big)\,dx\mbox{,}$$
and
$$
\dot u = {\operator D}_2\frac{\delta {\operator H}_2}{\delta u}\mbox{, } {\operator D}_2 =
-(2u\partial_x + 2\partial_x u + \partial_{xxx})\mbox{, } {\operator H}_2 =
\frac{1}{2}\int u^2\,dx\mbox{.}$$
In the case ${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm trig}$, $v:=\overline{\frac{\delta\H_1}{\delta u}}$
is the orthogonal projection to ${\functionspace F}_0$ of $\frac{\delta\H_1}{\delta u}=-3u^2-u_{xx}$; this can be
computed by multiplying out the Fourier series and dropping all but
the first $n$ modes, or by antialiasing.
Then $\overline{{\operator D}}_1 v = v_x$, since
differentiation is exact in ${\functionspace F}^{\rm trig}$. Since ${\operator D}_1$
is constant, the discretization is a Hamiltonian system, and since
$\overline{{\operator D}}_1$ is exact on constants, it also
preserves the Casimir ${\mathcal C}=\int u\,dx$.
In this formulation, Prop. \ref{prop:galerkin} (ii) shows that
the dual composition and Galerkin approximations of ${\operator D}_1\frac{\delta\H_1}{\delta u}$ coincide,
for differentiation does not map high modes to lower modes, i.e.,
${\operator D}_1({\functionspace F}^{{\rm trig}\perp})\perp{\functionspace F}^{\rm trig}$.
In the second Hamiltonian form, $H_2 = \frac{1}{2}{\mathbf u}^{\mathrm T} S {\mathbf u}$, $\frac{\delta\H_2}{\delta u} =
S^{-1}\nabla H_2 = {\mathbf u},$ and the Galerkin approximation of $\frac{\delta\H_2}{\delta u}$ is exact,
so that Prop. \ref{prop:galerkin} (iii) implies that the composition
$\overline{\operator D}_2\overline{\frac{\delta\H_2}{\delta u}}$ {\em also} coincides with the Galerkin
approximation. $\overline{{\operator D}}_2v$ can be evaluated using antialiasing
as in Example \ref{fastfourier}. $\overline{{\operator D}}_2$ is
not a Hamiltonian operator, but still generates a skew-gradient
system with integral $H_2$. Thus in this (unusual) case,
the Galerkin and antialiased pseudospectral methods coincide and have
three conserved quantities,
$H_1$, $H_2$, and ${\mathcal C}|_{{\functionspace F}^{\rm trig}}$.
The situation for finite element methods with
${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm pp}(n,r)$ is different.
In the first form, we need $r\ge 1$ to ensure
that ${\functionspace F}_0$ is natural for ${\operator H}_1$; in the second form, naturality is
no restriction, but we need $r\ge2$ to ensure that ${\functionspace F}_1$ is a domain
of interior skewness. The first dual composition method
is still Hamiltonian with
integral $H_1$ and Casimir $C=u_i\int f_i\, dx$, but because
$\overline{\operator D}_1$ does not commute with projection to ${\functionspace F}_1$, it is {\em
not} a standard Galerkin method.
In the second form, $\frac{\delta\H_2}{\delta u}=u$ is still exact, so the
dual composition and Galerkin methods still coincide.
However, they are not Hamiltonian.
\end{example}
\begin{example}[\bf An inhomogeneous wave equation\rm]\rm
When natural and skew boundary conditions conflict, it is necessary
to take ${\functionspace F}_0\ne{\functionspace F}_1$. Consider
$ \dot q = a(x)p$, $\dot p = q_{xx}$, $q_x(\pm1,t)=0$.
This is a canonical Hamiltonian system with
$$ {\operator D} = \left(\begin{matrix}0 & 1 \\ -1 & 0 \\\end{matrix}\right),\
{\operator H} = \frac{1}{2}\int_{-1}^1 \big(a(x)p^2 + q_x^2\big)\, dx,\
\frac{\delta {\operator H}}{\delta q} = -q_{xx},\
\frac{\delta {\operator H}}{\delta p} = a(x)p.$$
Note that (i) the boundary condition is
natural for ${\operator H}$, and (ii)
no boundary conditions are required for ${\operator D}$ to be skew-adjoint in $L^2$.
Since $\overline{\frac{\delta\H}{\delta u}}$ is computed with trial functions in ${\functionspace F}_1$, we
should not include $q_x(\pm1)=0$ in ${\functionspace F}_1$, for this would be to
enforce $(-q_{xx})_x=0$.
In \cite{mc-ro} we show that a spectrally accurate dual composition method is
obtained with
$ {\functionspace F}_0 = \{ q\in {\functionspace P}_{n+2}: q_x(\pm 1)=0 \} \times {\functionspace P}_n$ and
$ {\functionspace F}_1 = {\functionspace P}_n\times {\functionspace P}_n$.
\end{example}
\subsection*{4. Quadrature of Hamiltonians}
\label{sec:quadrature}
Computing $\nabla H =\nabla{\operator H}(u_j f_j)$ is not always possible in closed form.
We would like to approximate ${\operator H}$ itself by quadratures in real space.
However, even if the discrete $H$ and its gradient are spectrally accurate
approximations, they cannot always be used to construct spectrally
accurate Hamiltonian discretizations.
In a cardinal basis,
let ${\operator H}=\int h(u)dx$ and define the
quadrature Hamiltonian $H_q:= h( u_j) w_j = {\mathbf w}^{\mathrm T} h({\mathbf u})$
where $w_j = \int f_j dx$ are the quadrature weights.
Since $\nabla H_q = W h'({\mathbf u})$, where $W := {\rm diag}(w_j)$, we have
$\frac{\delta\H}{\delta u}\approx W^{-1}\nabla H_q$. Unfortunately,
$DW^{-1}\nabla H_q$ is not a skew-gradient system, while
$D S^{-1} \nabla H_q$ is skew-gradient, but is not an accurate approximation.
$D W^{-1} \nabla H_q$ can only be a skew-gradient
system if $DW^{-1}$ is antisymmetric, which occurs in three general cases.
(i) On a constant grid, $W$ is a multiple of the identity, so
if $D$ is antisymmetric, $D W^{-1}$ is too.
(ii) On an arbitrary grid with $D=\left(
\begin{smallmatrix}
0 & I \\
-I & 0\\
\end{smallmatrix}\right)$,
$DW^{-1}$ is antisymmetric.
(iii) On a Legendre grid with ${\functionspace F}_0={\functionspace F}_1$,
$S=W$, and $D W^{-1} = W^{-1} A W^{-1}$ is antisymmetric.
The required compatibility between
$D$ and $W$ remains an intriguing and frustrating obstacle to the
systematic construction of conservative discretizations of strongly
nonlinear PDEs.
\section{Introduction}
Researchers studying
the amenability of Thompson's group $F$ will be familiar with a distrust of experimental methods applied to this problem.
Part of this scepticism stems from the fact that (if it is amenable) $F$ is known to have a very quickly growing \emph{F\o lner function} \cite{Moore-Folner}.
However, experimental algorithms investigating amenability are rarely based on F\o lner's criterion directly, and
to date
the literature identifies no mechanism by which a quickly growing F\o lner function could interfere with a given experimental method.
In this paper we identify such a mechanism for a recent algorithm proposed by the first author, A. Rechnitzer, and E.
J. Janse van Rensburg \cite{ERR}, which was designed to experimentally detect amenability via the Grigorchuk-Cohen
characterisation in terms
of the cogrowth function. We will refer to this as the ERR algorithm in the sequel.
We show that, in the ERR algorithm, estimates of the asymptotic cogrowth rate
are compromised by sub-dominant behaviour in the reduced-cogrowth function.
However,
even though sub-dominant behaviour in the cogrowth function may interfere with estimates of the asymptotic growth rate, the ERR algorithm can still be used to estimate other properties of the cogrowth function to high levels of accuracy.
In particular we are able to re-purpose the algorithm to quickly estimate initial values of the cogrowth function even for groups for which the determination of the asymptotic growth rate is not
possible (for example groups with unsolvable word problem).
The present work started out as an independent verification by the second author
of the experimental results in \cite{ERR}, as part of his PhD research.
More details can be found in \cite{CamPhD}.
The article is organised as follows.
In Section~\ref{sec:prelim} we give the necessary background on amenability, random walks and cogrowth, followed by a summary of previous experimental work on the amenability of $F$. In Section~\ref{sec:R} a function quantifying
the sub-dominant properties of the reduced-cogrowth function is defined.
In Section~\ref{sec:ERRsection} the ERR algorithm is summarised, followed by an analysis of two types of pathological behaviour in Section~\ref{sec:pathological_behaviour}. The first of these is easily handled, while the second is shown to depend on sub-dominant terms in the reduced-cogrowth function.
In Section~\ref{sec:appropriation} the ERR method is modified to provide estimates of initial cogrowth values. Using this the first 2000 terms for the cogrowth function of Thompson's group $F$ are estimated.
\section{Preliminaries}\label{sec:prelim}
We begin with a definition of terms and a quick survey of experimental work done on estimating amenability.
\subsection{Characterisations of amenability}
The following characterisation of amenability is due to Grigorchuk \cite{grigorchuk1980Cogrowth} and Cohen \cite{cohen1982cogrowth}. A shorter proof of the equivalence of this criterion with amenability was provided by Szwarc \cite{Szwarc_on_grig_cohen}.
\begin{defn}
\label{defn:amenabilityCogrowth}
Let $G$ be a finitely generated non-free group with symmetric generating set $S$.
Let $c_n$ denote the number of freely reduced words of length $n$
over $S$
which are equal to the identity in $G$.
Then $G$ is amenable if and only if
$$\limsup_{n\rightarrow\infty} c_n^{1/n}=|S|-1.$$
Equivalently, let $d_n$ denote the number of words (reduced and unreduced) of length $n$ over $S$
which are equal to the identity. Then $G$ is amenable if and only if
$$\limsup_{n\rightarrow\infty} d_n^{1/n}=|S|.$$
The function $n\mapsto c_n$ is called the {\em reduced-cogrowth function} for $G$ with respect to $S$, and $n\mapsto d_n$ the {\em cogrowth function}.
\end{defn}
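For a concrete feel for these quantities, the following illustrative script
counts $c_n$ and $d_n$ by brute force for the amenable group $\Z^2$ with its
standard symmetric generating set ($|S|=4$); even at length $8$ the root
$d_n^{1/n}$ is still well below its limiting value $4$, a first sign of the
sub-dominant behaviour studied in Section~\ref{sec:R}.
\begin{verbatim}
from itertools import product

# Z^2 = <a,b | [a,b]>, symmetric generating set S = {a, a^-1, b, b^-1}.
step = {'a': (1, 0), 'A': (-1, 0), 'b': (0, 1), 'B': (0, -1)}
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def cogrowth(n):
    """Return (c_n, d_n) for Z^2 by enumerating all words of length n."""
    c = d = 0
    for w in product('aAbB', repeat=n):
        if sum(step[s][0] for s in w) == 0 and sum(step[s][1] for s in w) == 0:
            d += 1                                          # trivial in Z^2
            if all(w[i + 1] != inverse[w[i]] for i in range(n - 1)):
                c += 1                                      # freely reduced
    return c, d

for n in (2, 4, 6, 8):
    c, d = cogrowth(n)
    print(n, c, d, round(d ** (1 / n), 3))
\end{verbatim}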
Kesten's criterion for amenability is given in terms of the probability of a random walk on the group returning to
its starting point.
\begin{defn}\label{defn:amenabilityKesten}
Let $G$ be a finitely generated group, and let $\mu$ be a symmetric
measure on $G$. The random walk motivated by $\mu$
is a Markov chain on the group starting at the identity where the probability of moving from $x$ to $y$ is $\mu(x^{-1}y)$.
Note the distribution after $n$ steps is given by the $n$-fold
convolution power of $\mu$, which we denote as $\mu_n$. That is, $\mu_n(g)$ is the probability that an $n$-step walk starting at $e$ ends at $g$.
By Kesten's criterion \cite{Kesten} a group is amenable
if and only if $$\limsup_{n\rightarrow\infty} (\mu_n(e))^{1/n}=1.$$
\end{defn}
Pittet and Saloff-Coste proved that the asymptotic
decay rate of the probability of return function is independent
of the measure chosen, up to the usual equivalence \cite{stabilityRandomWalk}.
For finitely generated groups we can choose the
random walk motivated by the uniform probability measure on a finite generating set. This random walk is called a \emph{simple
random walk} and corresponds exactly with a random walk on the Cayley graph.
For this measure the probability of return is given by
\begin{equation}\label{eqn:mu-d}
\mu_n(e) =\frac{d_n}{|S|^n},\end{equation}
where the (reduced and non-reduced) cogrowth terms $d_n$ are calculated with
respect to the support of the measure.
Thus the cogrowth
function arises from a special case of return probabilities.
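Equation~\ref{eqn:mu-d} can be checked empirically; a minimal sketch for the
simple random walk on $\Z^2$, using the standard count
$d_8={8 \choose 4}^2=4900$ of closed walks, is:
\begin{verbatim}
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, trials = 8, 200_000
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
ends = steps[rng.integers(0, 4, size=(trials, n))].sum(axis=1)
mu_hat = np.mean(np.all(ends == 0, axis=1))    # empirical return probability
d_n = comb(n, n // 2) ** 2                     # closed walks of length 8 on Z^2
print(mu_hat, d_n / 4 ** n)                    # both approximately 0.075
\end{verbatim}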
F\o lner's characterisation of amenability \cite{Folner} can be phrased in several
ways. Here we give the definition for finitely generated
groups.
\begin{defn}
\label{defn:amenabilityFolner}
Let $G$ be a group with finite generating set $S$. For each
finite subset $F\subseteq G$, we denote by $|F|$ the number of
elements in $F$. The {\em boundary} of a finite set $F$ is defined to be
$$\partial F=\lbrace
g\in G\;:\;g\notin F, gs\in F \text{ for some }s\in S
\rbrace.$$
A finitely generated group $G$ is amenable if and only if there exists
a sequence of finite subsets $F_n$ such that
$$
\lim_{n\rightarrow\infty} \frac{\vert \partial F_n \vert}{\vert F_n\vert}
=0.$$
\end{defn}
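For instance, in $\Z^2$ with the standard generators the $n\times n$ boxes
form such a sequence; a tiny illustrative check of the boundary ratio is:
\begin{verbatim}
# |boundary F| / |F| for the n x n box in Z^2; the ratio 4/n tends to 0,
# so the boxes form a Folner sequence.
def ratio(n):
    F = {(x, y) for x in range(n) for y in range(n)}
    S = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    boundary = {(x + s, y + t) for (x, y) in F for (s, t) in S} - F
    return len(boundary) / len(F)

print([round(ratio(n), 3) for n in (5, 10, 20, 40)])   # 0.8, 0.4, 0.2, 0.1
\end{verbatim}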
Vershik \cite{Vershik-folner-function} defined the following function as a way to quantify how much of the Cayley graph must be considered before sets with a given isoperimetric profile can be found.
\begin{defn}
The F\o lner function of a group is
$$f(n)=\min\left\lbrace |F|\;:\;\frac{|\partial F|}{|F|}<\frac{1}{n} \right\rbrace.$$
\end{defn}
Significant literature exists on F\o lner functions. It is known that
there exist finitely
presented amenable groups with F\o lner functions
growing faster than $n^{n^n}$
(\cite{KrophollerMartino} Corollary~6.3)
and finitely generated groups (iterated wreath product of $k$ copies of $\Z$) with F\o lner functions
growing faster than $\displaystyle n^{n^{\iddots}}$ of height $k$ for arbitrary $k$ \cite{ershler2003isoperimetric}.
\subsection{Experimental work on the amenability of $F$}
Richard Thompson's group $F$ is the group with
presentation
\begin{equation}
\langle a,b \mid [ab^{-1},a^{-1}ba],[ab^{-1},a^{-2}ba^2] \rangle
\label{eqn:Fpresentation}
\end{equation}
where $[x,y]=xyx^{-1}y^{-1}$ denotes the commutator of two elements. See for example \cite{CFP} for a more detailed introduction to this group.
Whether or not $F$ is amenable has attracted a large amount of interest, and has so far evaded many different attempts at a proof of both positive and negative answers.
The following is a short summary of experimental work previously done on Thompson's group $F$.
\begin{itemize}
\item[\cite{ComputationalExplorationsF}]
Burillo, Cleary and Wiest 2007.
The authors randomly choose words and reduce them to a normal form to test if they represent the identity element. From this they estimate the proportion of words of length $n$ equal to the identity, as a way to compute the asymptotic growth rate of the cogrowth function.
\item[\cite{Arzhantseva}]
Arzhantseva, Guba, Lustig, and Pr{\'e}aux 2008.
The authors study the {\em density} or least upper bound for the average vertex degree of any finite subgraph of the Cayley graph; an $m$-generated group is amenable if and only if the density of the corresponding Cayley graph is $2m$ (considering inverse edges as distinct). A computer program is run and data is collected on a range of amenable and non-amenable groups. They find a finite subset
in $F$ with density $2.89577$ with respect to the $2$ generator presentation above. (To be amenable one would need to find sets whose density approaches $4$).
Subsequent theoretical work of Belk and Brown gives sets with density approaching $3.5$ \cite{BelkBrown}.
\item[\cite{ElderCogrowthofThompsons}]
Elder, Rechnitzer and Wong 2012.
Lower bounds on the cogrowth rates of various groups are obtained by computing the dominant eigenvalue of the adjacency matrix of truncated Cayley graphs. These bounds are extrapolated to estimate the cogrowth rate.
As a byproduct the first 22 coefficients of the cogrowth series are computed exactly.
\item[\cite{Haagerup}]
Haagerup, Haagerup, and Ramirez-Solano 2015.
Precise lower bounds of certain norms of elements in the group ring of $F$ are computed, and
coefficients of the first 48 terms of the cogrowth series are computed exactly.
\item[\cite{ERR}]
Elder, Rechnitzer and van Rensburg 2015.
The {\em Metropolis Monte Carlo} method from statistical mechanics is adapted to estimate the asymptotic growth rate of the cogrowth function by running random walks on the set of all trivial words in a group. The results obtained for Thompson's group $F$ suggest it to be non-amenable.
We describe their method in more detail in Section~\ref{sec:ERRsection} below.
\end{itemize}
Justin Moore \cite{Moore-Folner} (2013) has shown that if $F$ were amenable then its F\o lner function would increase
faster than a tower of $n-1$ twos,
$$2^{2^{2^{\iddots}}}.$$
This result has been proposed as an obstruction to all computational methods for approximating amenability; a computationally infeasibly large portion of the Cayley graph must be considered before sets with small boundaries can be found.
However, in all but one of the experimental algorithms listed above computing F\o lner sets was not the principal aim.
In order to understand how a bad F\o lner function affects the performance of these methods, we need to understand the connection between convergence properties of the respective limits in the various characterisations of amenability.
\section{Quantifying sub-dominant cogrowth behaviour}
\label{sec:R}
The F\o lner function
quantifies the rate of convergence of the limit in Definition~\ref{defn:amenabilityFolner}. We consider the following definitions as an attempt to quantify the rate of convergence of the limits in Definition~\ref{defn:amenabilityCogrowth}.
\begin{defn}\label{defn:R}
Let $G$ be a finitely generated group with symmetric generating set
$S$.
Let
$c_n$ be the number of all reduced
trivial words of length $n$ and let $C=\limsup c_n^{1/n}.$
Define
$$\RR(n)=\min
\left\lbrace k \;:\;\frac{c_{2k+2}}{c_{2k}}>C^2-\frac{1}{n}
\right\rbrace$$
\end{defn}
Definition \ref{defn:R} uses only even word lengths (and hence $C^2$ instead of $C$). This is necessary because group presentations with only even length relators have no odd length trivial words.
For this paper we will only consider the function $\RR$ for amenable groups, in which case $C=|S|-1$ except when the group is free (infinite cyclic).
A similar definition may be made for the cogrowth function.
\begin{defn}\label{defn:Rprime}
For $G$ a finitely generated group with symmetric generating set
$S$ we may define
$$\RR'(n)=\min
\left\lbrace k \;:\;\frac{d_{2k+2}}{d_{2k}}>D^2-\frac{1}{n}
\right\rbrace$$
where
$d_n$ is the number of all (reduced and non-reduced)
trivial words of length $n$ and $D=\limsup d_n^{1/n}.$
\end{defn}
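For example, using the standard count $d_{2k}={2k \choose k}^2$ of closed
walks on $\Z^2$ and $D=|S|=4$, $\RR'(n)$ can be computed directly (an
illustrative sketch); it grows linearly in $n$, consistent with the $\Z^k$
row of Table~\ref{tab:differentRn} below.
\begin{verbatim}
from math import comb

def d(length):                       # closed walks of (even) length on Z^2
    k = length // 2
    return comb(2 * k, k) ** 2

def R_prime(n, S=4):                 # least k with d_{2k+2}/d_{2k} > |S|^2 - 1/n
    k = 0
    while d(2 * k + 2) / d(2 * k) <= S ** 2 - 1 / n:
        k += 1
    return k

print([R_prime(n) for n in (1, 2, 4, 8, 16, 32)])   # 15, 31, 63, ...: roughly 16n
\end{verbatim}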
Literature already exists studying the convergence properties of return probabilities, and we suspect that
the function $\RR'$ is a reformulation of the {\em $L^2$-isoperimetric
function} \cite{BendikovPittetSauer}.
\begin{example}
For the trivial group with some finite symmetric generating set $S$ we have $c_0=1, c_k=|S|(|S|-1)^{k-1}$ for $k\geq 1$ so
$\frac{c_{2k+2}}{c_{2k}}\geq (|S|-1)^2$ and $\RR(n)=0$.
Similarly since $d_k=|S|^k$ we have
$\RR(n)=\RR'(n)=0$.
\end{example}
Aside from the trivial group, it is usually easier to compute $\RR'$ (or its asymptotics) than it is to obtain $\RR$.
For this reason we first consider $\RR'$ functions for various groups, and then prove that for infinite, amenable, non-free groups $\RR'$ and $\RR$ have the same asymptotic behaviour.
\begin{example}
For any finite group
the rate of growth of $d_n$ is the dominant eigenvalue of the adjacency matrix of the
Cayley graph, and some simple analysis shows that $\RR'(n)$ is at most logarithmic in $n$.
\end{example}
Define $f\precsim g$ if there exist constants $a, b > 0$, such that for $x$ large enough, $f(x) \leq ag(bx)$. Then $f\sim g$ ($f$ and $g$ are asymptotic) if $f\precsim g$ and $g\precsim f$.
Table \ref{tab:differentRn} provides a sample of amenable groups for which the asymptotics of $\RR'(n)$, the F\o lner function and probabilities of return are known \cite{ershler2003isoperimetric,randomWalkWreathProducts,PittetSCsolvable2003}.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{2.5}
\begin{tabular}
{
|>{\centering\arraybackslash}p{2.7cm}
|>{\centering\arraybackslash}p{2.7cm}
|>{\centering\arraybackslash}p{3.3cm}
|>{\centering\arraybackslash}p{2.7cm}
|}
\hline
Example & $\mathcal{F}(n)$ & $\mu_n (e)$ & $\mathcal{R}'(n)$ \\
\hline \hline
trivial & $\sim$ constant & $\sim$ constant & $\sim$ constant \\
\hline
$\Z^k$ & $\sim n^{k}$ & $\sim n^{-k/2}$ & $\sim n$ \\
\hline
$BS(1,N)$ & $\sim e^n$ & $\sim e^{-n^{1/3}}$
& $\sim n^{3/2}$ \\
\hline
$\Z\wr\Z$ & $n^n$ & $\sim e^{-n^{1/3}(\ln n)^{2/3}}$ & $\sim\ln(n) n^{3/2}$\\
\hline
$\Z\wr\Z\wr\dots \wr\Z$ $(d-1)$-fold wreath product & $n^{n^{n^{\iddots^n}}}$ (tower of $d-1$ $n$'s) & $\sim e^{-n^{\frac{d}{d+2}}(\ln n)^{\frac{2}{d+2}}}$ & $\sim\ln(n) n^{(d+2)/2}$\\
\hline
\end{tabular}
\end{center}
\caption{Comparing asymptotics of the probabilities of return, the F\o lner function $\mathcal{F}$, and $\RR'$ for various
groups.
\label{tab:differentRn}}
\end{table}
The results for the asymptotics of $\RR'(n)$ were derived directly from the known asymptotics for $\mu_n$. A discussion of these methods will appear in \cite{CamPhD}. In practice however it proved quicker to guess the asymptotics and then refine using the following method.
\begin{prop}\label{prop:ProvingRn}
The asymptotic results for $\RR'(n)$ in Table \ref{tab:differentRn} are correct.
\end{prop}
\begin{proof}
For a given group suppose
$\mu_n(e)\sim g(n)$ where $g$ is a continuous real valued function, as in Table \ref{tab:differentRn}.
Then $d_n\sim |S|^n g(n)$.
Finding $\RR'(n)$ requires solving the equation
\begin{equation}\frac{d_{2k+2}}{d_{2k}}=|S|^2-\frac{1}{n}
\label{eqn:Rnkn}
\end{equation} for $k=k(n)$.
This is equivalent to solving
$$1=n\left(
|S|^2-\frac{d_{2k+2}}{d_{2k}}
\right)$$
for $k$.
Suppose $f(n)$ is a function where
\begin{equation}\label{eqn:methodRn}
L =\lim_{n\rightarrow\infty}
n
\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right)
\end{equation}
exists and is non-zero.
If $L=1$ then
$$\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right) \sim \frac{1}{n}$$
and so
$$\frac{d_{2f(n)+2}}{d_{2f(n)}} \sim |S|^2-\frac{1}{n}.$$
Then
$k(n)\sim f(n)$ satisfies Equation \ref{eqn:Rnkn}. Therefore $\RR'(n)$ is asymptotic to $f(n).$
If $L$ exists and is non-zero then
$$\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right) \sim \frac{L}{n}.$$
Then $$\left(
|S|^2-\frac{d_{2f(Ln)+2}}{d_{2f(Ln)} }
\right) \sim \frac{L}{Ln}=\frac{1}{n}$$
and so
$\RR'(n)\sim f(L n)$.
The derivations of candidates for $f(n)$ in each case in Table \ref{tab:differentRn} are performed in \cite{CamPhD}. The results in the table do not include the constant $L$ since the probabilities of return used as input are only correct up to scaling.
We leave the calculation of Equation \ref{eqn:methodRn} for the results from
Table \ref{tab:differentRn} as an exercise.
\end{proof}
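For one row of Table~\ref{tab:differentRn} the exercise runs as follows (a
back-of-envelope version, suppressing multiplicative constants). For
$BS(1,N)$ we have $\mu_n(e)\sim e^{-n^{1/3}}$, so
$d_{2k}\sim|S|^{2k}e^{-(2k)^{1/3}}$ and
$$
|S|^2-\frac{d_{2k+2}}{d_{2k}}
= |S|^2\left(1-e^{-\left((2k+2)^{1/3}-(2k)^{1/3}\right)}\right)
\sim \frac{2}{3}\,|S|^2\,(2k)^{-2/3}.
$$
The limit in Equation~\ref{eqn:methodRn} is therefore finite and non-zero
exactly when $f(n)\sim n^{3/2}$, which gives $\RR'(n)\sim n^{3/2}$ as
recorded in the table.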
\subsection{Converting from cogrowth to reduced-cogrowth}
We now prove an equivalence between the sub-dominant behaviour of
the cogrowth and reduced-cogrowth functions. This allows us to borrow the previously listed results for $\RR'$ when discussing $\RR$ and the ERR method.
The dominant and sub-dominant cogrowth behaviour can be
analysed from the generating functions for these sequences.
\begin{defn}
Let $d_n$ denote the number of trivial words of length $n$ in
a finitely generated group. The \emph{cogrowth series} is
defined to be $$D(z)=\sum_{n=0}^\infty d_n z^n.$$
Let $c_n$ denote the number of reduced trivial words. Then
$$C(z)=\sum_{n=0}^\infty c_n z^n$$ is said to be the
\emph{reduced-cogrowth series}.
\end{defn}
$D$ and $C$ are the generating functions for $d_n$ and $c_n$ respectively, and are
related in the following way.
Let $|S|=2p$ be the size of a symmetric generating set.
Then from \cite{KouksovRationalCogrowth,cogrowthConvertWoess}
\begin{equation}
C(z)=\frac{1-z^2}{1+(2p-1)z^2}D\left(
\frac{z}{1+(2p-1)z^2}
\right)
\label{eqn:cFromD}
\end{equation}
and
\begin{equation}
D(z)=\frac{1-p+p\sqrt{1-4(2p-1)z^2}}{1-4p^2z^2}
C\left(
\frac{1-\sqrt{1-4(2p-1)z^2}}{2(2p-1)z}
\right).
\label{eqn:dFromC}
\end{equation}
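As a quick sanity check on these formulas (ours, not part of the original derivation), consider the infinite cyclic group with $p=1$: the only reduced trivial word is the empty word, so $C(z)=1$, while $d_{2k}=\binom{2k}{k}$ gives $D(z)=(1-4z^2)^{-1/2}$. A few lines of sympy confirm that these two series satisfy Equation \ref{eqn:cFromD} to the order computed.
\begin{verbatim}
# Check Equation (cFromD) for the infinite cyclic group (p = 1):
# D(z) = (1 - 4z^2)^(-1/2) and C(z) = 1.
import sympy as sp

z = sp.symbols('z')
p = 1
D = 1/sp.sqrt(1 - 4*z**2)
rhs = (1 - z**2)/(1 + (2*p - 1)*z**2) * D.subs(z, z/(1 + (2*p - 1)*z**2))
print(sp.series(rhs, z, 0, 12).removeO())   # prints 1, i.e. C(z) = 1
\end{verbatim}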
The dominant and sub-dominant growth properties of the cogrowth functions may be analysed by considering the
singularities of these generating functions.
For a detailed study of the relationship between singularities
of generating functions
and sub-dominant behaviours of coefficients see \cite{flajolet2009analytic}.
We now outline an example of how the composition of functions (as in Equations~\ref{eqn:cFromD} and \ref{eqn:dFromC}) affects
the growth properties of the series coefficients.
\begin{example}\label{ex:CvsD}
Consider $$f(z)=\left(
1-\frac{z}{r}
\right)^{-p}.$$
Then (for positive $p$) $f(z)$ has a singularity at $z=r$, and this defines the
radius of convergence of $f(z)$ and the asymptotic
growth rate of the series coefficients of the
expansion of $f(z)$. It
also determines the principal sub-dominant term contributing
to the growth of the coefficients.
In this example, the coefficients will grow like $ n^{p-1}r^{-n}.$
We wish to investigate what happens to this growth behaviour
when we compose the function $f$ with a function $g$.
Consider $f(g(z))$ for some function $g$ for which $g(0)=0$.
The singularities of $g$ are inherited by $f(g(z))$; if $g$ is
analytic everywhere then the only singularities of $f(g(z))$
will occur when $g(z)=r$. In this case, the new radius of convergence
will be the minimum $|z|$ such that $g(z)=r$. Importantly, however, the principal sub-dominant growth term of the
series coefficients will remain polynomial of degree $p-1$.
A variation on this behaviour will occur if there is an
$r_0$ for which $g(z)$ is
analytic on the ball of radius $r_0$, and $g(z)=r$ for some
$z$ in this region. Again, when this occurs, the new radius of
convergence is obtained by solving $g(z)=r$ and the type
of the principal sub-dominant term in the growth of the
coefficients remains unchanged.
If there does not exist such an $r_0$, the principal singularity
of $g(z)$ will dominate the growth properties of the
coefficients.
\end{example}
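For a concrete (and purely illustrative) instance of this behaviour, take $p=2$ and $r=1$, so the coefficients of $f(z)=(1-z)^{-2}$ are $n+1$, and compose with $g(z)=z/(1-z)$. The new singularity sits at the solution of $g(z)=1$, namely $z=1/2$, and the coefficients of $f(g(z))$ still grow like a constant times $n\,2^n$: the polynomial correction keeps degree $p-1=1$. A short sympy check of the coefficient ratios:
\begin{verbatim}
# Coefficients of f(g(z)) with f(z) = (1-z)^(-2), g(z) = z/(1-z):
# they grow like (const) * n * 2^n, i.e. the same polynomial degree p-1 = 1.
import sympy as sp

z = sp.symbols('z')
h = (1 - z/(1 - z))**(-2)                    # f(g(z)) = (1-z)^2/(1-2z)^2
coeffs = sp.Poly(sp.series(h, z, 0, 31).removeO(), z).all_coeffs()[::-1]
for n in (5, 10, 20, 30):
    print(n, float(coeffs[n] / (n * 2**n)))  # ratios approach 1/4
\end{verbatim}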
\begin{prop}\label{prop:nonFreeCvsD}
Let $G$ be an {infinite} amenable group generated by $p$ elements and their inverses.
Then the principal sub-dominant terms contributing to the growth of $d_n$ and $c_n$ are asymptotically equivalent, except when the group is infinite cyclic.
\end{prop}
\begin{proof}
For an amenable group generated by $p$ elements and their inverses the radius of convergence for $D(z)$ is exactly
$1/2p$. This follows immediately from Definition \ref{defn:amenabilityCogrowth}.
Now from Equation~\ref{eqn:cFromD}, the reduced-cogrowth
series is obtained by composing the cogrowth series with
$$\varphi(z)=\frac{z}{1+(2p-1)z^2}$$ and then multiplying by
$$q(z)=\frac{1-z^2}{1+(2p-1)z^2}.$$
Both of these functions are analytic inside the ball of radius
$1/\sqrt{2p-1}$.
Now
\begin{equation}\label{eq:alternateCogrowthProof}
\varphi\left(\frac{1}{2p-1}\right)=\frac{1}{2p},
\end{equation}
the singularity of $D(z)$. Hence, $1/(2p-1)$ is a singularity of $D(\varphi(z))$, and hence of $C(z)$. Note that if the group is infinite cyclic, then $p=1$ and $1/(2p-1)$ and $1/\sqrt{2p-1}$ are equal. In this scenario the radius of convergence of $\varphi(z)$ is reached at the same moment that
$\varphi(z)$ reaches the radius of convergence of $D(z)$. This means that both $\varphi$ and $q$ contribute to the principal singularity, and this explains why the reduced and non-reduced cogrowth functions
for the infinite cyclic group exhibit such different behaviour.
If $p>1$ then $1/(2p-1)$
is inside the ball of radius $1/\sqrt{2p-1}$ (i.e., inside the region of convergence of $\varphi$ and $q$). Thus, the singularity of $D$ is reached before $z$ approaches the singularities of $\varphi$ and $q$.
In this case the substitutions in Equation~\ref{eqn:cFromD} change the location of the principal singularity, but do not change the type of the singularity, or the form of the principal sub-dominant
term contributing to the growth of the series coefficients.
\end{proof}
\begin{cor}\label{cor:RvsRr}
Suppose $G$ is a finitely generated, infinite amenable group that is not the infinite cyclic group. Then $\RR$ is
asymptotically
equivalent to $\RR'$.
\end{cor}
\begin{rmk}
An alternate proof of the Grigorchuk/Cohen characterisation of amenability is easily constructed from an analysis of the singularities of $C(z)$ and $D(z)$. For example, Equation \ref{eq:alternateCogrowthProof} proves the first result from Definition \ref{defn:amenabilityCogrowth}.
This argument also picks up that the infinite cyclic group presents a special case. Though amenable, $\limsup_{n\rightarrow\infty}c_n^{1/n}\neq |S|-1$.
For this group we have $\RR(n)\sim 0$ while $\RR'(n)\sim n$.
\end{rmk}
\subsection{Sub-dominant behaviour in the cogrowth of $F$
\label{sec:subDomInF}
}
The groups $BS(1,N)$ limit to $\Z\wr\Z$ in the space of marked groups. This implies that the growth of the function $\mathcal{R}'$ and hence $\RR$ for $BS(1,N)$ increases with $N$. This is consistent with Table \ref{tab:differentRn}, since these results do not include scaling constants. This leads to the following result.
\begin{prop}\label{prop:connectBStoThompsons}
If Thompson's group $F$ is amenable, its $\mathcal{R}$ function grows faster than the $\mathcal{R}$ function for any $BS(1,N)$. In particular, it is asymptotically super-polynomial.
\end{prop}
\begin{proof}
By the convergence of $BS(1,N)$ to $\Z\wr\Z$ in the space of marked groups we have that, for any $N$, the function $\RR'$ for $BS(1,N)$ grows slower than the corresponding function for $\Z\wr\Z$. In \cite{stabilityRandomWalk} it is proved that, for finitely generated groups, the probability of return cannot asymptotically exceed the probability of return of any finitely generated subgroup. This implies that, for finitely generated amenable groups, the $\RR'$ function of the group must grow faster than the $\RR'$ function of any finitely generated subgroup. Since there is a subgroup of $F$ isomorphic to
$\Z\wr\Z$, $\RR'(n)$ for $F$ must grow faster than $\RR'(n)$ for $\Z\wr\Z$ and hence $BS(1,N)$.
Since $F$ contains every finite depth iterated wreath product of $\Z$ (\cite{GubaSapir} Corollary 20),
the probability of return for $F$ decays faster than
$$e^{-n^{\frac{d}{d+2}}(\ln n)^{\frac{2}{d+2}}}$$ for any $d$.
Taking the limit as $d$ approaches infinity of the corresponding values for $\RR'$ and then doing the conversion from $\RR'$ to $\RR$ gives the final result.
\end{proof}
Note that if $F$ is non-amenable, then even though it still contains these subgroups, they do not affect the $\RR'$ function. In this scenario it is still true that the return probability for $F$ decays faster than that of the iterated wreath products, because $F$ would have exponentially decaying return probability. For non-amenable groups the return probability does not identify the principal sub-dominant term in $d_n$, and hence does not correlate directly with $\RR'$.
\section{The ERR algorithm}
\label{sec:ERRsection}
We start by summarising the original work
by the first author, Rechnitzer and van Rensburg. Only the details
directly pertinent to the present paper are discussed here, for a more
detailed analysis of the random walk
algorithm and a derivation
of the stationary distribution we refer the
reader to \cite{ERR}. For the sake of brevity the
random walk performed by the algorithm will be referred to as the ERR random walk.
Recall that a group presentation, denoted $\langle S \mid R \rangle$, consists
of a set $S$ of formal symbols (the generators) and a set $R$ of words written
in $S^{\pm 1}$ (the relators) and corresponds to the quotient
of the free group on $S$ by the normal closure of the
relators $R$. In our paper, as in \cite{ERR}, all groups
will be finitely presented: both $S$ and $R$ will be finite.
Furthermore, the implementation of the algorithm assumes both
$S$ and $R$ to be symmetric, that is, $S=S^{-1}$ and $R=R^{-1}$.
In addition, for convenience $R$ is enlarged to be closed under cyclic permutation.
Recall that
$c_n$ counts the
number of reduced words in $S$ of length $n$ which represent
the identity in the group (that is, belong to the normal
closure of $R$ in the free group).
\subsection{The ERR random walk}
The ERR random walk is not a random walk on the Cayley graph of a group, but instead a random walk on
the set of trivial words for the group presentation.
This makes the algorithm extremely easy to implement, since it does not require an easily computable normal
form or even a solution to the word problem.
The walk begins at the empty word, and constructs
new
trivial words from the current trivial word
using one of two moves:
\begin{itemize}
\item (conjugation by $x\in S$).
In this move an element is chosen from
$S$ according to a predetermined probability distribution.
The current word is conjugated
by the chosen generator and then freely reduced to produce
the new candidate word.
\item (insertion of a relator). In this move a relator is
chosen from $R$ according to a predetermined
distribution and inserted into the current word at a position
chosen uniformly at random. In order to maintain the detailed
balance criteria (from which the stationary distribution
is derived)
it is necessary to allow only those insertions which
can be immediately reversed by inserting the inverse of the
relator at the same position. To this end a notion of \emph{left insertion} is introduced:
after the relator is inserted, free reduction is performed only across the left-hand end of the inserted relator.
If after this the word is not freely reduced the move is rejected.
\end{itemize}
Transition probabilities are defined which determine whether
or not the trivial word created with these moves is accepted as the new state. These probabilities involve parameters $\alpha\in\mathbb{R}$ and
$\beta\in (0,1)$ which may be adjusted
to control the distribution of the walk.
Let the current word be $w$ and the candidate word be $w'$.
\begin{itemize}
\item If $w'$ was obtained from $w$ via a conjugation it is accepted as the new
current state with probability
$$\min \left\lbrace 1,
\left(\frac{\left\vert w'\right\vert+1}
{\left\vert w\right\vert+1}\right)^{1+\alpha}
\beta^{\left\vert w'\right\vert-\left\vert w\right\vert}
\right\rbrace.$$
\item If $w'$ was obtained from $w$ via an insertion it is accepted as the new
state with probability
$$\min \left\lbrace 1,
\left(\frac{\left\vert w'\right\vert+1}
{\left\vert w\right\vert+1}\right)^{\alpha}
\beta^{\left\vert w'\right\vert-\left\vert w\right\vert}
\right\rbrace.$$
If $w'$ is not accepted the new
state remains as $w$.
\end{itemize}
These probabilities are chosen so that the distribution
on the set of all trivial words given by
$$\pi\left(u\right)=\frac{\left(\left|u\right|+1\right)^{1+\alpha}
\beta^{\left|u\right|}}
{Z},$$
(where $Z$ is a normalizing constant) can be proved to be the unique stationary
distribution of the Markov chain, and the limiting
distribution of the random walk.
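To make the structure of the walk concrete, here is a minimal Python sketch of one possible implementation (ours, written from the description above; it is not the code of \cite{ERR} nor the implementation used later in this paper). Generators are single letters with upper case denoting inverses, relators are supplied as a symmetric set closed under cyclic permutation, and the selection distributions are taken to be uniform.
\begin{verbatim}
import random

def inv(g):                      # inverse of a generator letter (a <-> A)
    return g.swapcase()

def free_reduce(word):
    out = []
    for g in word:
        if out and out[-1] == inv(g):
            out.pop()
        else:
            out.append(g)
    return ''.join(out)

def symmetrize(relator):
    """All cyclic permutations of a relator and of its inverse."""
    inverse = ''.join(inv(g) for g in reversed(relator))
    return sorted({r[i:] + r[:i] for r in (relator, inverse)
                   for i in range(len(r))})

def err_walk(gens, relators, alpha, beta, steps, seed=0):
    """Word lengths visited by a (simplified) ERR random walk."""
    rng, w, lengths = random.Random(seed), '', []
    for _ in range(steps):
        if rng.random() < 0.5:
            # conjugation move: w -> x w x^{-1}, freely reduced
            x = rng.choice(gens)
            w_new, expo = free_reduce(x + w + inv(x)), 1 + alpha
        else:
            # left insertion of a relator at a uniform position
            r, pos = rng.choice(relators), rng.randint(0, len(w))
            cand = free_reduce(w[:pos] + r) + w[pos:]
            if cand != free_reduce(cand):    # not freely reduced: reject
                lengths.append(len(w))
                continue
            w_new, expo = cand, alpha
        ratio = (((len(w_new) + 1) / (len(w) + 1))**expo
                 * beta**(len(w_new) - len(w)))
        if rng.random() < min(1.0, ratio):
            w = w_new
        lengths.append(len(w))
    return lengths

# Example: Z^2 = <a,b | [a,b]>, with alpha = 0 and beta = 0.30.
lens = err_walk(['a', 'A', 'b', 'B'], symmetrize('abAB'),
                alpha=0.0, beta=0.30, steps=100000)
print(sum(lens) / len(lens))     # mean visited word length
\end{verbatim}
With $\beta$ below the critical value the mean visited word length in such a run should remain bounded, in line with the proposition below.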
The following result is then given.
\begin{prop}[\cite{ERR}]
As $\beta$ approaches $$\beta_c = \frac1{\limsup_{n\rightarrow\infty} (c_n)^{1/n}}$$
the expected value
of the word lengths visited approaches infinity.
\end{prop}
This result leads to the following method for estimating the value of $\beta_c$. For each presentation,
random walks are run with different values of $\beta$. Average
word length is plotted against $\beta$. The results obtained for Thompson's group $F$ are reproduced in Figure \ref{fig:ERRpaperThompsons}. The value of $\beta$ at which the data points diverge gives an indication of
$\beta_c$, and hence the amenability or otherwise of the group.
\begin{figure}
\includegraphics[width=110mm]{graph_thomp1.pdf}
\caption{The results from \cite{ERR} of the ERR algorithm applied to the standard presentation of Thompson's group $F$. Each data point plots the average word length of an ERR random walk against the parameter $\beta$ used.
\label{fig:ERRpaperThompsons}
}
\end{figure}
Random walks were run on presentations for a selection of amenable and non-amenable groups, including Baumslag-Solitar groups, some free product examples whose cogrowth series are known \cite{KouksovFreeProduct}, the genus 2 hyperbolic surface group, a finitely presented group related to the basilica group, and Thompson's group $F$.
The data in Figure \ref{fig:ERRpaperThompsons} appears to show fairly convincingly that the location of $\beta_c$
is a long way from the value of $\frac13$ expected were the group amenable.
It is noted in \cite{ERR}
that a long
random walk may be split into shorter segments,
and the variation in average word lengths of the segments gives an
estimation of the error in the estimated expected word length.
\begin{rmk}\label{rmk:implementation_details}
In the original work reported in \cite{ERR}, the algorithm was coded in {C++}, words were stored as linked lists,
the GNU Scientific Library was used to generate pseudo-random numbers, and {\em parallel tempering} was
used to speed up the convergence
of the random walk. For independent verification the second author
coded the algorithm in Python, kept words as strings, used the Python package \emph{random}, and no
tempering was used. Results obtained were consistent with those in \cite{ERR}. The experimental analysis
and modifications described in this paper use the second author's Python implementation.
\end{rmk}
\section{Investigating Pathological Behaviour\label{sec:pathological_behaviour}}
The theory underpinning the ERR random walk is
complete: the random walk is certain to converge to the stationary
distribution. This does not, however, preclude convergence
happening at a computationally
undetectable rate.
Since there are finitely presented groups with unsolvable word problem, there is no chance of deriving bounds on the
rates of convergence of the walk in
any generality.
In the process of independently verifying the results in \cite{ERR}, however, we were able to identify two properties of
group presentations which appear to slow the rate of convergence.
The first of these is unconnected with the F\o lner function, and
does not pose any problem to the implementation of the ERR algorithm to Thompson's group $F$.
It does, however, refute the claim in (\cite{ERR} Section 3.7) that the method can be successfully applied to infinite presentations.
\subsection{Walking on the wrong group}\label{subsec:wrong_group}
It is easy to see from the probabilistic selection criteria
used by the ERR random walk that moves which increase the word
length by a large amount are rejected with high probability.
This poses a problem for group presentations containing long relators since insertion moves that attempt to insert a long relator will be
accepted much less often than moves which attempt to insert a shorter relator.
The following example makes this explicit.
\begin{lem}
All presentations of the form
$$\left\langle a,b\mid abab^{-1}a^{-1}b^{-1},\;a^n b^{-n-1}\right\rangle$$
describe the trivial group.
\end{lem}
\begin{proof}
Since $a^n=b^{n+1}$ we have $a^nba= b^{n+1}ba=bb^{n+1}a=ba^{n+1}=bab^{n+1}.$
Since $aba=bab$ we have
$a^iba=a^{i-1}bab$ so $a^nba=a^{n-1}bab=a^{n-2}bab^2=\dots =bab^{n}$.
Putting these results together gives $bab^n=bab^{n+1}$ and hence $b$ is trivial. The result follows.
\end{proof}
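As an independent sanity check (not part of the argument above), the triviality of these presentations for small $n$ can be confirmed by coset enumeration, for example using sympy's finitely presented group facilities; the sketch below assumes that \texttt{FpGroup.order()} terminates on these inputs, which it does quickly for small $n$ (larger $n$ may take considerably longer).
\begin{verbatim}
# Check, for small n, that <a,b | abab^{-1}a^{-1}b^{-1}, a^n b^{-(n+1)}>
# presents the trivial group, via coset enumeration in sympy.
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")
for n in range(1, 5):
    G = FpGroup(F, [a*b*a*b**-1*a**-1*b**-1, a**n*b**-(n+1)])
    print(n, G.order())          # expect 1 for every n
\end{verbatim}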
By increasing $n$ we can make the second relator arbitrarily large
without affecting the group represented by the presentation, or the
group elements represented by the generators. This implies that
ERR random walks for each of these presentations should converge
to the same stationary distribution.
Changing the presentation, however, does change the number of
steps in the ERR random walk needed to reach certain trivial words (such as the word `$a$').
ERR random walks were performed on these presentations for
$n= 1, 2, \dots,19$. As well as recording the average
word length of words visited, the number of \emph{accepted}
insertions of each relator was recorded.
\begin{table}
\begin{center}
\begin{tabular}
{|>{\centering\arraybackslash}p{1cm}|>{\centering\arraybackslash}p{2cm}|>
{\centering\arraybackslash}p{3cm}|>{\centering\arraybackslash}p{3cm}|}
\hline
$n$ & number of steps & number of accepted insertions of small
relator & number of accepted insertions for big relator \\
\hline
1 & $2.0\times 10^8$ & $2977228$ & $7022772$ \\
\hline
2 & $3.6\times 10^8$ & $4420185$ & $5579815$ \\
\hline
3 & $6.1\times 10^8$ & $6323376$ & $3676624$ \\
\hline
4 & $9.0\times 10^8$ & $8016495$ & $1983505$ \\
\hline
5 & $1.2\times 10^9$ & $9088706$ & $911294$ \\
\hline
6 & $1.4\times 10^9$ & $9621402$ & $378598$ \\
\hline
7 & $1.5\times 10^9$ & $9850251$ & $149749$ \\
\hline
8 & $1.7\times 10^9$ & $9943619$ & $56381$ \\
\hline
9 & $1.8\times 10^9$ & $9977803$ & $22197$ \\
\hline
10 & $1.9\times 10^9$ & $9991680$ & $8320$ \\
\hline
11 & $2.1\times 10^9$ & $9997122$ & $2878$ \\
\hline
12 & $2.2\times 10^9$ & $9998720$ & $1280$ \\
\hline
13 & $2.2\times 10^9$ & $9999585$ & $415$ \\
\hline
14 & $2.3\times 10^9$ & $9999938$ & $62$ \\
\hline
15 & $2.4\times 10^9$ & $10000000$ & $0$ \\
\hline
16 & $2.6\times 10^9$ & $10000000$ & $0$ \\
\hline
17 & $2.7\times 10^9$ & $10000000$ & $0$ \\
\hline
18 & $2.8\times 10^9$ & $10000000$ & $0$ \\
\hline
19 & $2.9\times 10^9$ & $10000000$ & $0$ \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:trivial_relator_acceptences} The ERR
algorithm applied to the trivial group with presentation
$\left\langle a,b \mid aba=bab,\;a^n=b^{n+1} \right\rangle$ for various $n$. As $n$ increases, the longer relator is successfully
inserted less frequently.}
\end{table}
Table \ref{tab:trivial_relator_acceptences} shows the sharp decline in the number
of accepted insertions of the second relator as $n$ increases.
Indeed, for $n>14$ there were no instances in which the longer relator
was successfully inserted. Unsurprisingly, walks for large $n$ did not converge to the same distribution as those where $n$ was small, and for large $n$ the data did not accurately predict the asymptotic growth rate of the cogrowth function. For these $n$ the ERR random walk was actually
taking place on $\langle a,b\mid abab^{-1}a^{-1}b^{-1} \rangle$, which is a presentation of the 3-strand braid group, which is non-amenable.
Note that, given enough time, the longer relator would be
successfully sampled, and that an infinite random walk is still
guaranteed to converge to the theoretical distribution for the trivial group.
Such convergence,
however, may take a computationally infeasible amount of time.
\begin{claim}
The presence of long relators in the input presentation slows
the rate at which an ERR random walk converges to the stationary distribution. Therefore, the ERR method cannot be reliably extended to accept infinite
presentations.
\end{claim}
This result is not surprising.
In \cite{BenliGrigHarpe} an infinitely presented
amenable group is given for which any truncated presentation
(removing all but a finite number of relators) is non-amenable.
The ERR method could not expect to succeed on this group even if
long relators were sampled often; since the ERR random walk can only be run for a finite time there can
never be a representative sampling of an infinite set of relators, so ERR would incorrectly conclude this group is non-amenable.
The pathological presentations of the trivial group
studied here form
a sequence of presentations for amenable (trivial) groups which
approach a non-amenable group in the space of marked groups.
The failure of the ERR method to predict amenability for
these groups suggests that one does not need
particularly elaborate or large presentations to produce pathological behaviour.
However, we remark that this behaviour is easily monitored. In addition to counting
the number of attempted moves of the walk, one should record the relative number of successful insertions of each relator.
In the case of Thompson's group $F$ the two relators have similar lengths, and in our experiments both were sampled with comparable frequency.
Further analysis of this phenomenon appears in \cite{CamPhD}.
\subsection{Sub-dominant behaviour in cogrowth.\label{subsec:subdom}}
Recall that the solvable Baumslag-Solitar groups $BS(1,n)=\langle a,t\mid tat^{-1}a^{-n}\rangle$
are the only two-generator, one-relator amenable groups \cite{OneRelatorAmenable}; for each of these groups $\beta_c=1/3$.
In \cite{ERR} walks were run on $BS(1,1)=\Z^2,\;BS(1,2)$ and $BS(1,3)$ and for these groups
the random walk behaved as predicted with divergence occurring at the moment when $\beta$ exceeded $\beta_c$.
It may be surprising then to see the output of some ERR walks run on $BS(1,7)$ shown in Figure \ref{fig:ERR-BS17}.
\begin{figure}
\includegraphics[width=110mm]{ERRgraphBS17.pdf}
\caption{A graph (as in \cite{ERR}), of average word length of ERR random walks plotted against the parameter $\beta$. The orange points come from walks where $\alpha=3$, and the blue points come from walks where $\alpha=0$. The vertical line at $1/3$ marks the expected asymptote.
\label{fig:ERR-BS17}
}
\end{figure}
It is clear that, for this group, the divergence for $\beta>\beta_c$ predicted by the theory is not occurring. This is further seen in Figure \ref{fig:bs17-distribution}, which shows the progression over time of one of the random walks used to generate Figure \ref{fig:ERR-BS17}.
\begin{figure}
\includegraphics[width=110mm]{BS1N_over_time.pdf}
\caption{\label{fig:bs17-distribution}
The distribution of ERR random walks on $BS(1,7)$ with $\alpha=3$ and $\beta =0.34$.
This is a plot of word length against number of steps taken. The data represents ten ERR random walks overlaid on top of each other. As can be seen, none of the walks diverged.
Each dot represents the average
word length over 10000 accepted relator insertions. There is no divergence at this $\beta$ value, even though the group is amenable.
}
\end{figure}
The results in Figure \ref{fig:bs17-distribution} show the
word lengths visited for ten ERR random walks (superimposed) performed on $BS(1,7)$, with $\alpha=3$ and $\beta=0.34$.
Since the group has only a single relator, which was successfully inserted into the word 10000 times, it is not an error of the type identified in Subsection~\ref{subsec:wrong_group}.
The ERR method relies on the divergence of the average word length to identify $\beta_c$, so application of the method in this case will not accurately identify the amenability of
$BS(1,7)$.
Divergence of the ERR random walk (when $\beta>\beta_c$) relies on the abundance of long trivial words. For most presentations, at every point in an ERR walk there are more moves which lengthen the word than shorten it, but the probabilistic selection criteria ensure balance. More specifically, the parameter $\beta$ imposes a probabilistic barrier which increases exponentially with the attempted increase in word length.
When $\beta >\beta_c$ this exponential cap is insufficient, and the word length diverges.
Recall that for a given word length $n$ the function $\RR(n)$ quantifies how many reduced-trivial words there are of length similar to $n$.
The results in Table \ref{tab:differentRn} imply that, for many groups, large word lengths must be reached before the asymptotic growth rate is reflected by a local abundance of longer trivial words.
We have noted in Section \ref{sec:subDomInF} that the
convergence properties of $BS(1,N)$ in the space of marked groups requires $\RR(n)$ to grow more quickly as $N$ increases. We now show that the growth rate of
$\RR(n)$ is sufficient to cause the pathological
behaviour noted above.
To this end we postulate a hypothetical cogrowth function for which
we can explicitly identify and control $\RR(n)$.
\begin{example}
\label{ex:fictional_cogrowth}
Suppose that for some group on two generators and $q>0,\;p\in (0,1)$, the reduced-cogrowth is known to be exactly $$c_n=3^{n-qn^p}.$$
Then $
\limsup_{n\rightarrow\infty} c_n^{1/n}
= 3$
and so the group is amenable.
It may easily be verified by the methods outlined in Proposition
\ref{prop:ProvingRn} that
$$\RR(n)=\left( 9\log(3)q p2^p n\right)^{\frac{1}{1-p}}.$$
Note that as $p$ approaches $1$, the exponent ${\frac{1}{1-p}}$ approaches infinity. This increases both the degree of the polynomial in $n$, and the coefficient $ \left(9\log(3)q p2^p\right)^{\frac{1}{1-p}}$.
Even though we do not know a group presentation with
precisely this cogrowth function,
by varying $p$ and $q$
this hypothetical example models the groups listed in Table~\ref{tab:differentRn}.
Figure \ref{fig:pathological1}
shows the effect of increasing the parameter $p$ on the ERR random walk
distribution. Note that this figure is not the output of any computer simulation, rather it models the distributions for an ERR random walk on an amenable group with the hypothetical cogrowth function, for $\alpha=0,\beta=0.335$ and $q=1$.
\begin{figure}
\includegraphics[width=110mm]{pathologicalExample.png}
\caption{
\label{fig:pathological1}
Graphs of $c_n(n+1) 0.335^n$ for $c_n=3^{n-n^p}$.
}
\end{figure}
Recall that for $\beta<\beta_c$ the theoretical distribution of word lengths visited by the ERR random walk is
$$\Pr(@n)=\frac{c_n(n+1)^{\alpha+1}\beta^n}{Z}$$ where $Z$ is a normalizing
constant.
For $\beta>\beta_c$ the distribution cannot be normalised. In this case the function $c_n(n+1)^{\alpha+1}\beta^n$ still
contains information about the behaviour of the walk. If the
random walk reaches a word of length $x$ then the relative heights of $c_n(n+1)^{\alpha+1}\beta^n$ either side of $x$
describe the relative probabilities of increasing or decreasing
the word length in the next move.
From Figure \ref{fig:pathological1} we see that, for $p=0.3$,
the slope of $c_n(n+1)^{\alpha+1}\beta^n$ is always positive, so at all word lengths probabilities are uniformly in favour of increasing the word length.
However, as $p$ increases (and the growth rate for $\RR(n)$ increases) a `hump' appears at short word lengths. A random walk for such a group would tend to get stuck in the `hump'.
Indeed, for $p=0.39$ the distribution looks much less like
a walk diverging towards infinite word lengths and much more like the
distributions for $BS(1,7)$ used to produce Figure~\ref{fig:ERR-BS17}, where the average word length in the ERR walk remained finite.
\end{example}
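The qualitative content of Figure \ref{fig:pathological1} is easy to re-create; the sketch below (ours) evaluates $\log\left(c_n(n+1)\beta^n\right)$ for the hypothetical cogrowth function, with $q=1$, $\alpha=0$ and $\beta=0.335$ as above, and reports where the `hump' appears.
\begin{verbatim}
# Model distribution c_n (n+1) beta^n with c_n = 3^(n - n^p), on a log scale.
from math import log

def logf(n, p, beta=0.335):
    return (n - n**p) * log(3) + log(n + 1) + n * log(beta)

for p in (0.30, 0.39):
    vals = [logf(n, p) for n in range(1, 2001)]
    peak = next((i for i in range(1, 2000) if vals[i] < vals[i - 1]), None)
    if peak is None:
        print(p, 'increasing throughout the range considered')
    else:
        print(p, 'local maximum (hump) near word length', peak)
\end{verbatim}
For $p=0.3$ the sequence increases throughout, while for $p=0.39$ a local maximum appears at short word lengths, matching the behaviour described in the example above.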
The distributions in Figure \ref{fig:pathological1} exhibit a
mechanism which can explain
anomalous behaviour previously observed.
When $\RR(n)$ increases quickly the ERR random walk may adhere to the behaviour predicted by the theory and simultaneously give anomalous results about the asymptotics of the cogrowth function. In this sense, if \cite{ERR} contains incorrect conclusions, it is because the original ERR algorithm, as initially proposed, asks the wrong question. The ERR walk does not measure asymptotic properties of the cogrowth function; it provides information about the cogrowth function
only for word lengths visited by the walk. This observation forms the basis of Section
\ref{sec:appropriation}.
Note that increasing the parameter $\alpha$ pushes the algorithm towards
longer word lengths. Thus, any pathological behaviour caused
by the growth of $\RR(n)$ could theoretically be
overcome by increasing $\alpha$.
If $\RR(n)$ is known, then it may be used to calculate
how large words have to get before divergence occurs. A method to do this is outlined by the following example.
Suppose that ERR
random walks are run on a two generator group with $\beta=0.34$ (as in Figure \ref{fig:bs17-distribution}). If we eliminate the $\alpha$ term of the stationary distribution (which, being polynomial, becomes insignificant for long word lengths) the divergence
properties are controlled by the contest between
$0.34^n$ and $c_n$. That is, divergence will occur when
$c_{2n+2}/c_{2n}>1/0.34^2$; since $1/0.34=3-1/17$, the word length at which divergence will occur is
$\RR(17)$.
If this value is known $\alpha$ may be increased
until the walk visits words of this length.
This process, however, requires specific information about $\RR(n)$ including all scaling constants. It is hard to imagine a group for which the sub-dominant cogrowth behaviour was known to this level of precision, but dominant cogrowth behaviour (and hence the amenability question for the group) was still unknown.
\subsection{Reliability of the ERR results for Thompson's group $F$}
In Proposition~\ref{prop:connectBStoThompsons} we saw that the $\mathcal{R}$ function for $F$ grows faster than that of any iterated wreath product of $\Z$'s, and certainly faster than that of any $BS(1,N)$ group. Since the ERR method fails to predict the amenability of these groups for $N$ as low as $7$, and this behaviour is consistent with the pathological behaviour caused by $\RR$, we conclude that the data encoded in Figure \ref{fig:ERRpaperThompsons} does not imply the non-amenability of $F$, and so the conclusion of the paper \cite{ERR} that $F$ appears to be non-amenable based on this data is unreliable.
\section{Appropriation of the ERR algorithm \label{sec:appropriation}}
The original implementation of the ERR random walk uses only the
mean length of words visited in an attempt to estimate asymptotic behaviour of the cogrowth function.
In this section we show that,
using the full
distribution of word lengths visited, it is possible to
estimate specific values
of the cogrowth function.
When doing a long random walk, the probability of arriving at a word of
length $n$
can be estimated by multiplying the number of words of that length by the asymptotic probability, $\pi(n)$, that the walk is at any particular word of this length.
That is,
\[
\Pr(@n)\approx c_n\pi(n)=c_n\frac{\left(n+1\right)^{\alpha+1}\beta^{n}}{Z}.
\]
The proportion of the time that the walk spends at words of length
$n$, however, gives us another estimate of $\Pr(@n)$. If we let
$W_n$ be the number of times the walk visits a word of
length $n$ then we have that
\[
\Pr(@n)\approx\frac{W_n}{Y},
\]
where $Y$ is equal to the length of the walk. From this we obtain
\[
\frac{W_n}{Y}\approx c_n\frac{\left(n+1\right)^{\alpha+1}\beta^{n}}{Z}.
\]
For two different lengths, $n$ and $m$, we obtain
\begin{eqnarray*}
\frac{W_m}{W_n} & \approx & \frac{c_m\left(m+1\right)^{\alpha+1}\beta^{m}}{c_n\left(n+1\right)^{\alpha+1}\beta^{n}}.
\end{eqnarray*}
Thus,
\begin{equation}
c_m\approx c_n
\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m}.
\label{eqn:cogrowth_estimate}
\end{equation}
Equation~\ref{eqn:cogrowth_estimate} provides a method of estimating the value
of $c_m$ using some known or previously estimated value of $c_n$
and the distribution
of word lengths visited from an ERR random
walk.
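A direct implementation of this recursion takes only a few lines. The sketch below is ours and is intended only to illustrate the mechanics (it is not the code used to produce the tables that follow); it assumes the visit counts are supplied as a dictionary keyed by word length.
\begin{verbatim}
def estimate_cogrowth(W, alpha, beta, c0=1.0):
    """Recursive use of Equation (cogrowth_estimate).

    W: dict mapping word length -> number of visits, including length 0.
    Each length present in W is estimated from the previous one.
    """
    lengths = sorted(W)
    c = {lengths[0]: c0}                     # c_0 = 1 (the empty word)
    for n, m in zip(lengths, lengths[1:]):
        c[m] = (c[n] * W[m] / W[n]
                * ((n + 1) / (m + 1))**(alpha + 1)
                * beta**(n - m))
    return c

# With the counts for lengths 0 and 10 from the walk described below
# (alpha = 3, beta = 0.3) this gives a value close to the exact c_10 = 20:
W = {0: 32547326274, 10: 56273373521}
print(estimate_cogrowth(W, alpha=3, beta=0.3)[10])   # approx 20.0
\end{verbatim}
Applied to the counts for lengths $0$ and $10$ reported in Table \ref{table:singleThompsonsGroupDist} below, it returns a value close to the exact $c_{10}=20$.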
Let us try a quick implementation of this for Thompson's group $F$,
for which the first 48 cogrowth terms are known \cite{Haagerup}.
We ran an ERR random walk of length $Y=1.8\times 10^{11}$ steps
on the standard presentation (Equation~\ref{eqn:Fpresentation}) for $\alpha=3$ and $\beta=0.3$. The frequency of each word
length visited is shown in Table~\ref{table:singleThompsonsGroupDist}.
\begin{table}
\[\begin{array}{|c|r|}
\hline
n & W_n \\
\hline
0 & 32547326274\\
10 & 56273373521 \\
12 & 31613690578\\
14 & 26477475739\\
16 & 13576713156\\
18 & 9684082360\\
20 & 5444250723\\
22 & 3360907182\\
24 & 1905434239\\
26 & 1121735814\\
28 & 638093341\\
30 & 367320461\\
32 & 208025510\\
34 & 118432982\\
36 & 65983874\\
38 & 37210588\\
40 & 20642387\\
42 & 11332618\\
44 & 6243538\\
46 & 3421761\\
48 & 1863477\\
\hline\end{array}\]
\caption{Data collected from an ERR random walk of length $Y=1.8\times 10^{11}$ with $\alpha=3$ and $\beta=0.3$ on the standard presentation
for Thompson's group $F$.
\label{table:singleThompsonsGroupDist}
}
\end{table}
\begin{table}[h]
\[\begin{array}{|c|r|r|c|}
\hline
n & \text{exact} & \text{estimate} & \text{\begin{tabular}{c}percentage\\ error\end{tabular}}\\
\hline
10 & 20 & 19.9988 & .006 \\
12 & 64 & 63.9928 & .01\\
14 & 336 & 335.969 & .01\\
16 & 1160 & 1160.23 & .02\\
18 & 5896 & 5893.13 & .05\\
20 & 24652 & 24667.2 & .06\\
22 & 117628 & 117588 & .03\\
24 & 531136 & 530650 & .09\\
26 & 2559552 & 2551340 & .3\\
28 & 12142320 & 12116600 & .2\\
30 & 59416808 & 59353400 & .1\\
32 & 290915560 & 290848000 & .02\\
34 & 1449601452 & 1453990000 & .3\\
36 & 7269071976 & 7206930000 & .8\\
38 & 36877764000 & 36583500000 & .8\\
40 & 1.8848\times 10^{11} & 1.8461\times 10^{11} & 2\\
42 & 9.7200\times 10^{11} & 9.3078\times 10^{11} & 4\\
44 & 5.0490\times 10^{12} & 4.7504\times 10^{12} & 6\\
46 & 2.6423\times 10^{13} & 2.4308\times 10^{13} & 8\\
48 & 1.3920\times 10^{14} & 1.245\times 10^{14} & 10\\
\hline\end{array}\]
\caption{Estimate of the first 48 terms of the cogrowth function
for Thompson's group $F$, constructed from an ERR random walk of $Y=1.8\times 10^{11}$ steps with $\alpha=3$ and $\beta=0.3$. Exact values from \cite{Haagerup}.
\label{tab:unsophisticatedThompsons48}
}
\end{table}
We used Equation~\ref{eqn:cogrowth_estimate} and the data in Table~\ref{table:singleThompsonsGroupDist} to estimate
$c_{10}$ from $c_0$, and then this estimate was used to estimate
$c_{12}$. (Note that the shortest non-empty trivial words are of length 10, and since the relators in the standard presentation of $F$ have even length there are no trivial words of odd length.)
Using the data and the previous estimate for $c_{n-2}$, estimates were made of the first 48 terms, and these compared to the correct value in
Table \ref{tab:unsophisticatedThompsons48}.
This implementation of Equation~\ref{eqn:cogrowth_estimate} may be refined in several ways. Firstly, in many groups we have exact
initial values of $c_n$ for more than the trivial result $c_0=1$. In this
case these initial values can be used to estimate subsequent terms. In this paper we are primarily concerned with testing the efficacy of this
method for determining cogrowth, and so do not make use of such data.
Secondly, in the above implementation the only cogrowth value
used to estimate $c_n$ was $c_{n-2}$. Instead, estimates
for $c_n$ may be made from $c_k$ for any $k<n$. These estimates
may then be averaged to form an estimate for $c_n$. Note,
however, that if only one ERR random walk is used, and each of the $c_k$ is itself estimated from previous values of the same distribution there may be issues with interdependence.
This leads naturally to the following refinement --- to obtain several independent estimates for a given cogrowth value
several ERR random walks can be run with different
values for the parameters $\alpha $ and $\beta$.
\subsection{The ERR-R algorithm.}
The ERR-R algorithm accepts as input a group presentation and
the initial cogrowth value
$c_0=1$. As above, recursive application of
Equation~\ref{eqn:cogrowth_estimate} is used to produce estimates for longer
word lengths. However, in each step previous estimates for a range of $c_n$ are used to produce new estimates.
A detailed analysis of the error incurred with each application of Equation \ref{eqn:cogrowth_estimate} is performed in
Section \ref{subsec:error_analysis}. All error bounds which appear in subsequent graphs are constructed using these techniques.
Unsurprisingly, the error analysis in Section \ref{subsec:error_analysis} predicts that the largest errors are incurred
when data is used from the tails of
random walk distributions. Ideally then, a separate random walk should be run for each $c_n$, with parameters $\alpha$ and
$\beta$ chosen so that the sampled word lengths occupy the peaks
of the distribution.
If many estimates are to be made this is computationally infeasible. Instead we performed ERR random walks
using a range of
$\alpha$ and $\beta$ values, which can be chosen so that all word lengths of interest are visited often.
When estimating $c_m$, one estimate was made from each random walk distribution and from each $c_n$ with $m-100<n<m$. To avoid using the tails of distributions, only data points which were greater than 10\% of the maximum height were used.
Using Equation~\ref{eqn:errorEstimate} each estimate was assigned a weight equal to the inverse of the estimated error.
The final value for $c_m$ was taken as the weighted average of the estimates, and the error in $c_m$ was taken to be the
weighted average of the individual error estimates.
Random walk data was obtained as before, using the second author's Python code as described in Remark~\ref{rmk:implementation_details}.
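In code, the weighting step amounts to the following (a sketch of ours; the parameter choices and data handling of the actual runs are as described above).
\begin{verbatim}
def combine_estimates(estimates):
    """Weighted average of (value, error) pairs, weight = 1/error,
    as used for the final ERR-R value of each c_m."""
    weights = [1.0 / err for _, err in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    error = sum(w * e for w, (_, e) in zip(weights, estimates)) / total
    return value, error

# e.g. three independent estimates of the same cogrowth value:
print(combine_estimates([(19.9, 0.4), (20.1, 0.2), (20.4, 0.8)]))
\end{verbatim}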
\subsection{Application to the examples in Section~\ref{sec:pathological_behaviour}}
The ERR-R algorithm can be used to analyse in more detail the pathological behaviours identified in this paper.
Unsurprisingly, for the presentations of the trivial group given in Subsection \ref{subsec:wrong_group}, where the long relator is effectively never inserted, the ERR-R estimates of the cogrowth values align closely with those of the three-strand braid group.
For $BS(1,N)$ we can use estimates of initial cogrowth to analyse how $\RR$ increases with $N$. This is shown, for example in Figure \ref{fig:nthRootBS1N} which exhibits the behaviour predicted by the convergence to $\Z\wr\Z$ in the space of marked groups.
Further analysis of these presentations will appear in \cite{CamPhD}.
\begin{figure}
\includegraphics[width=110mm]{data_processing_BS1N.png}
\caption{
Estimates for $c_n^{1/n}$ for the groups $BS(1,N)$, $N=2\dots 7$. As $N$ increases the curves take longer to approach the asymptote.
\label{fig:nthRootBS1N}
}
\end{figure}
\subsection{Application to surface group}
The fundamental group of a surface of genus 2 has presentation $\langle
a,b,c,d\mid [a,b][c,d]
\rangle$.
The cogrowth of this group has received a lot of attention, and good upper and lower bounds are known for the asymptotic rate of growth \cite{Gouezel,Nag}.
ERR random walks were run on this surface group with $\alpha=3,\;30,\;300$
and $\beta=0.281,\;0.286,\;0.291,\dots,0.351$. Estimates were made for $c_n$ as well as the error $\Delta c_n$.
The resultant upper and lower bounds for $c_n^{1/n}$ are shown in
Figure \ref{fig:nthRootSurface}.
\begin{figure}
\includegraphics[width=110mm]{surface_group_estimates.png}
\caption{Upper and lower bounds for the $n$-th root of the cogrowth function
for the fundamental group of a surface of genus 2 as calculated from ERR random walks. The horizontal lines (indistinguishable at this scale) identify the known upper and lower bounds. Note that after 12000 recursive applications of Equation~\ref{eqn:cogrowth_estimate} the error in the $n$-th root is still only approximately 0.01. \label{fig:nthRootSurface}}
\end{figure}
\subsection{Application to Thompson's group $F$}
We now apply the more sophisticated implementation of the method to $F$. Recall that we can compare the first 48 values with the exact values obtained by Haagerup {\em et al.}; our method, however, allows us to go much further than this.
ERR random walks were run on $F$ with $\alpha=3,13,23,33,53,63$ and $\beta=0.28,0.29,\dots 0.37$.
Collection of experimental data is ongoing. Table \ref{tab:sophisticatedThompsons48} shows comparisons between estimates for $c_n^{1/n}$ and the actual values, for $n\leq 48$, as well as the estimates for the error obtained from the experimental data.
\begin{table}
\[\begin{array}{|c|r|r|c|c|}
\hline
n & \text{exact} & \text{estimate} & \text{\begin{tabular}{c} error\ (\%)\end{tabular}}& \text{\begin{tabular}{c}predicted\\ error\ (\%)\end{tabular}}\\
\hline
10 & 20 & 19.9996 & 0.002 & .03\\
12 & 64 & 63.9981 & 0.003 & 0.06\\
14 & 336 & 335.999 & 0.0002& 0.07\\
16 & 1160 & 1159.96 & 0.003& 0.1\\
18 & 5896 & 5895.98 & 0.0003& 0.1\\
20 & 24652 & 24653.1 & 0.005& 0.1\\
22 & 117628 & 117625 & 0.003& 0.2\\
24 & 531136 & 531098 & 0.007& 0.2\\
26 & 2559552 & 2558950 & 0.02& 0.2\\
28 & 12142320 & 12138200 & 0.03& 0.3\\
30 & 59416808 & 59408300 & 0.01& 0.3\\
32 & 290915560 & 290861000 & 0.02& 0.3\\
34 & 1449601452 & 1449260000 & 0.02& 0.3\\
36 & 7269071976 & 7268550000 & 0.007& 0.4\\
38 & 36877764000 & 36876700000 & 0.003& 0.5\\
40 & 1.8848\times 10^{11} & 1.88491 \times 10^{11} & 0.003& 0.5\\
42 & 9.7200\times 10^{11} & 9.7205 \times 10^{11} & 0.005& 0.5\\
44 & 5.0490\times 10^{12} & 5.05097\times 10^{12} & 0.04& 0.6\\
46 & 2.6423\times 10^{13} & 2.64353\times 10^{13} & 0.05& 0.6\\
48 & 1.3920\times 10^{14} & 1.39246\times 10^{14} & 0.03& 0.7\\
\hline\end{array}\]
\caption{Estimate of the first 48 terms of the cogrowth function
for Thompson's group $F$, constructed from 60 ERR random walks. Exact values from \cite{Haagerup}.
\label{tab:sophisticatedThompsons48}
}
\end{table}
\begin{rmk}
Table \ref{tab:sophisticatedThompsons48} shows a marked increase in the accuracy of the estimates over those of Table \ref{tab:unsophisticatedThompsons48}. This suggests the method of using multiple distributions and weighted averages is effective. Note that there are approximately $10^{14}$ reduced trivial words of length 48, so the walks could not possibly have visited each one. The sample of words visited by the walks seems to reflect the space as a whole reasonably accurately.
\end{rmk}
Figure \ref{fig:thompsonsNthRoot} shows our estimates for upper and lower bounds of $c_n^{1/n}$ for $n\leq 2000$.
\begin{figure}
\includegraphics[width=110mm]{2000Thompsons.pdf}
\caption{
Estimates of $c_n^{1/n}$ for Thompson's group $F$ for $n\leq 2000$, using the ERR-R method.
The figure
includes upper and lower bounds, but at this scale the estimated error
is too small for the bounds to be distinguished.
\label{fig:thompsonsNthRoot}}
\end{figure}
\subsection{Error analysis}\label{subsec:error_analysis}
Here we describe a method by which the error in the cogrowth estimates may be estimated. We stress that this is a statistical measurement of error, rather than a theoretical bound.
Recall Equation~\ref{eqn:cogrowth_estimate}.
Suppose that $c_n$ is known up to $\pm\Delta c_n$,
and that the error in the measurements $W_m$ and $W_n$ are
$\pm\Delta W_m$ and $\pm\Delta W_n$ respectively. Then,
from elementary calculus, the error in
$c_m$ is given by
\begin{align}
\nonumber
\Delta c_m
\approx &\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta c_n\\
\nonumber
&+ \frac{c_n}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta W_m\\
\nonumber
&+ c_n\frac{W_m}{W_n^2}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta W_n\\
\nonumber
=&c_n
\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m}
\left(
\frac{\Delta c_n}{c_n}
+\frac{\Delta W_m}{W_m}
+\frac{\Delta W_n}{W_n}
\right)\\
\approx&c_m
\left(
\frac{\Delta c_n}{c_n}
+\frac{\Delta W_m}{W_m}
+\frac{\Delta W_n}{W_n}
\right).\label{eqn:errorEstimate}
\end{align}
Hence the proportional error in the estimate of $c_m$ is
approximately equal to the sum of the proportional errors in $c_n,\,W_m$ and $W_n$. It is clear from this that if Equation~\ref{eqn:cogrowth_estimate} is used recursively (building new
estimates based on previously estimated cogrowth values) the proportional
error in $c_n$ is certain to increase. Note that the factor controlling the rate of growth in the proportional error of the
estimates is the proportional error $\Delta W_n/W_n$. If this
is constant as $n$ increases, the proportional error in $c_n$ will grow linearly with $n$.
To calculate useful error margins for $c_n$ it is necessary to quantify $\Delta W_n$. Here we employ the same method
used in the ERR paper; walks are split into $M$ segments and
the number of times the walk visits words of length $n$ is recorded
for each segment. Let $x_{i,n}$ denote the number of times the walk
visited words of length $n$ in the $i$th segment. Then $W_n$ is taken to be the average of $x_{i,n}$ for $i=1\dots M$ and the error in
$W_n$ is calculated from the statistical variance of these values,
\begin{equation}\label{eqn:errorInWn}
\Delta W_n=\sqrt{\frac{\var\lbrace x_{i,n}\rbrace_{1\leq i\leq M}}{M-1}}.
\end{equation}
\begin{example}
Equations \ref{eqn:errorEstimate} and \ref{eqn:errorInWn} were used to produce the error estimates reported in
Table~\ref{tab:sophisticatedThompsons48}.
Note that the estimated error is much larger than the actual error.
\end{example}
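For completeness, the two error formulas translate directly into code (again a minimal sketch of ours rather than the analysis code itself; here the sample variance is used for $\var$ in Equation \ref{eqn:errorInWn}).
\begin{verbatim}
from statistics import variance

def delta_W(segment_counts):
    """Equation (errorInWn): error in W_n from M segment counts x_{i,n};
    the sample variance is used here."""
    M = len(segment_counts)
    return (variance(segment_counts) / (M - 1)) ** 0.5

def delta_c(c_m, c_n, dc_n, W_m, dW_m, W_n, dW_n):
    """Equation (errorEstimate): propagated error in the estimate of c_m."""
    return c_m * (dc_n / c_n + dW_m / W_m + dW_n / W_n)
\end{verbatim}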
\subsection{Error in the $n$-th root of $c_n$}
We have noted that recursive uses of Equation~\ref{eqn:cogrowth_estimate} will result in an increasing proportional error in $c_n$.
However, it is the $n$-th root of $c_n$ which reflects the amenability of a group. Let
$\gamma_n=c_n^{1/n}$ and
$\Delta \gamma_n$ denote the error of the estimate for $\gamma_n$.
Once again from elementary calculus we obtain that for a
given $n$
\begin{align}
\nonumber
\Delta \gamma_n
&\approx\frac{1}{n}c_n^{\frac{1}{n}-1}\Delta c_n\\
\nonumber
&=\frac{1}{n}c_n^{\frac{1}{n}}\frac{\Delta c_n}{c_n}\\
\nonumber
&=\gamma_n\frac{1}{n}\frac{\Delta c_n}{c_n}\\
\text{and so }\frac{\Delta \gamma_n}{\gamma_n} &\approx\frac{1}{n}\frac{\Delta c_n}{c_n}.\label{eqn:errorInNthRoot}
\end{align}
Thus, if $\frac{\Delta c_n}{ c_n}$ increases at most linearly, $\frac{\Delta \gamma_n}{\gamma_n}$ can be expected to remain constant.
The values for $c_n$ grow exponentially, so a linearly increasing proportional error in $c_n$ corresponds with a massive increase in the absolute error in $c_n$. In contrast, $\gamma_n$ approaches a constant, so the proportional error depends linearly on the absolute error.
Thus it is not surprising that our experimental results show that even when the error in
cogrowth estimates grows large, the error in
the $n$-th root grows very slowly.
\section{Conclusion}
Several ideas emerge from this study.
Firstly, researchers performing experimental mathematics to determine the amenability of a group need to take care that their algorithm is not susceptible to interference from sub-dominant behaviours. For the reduced-cogrowth function the sub-dominant behaviour is identified by $\RR$. Amenability is an asymptotic property, and
the interference of sub-dominant behaviours on experimental algorithms can be subtle and nuanced. In particular, we have shown that, if Thompson's group $F$ is amenable, its function $\RR$ grows faster than any polynomial. This implies that the prediction of non-amenability of $F$ in \cite{ERR} is unreliable.
We have also shown that, despite potential inaccuracies in estimates of asymptotics, the ERR-R method can produce accurate results for initial cogrowth values.
These
are interesting in their own right. Indeed, if Thompson's group is not amenable, then its $\RR$ function need not be super-polynomial and results from experimental methods might well inform the construction of conjectures regarding cogrowth.
In this context the original benefits of the ERR algorithm still stand:
it requires no group theoretic computational software, no solution to the word problem, and remains a computationally inexpensive way to quickly gain insight into the cogrowth function of a finitely presented group.
\section*{Acknowledgements}
The authors wish to thank Andrew Rechnitzer and
Andrew Elvey-Price
for helpful feedback on this work.
\section{\boldmath Axion solution of the strong $CP$ puzzle and the axion mass}
Already in the early days of Quantum Chromodynamics (QCD) it was realised that the most generic
Lagrangian of QCD contained also a term of the form
${\mathcal L}_{\rm QCD} \supset -
\frac{\alpha_s}{8\pi}\, \bar\theta \,G_{\mu\nu}^b \tilde{G}^{b,\mu\nu}$,
where $\alpha_s$ is the strong coupling, $G_{\mu\nu}^b$ is the gluonic field strength, $\tilde{G}^{b,\mu\nu}$
its dual, and $\bar\theta \in [-\pi,\pi]$ an angular parameter. This term violates parity ($P$) and time-reversal ($T$)
invariances
and, due to the $CPT$ conservation theorem, also $CP$ invariance. Consequently, it induces $CP$ violation in
flavor-diagonal strong interactions,
notably non-zero electric dipole moments of nuclei. However, none has been detected to date. The best constraint
currently comes from the electric dipole moment of the neutron, which is bounded by $|d_n|< 2.9\times 10^{-26} e$\,cm.
A comparison with the
prediction, $d_n \sim e \bar\theta m^\ast_q/m_n \sim 6\times 10^{-17}\,\bar\theta\; e$\,cm, where $m^\ast_q \equiv m_u m_d/(m_u+m_d)$ is the reduced quark mass and $m_n$ the neutron mass, leads to the conclusion that $|\bar\theta |< 10^{-9}$.
This is the strong $CP$ puzzle.
In Peccei-Quinn (PQ) extensions~\cite{Peccei:1977hh} of the Standard Model (SM), the symmetries of the latter
are extended by a global $U(1)_{\rm PQ}$ symmetry which is
spontaneously broken by the vacuum expectation value (VEV)
of a new complex singlet scalar
field, $\langle{|\sigma |^2}\rangle=v_{\rm PQ}^2/2$, which is assumed to be much larger than the Higgs VEV.
SM quarks or new exotic quarks are supposed to carry PQ charges such that
$U(1)_{\rm PQ}$ is also broken by the gluonic triangle anomaly,
$\partial_\mu J_{U(1)_{\rm PQ}}^\mu \supset
-\frac{\alpha_s}{8\pi}\,N_{\rm DW}\, G_{\mu\nu}^a \tilde G^{a\,\mu\nu}$,
where $N_{\rm DW}$ is a model-dependent integer.
Under these circumstances and at energies above the confinement scale $\Lambda_{\rm QCD}$
of QCD, but far below $v_{\rm PQ}$, the PQSM
reduces to the SM plus a pseudo Nambu-Goldstone boson~\cite{Weinberg:1977ma,Wilczek:1977pj} -- the axion $A$ --
whose field, $\theta (x) \equiv A(x)/f_A\in [-\pi,\pi]$, corresponding to the angular degree of freedom
of $\sigma$, acts as a space-time dependent $\bar\theta$
parameter,
${\mathcal L}_\theta \supset
\frac{f_A^2}{2} \,\partial_\mu \theta \partial^\mu \theta
- \frac{\alpha_s}{8\pi}\,\theta(x)\,G_{\mu\nu}^c {\tilde G}^{c,\mu\nu}$,
with $f_A \equiv v_{\rm PQ}/N_{\rm DW}$.
Therefore, the $\overline\theta$-angle can be eliminated by a shift $\theta (x) \to \theta (x) -\overline\theta$.
At energies below $\Lambda_{\rm QCD}$,
the effective potential of the shifted field, which for convenience we again denote by $\theta(x)$, will then coincide
with the vacuum energy of QCD as a function of $\overline\theta$, which, on general
grounds, has an absolute
minimum at $\theta =0$, implying that there is no strong $CP$ violation: $\langle \theta\rangle =0$. In particular,
$V(\theta ) = \frac{1}{2} \chi \theta^2 + {\mathcal O}(\theta^4) $,
where $\chi\equiv \int d^4x\, \langle q(x)\,q(0)\rangle$, with $q(x)\equiv \frac{\alpha_s}{8\pi}\,G_{\mu\nu}^c(x) {\tilde G}^{c,\mu\nu}(x)$, is the topological susceptibility.
A recent lattice determination found~\cite{Borsanyi:2016ksw}
$\chi = [75.6(1.8)(0.9) {\rm MeV}]^4$, which agrees well with the result from NLO chiral perturbation theory~\cite{diCortona:2015ldu},
$\chi = [75.5(5) {\rm MeV}]^4$, leading to the following prediction of the axion mass in terms of the
axion decay constant $f_A$,
\begin{equation}
\label{zeroTma}
m_A\equiv \frac{1}{f_A}\sqrt{\frac{d^2 V}{d\theta^2}}{|_{\theta = 0}}= \frac{\sqrt{\chi}}{f_A} =
57.0(7)\, \left(\frac{10^{11}\rm GeV}{f_A}\right)\mu \textrm{eV}.
\end{equation}
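As a quick arithmetic cross-check (ours), inserting the central lattice value of $\chi$ and a reference decay constant into this formula:
\begin{verbatim}
# m_A = sqrt(chi)/f_A with chi = (75.6 MeV)^4 and f_A = 10^11 GeV.
chi_quarter = 75.6e-3          # chi^{1/4} in GeV
f_A = 1e11                     # GeV
m_A_eV = chi_quarter**2 / f_A * 1e9      # GeV -> eV
print(m_A_eV * 1e6)            # in micro-eV: about 57
\end{verbatim}
which reproduces the quoted $57.0(7)\,\mu$eV within its uncertainty.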
\section{Axion cold dark matter in the case of post-inflationary PQ symmetry breaking}
In a certain range of its decay constant, the axion not only solves the strong $CP$ puzzle, but is also a cold dark matter
candidate~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. The extension of this range depends critically on the cosmological history. It is particularly constrained in the case on which we concentrate here: post-inflationary PQ symmetry restoration and subsequent breaking.\footnote{Remarkably, this case is strongly favored in scenarios of
saxion (modulus of $\sigma$) or saxion/Higgs inflation~\cite{Ballesteros:2016xej}.}
In the early universe, after the PQ phase transition, the axion field takes on
random initial values in domains of the size of the causal horizon.
Within each domain, the axion field evolves according to
\begin{equation}
\label{KG}
\ddot \theta + 3 H(T) \dot\theta + \frac{\chi(T)}{f_A^2} \sin\theta= 0 ,
\end{equation}
with temperature dependent Hubble expansion rate $H(T)\sim T^2/M_P$ and topological
susceptibility~\cite{Pisarski:1980md} $\chi (T)\propto T^{-(7 + 3/n_f)}$, for temperatures far above the QCD
quark-hadron crossover, $T_c^{\rm QCD}\simeq 150$\,MeV ($n_f$ is the number of active quark flavors).
At very high temperatures, $v_{\rm PQ} > T\gg T_c^{\rm QCD}$, the Hubble friction term is much larger than the potential term in (\ref{KG}), $3 H(T)\gg \sqrt{\chi(T)}/f_A$, and the axion field is frozen at its initial value. At temperatures around a GeV,
however, when $\sqrt{\chi(T)}/f_A \simeq 3 H(T)$, the field starts to
evolve towards the minimum of the potential and to oscillate around the $CP$ conserving ground state.
Such a spatially coherent oscillation has an equation of state
like cold dark matter, $w_A \equiv p_A/\rho_A \simeq 0$ (here $p_A$ and $\rho_A$ are the pressure and the
energy density of the axion field, respectively).
Averaging over the initial values of the axion field in the many domains filling our universe -- at temperatures around
a GeV the size of a domain is around a milliparsec -- one
obtains~\cite{Borsanyi:2016ksw,Ballesteros:2016xej}
$\Omega_A^{\rm (VR)}h^2 =
(3.8\pm 0.6 )\times 10^{-3} \,\left(f_A \over { 10^{10}\, {\rm GeV}}\right)^{1.165}$,
for the fractional contribution of axion cold dark matter to the energy density of the universe
from this so-called vacuum realignment (VR) mechanism~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}.
Here, the exponent, $1.165$, arises from the temperature dependence of
$\chi(T)$ at $T\sim $\,GeV, which has recently been determined quite precisely
from lattice QCD~\cite{Borsanyi:2016ksw}. Requiring that the axion dark matter abundance
should not exceed the observed one, this result implies a lower limit on the axion
mass~\cite{Borsanyi:2016ksw}:
\begin{equation}
m_A > 28(2)\,\mu{\rm eV} \,.
\end{equation}
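The origin of this bound can be reproduced with a two-line estimate (ours, assuming an observed cold dark matter abundance of $\Omega_c h^2 \simeq 0.12$): requiring $\Omega_A^{\rm (VR)}h^2$ not to exceed the observed value gives the largest allowed $f_A$, which Equation (\ref{zeroTma}) converts into the smallest allowed mass.
\begin{verbatim}
# Largest f_A for which vacuum-realignment axions do not exceed the
# observed dark matter abundance, and the corresponding lower mass bound.
omega_obs = 0.12
f_A_max = 1e10 * (omega_obs / 3.8e-3)**(1 / 1.165)   # GeV
m_A_min = 57.0 * 1e11 / f_A_max                      # micro-eV
print(f_A_max, m_A_min)       # about 1.9e11 GeV and 29 micro-eV
\end{verbatim}
This gives roughly $29\,\mu$eV, consistent with the quoted $28(2)\,\mu$eV once the uncertainties on the prefactor are included.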
However, so far we have neglected that the domain-like structure discussed above comes along with a network of one and two dimensional topological defects -- strings~\cite{Davis:1986xc} and domain walls~\cite{Sikivie:1982qv} -- which are formed at the boundaries of the domains. Their collapse will also produce axions.
Axion strings are formed at the same time as the domain-like structure appears, i.e., at the PQ phase transition.
In the string cores, of typical radius $1/m_\rho$, where $m_\rho \equiv \sqrt{2\lambda_\sigma} v_{\rm PQ}$ is the mass of the saxion (the particle excitation of the modulus of $\sigma$),
topology hinders the breaking of the PQ symmetry and a huge energy density
is stored. As the network evolves, the overall string length decreases by straightening and collapsing loops.
Moreover, some energy is radiated in the form of low-momentum axions. The energy density in the network of global strings is expected to reach a scaling behaviour,
$\rho_{\rm S} = \zeta \frac{\mu_{\rm S}}{t^2}$,
with string tension $\mu_{\rm S} \equiv \pi v_{\rm PQ}^2 \ln\left(\frac{m_\rho t }{\sqrt{\zeta}}\right)$,
where $\zeta$ is independent of time.
This scaling behavior implies that the number density of axions radiated from strings (S) can be
estimated as
\begin{equation}
\label{Nastring}
{n_{A}^{\rm (S)}(t)}
\simeq
\frac{\zeta}{\epsilon}\frac{v_{\rm PQ}^2}{t}\left[\ln\left(\frac{m_\rho t}{\sqrt{\zeta}}\right)-3\right] ,
\end{equation}
where the dimensionless parameter $\epsilon$ gives a measure of the average energy of the radiated axions in units
of the Hubble scale, $\epsilon \equiv \langle E_A\rangle/(2\pi /t)$.
A number of field theory simulations have indicated that the network of strings evolves indeed toward the scaling solution
with $\zeta = {\mathcal O}(1)$ and $\epsilon ={\mathcal O}(1)$.
The latter value implies that most of the axions produced from strings become non-relativistic during the radiation-dominated era and contribute to the cold dark matter abundance. Adopting the values~\cite{Kawasaki:2014sqa}
$\zeta = 1.0 \pm 0.5$ and $\epsilon = 4.02 \pm 0.70$, one finds from (\ref{Nastring}) for the contribution of strings to today's dark matter abundance~\cite{Ballesteros:2016xej}
$\Omega_A^{\rm (S)} h^2
\approx
7.8^{+6.3}_{-4.5} \times 10^{-3} \times N_{\rm DW}^2
\left( \frac{f_A}{10^{10}\ {\rm GeV}}\right)^{1.165}$,
where the upper and lower end correspond to the maximum and minimum values obtained by using the above
error bars on $\zeta$ and $\epsilon$. They do not take into account a possible large theoretical error due to the fact that
the field theory simulations can only be performed at values of $\ln ( m_\rho t )\sim$\,a few, much smaller than the realistic value, $\sim 50$, and thus require an extrapolation.
Domain walls appear at temperatures of the order of a GeV, when the axion field, in any of the causally connected domains at this epoch, relaxes into one of the $N_{\rm DW}$ distinct but degenerate minima of the effective
potential, $V(A,T) = \chi (T) \left[ 1- \cos ( N_{\rm DW} A/v_{\rm PQ} )\right]$, in the
interval $-\pi v_{\rm PQ}\leq A \leq +\pi v_{\rm PQ}$. Between the domains, there appear two dimensional
topological defects dubbed domain walls, whose thickness and stored energy density are controlled by $\chi (T)$. Importantly, each string is attached to $N_{\rm DW}$ domain walls,
due to the fact that the value of the phase of the PQ field $\sigma$ must vary from $-\pi$ to $\pi$ around the string core.
Therefore, hybrid networks of strings and domain walls, so-called string-wall systems, are formed at $T={\mathcal O}(1)$\,GeV.
Their evolution strongly depends on the model-dependent value of $N_{\rm DW}$.
For $N_{\rm DW} = 1$, each string is pulled by a single domain wall, which causes the system to disintegrate into smaller pieces of wall bounded by string~\cite{Vilenkin:1982ks}.
String-wall systems are short-lived in this case, and their collapse (C) contributes an amount~\cite{Kawasaki:2014sqa}
$\Omega_A^{\rm (C)} h^2
\approx
3.9^{+2.3}_{-2.1} \times 10^{-3} \times
\left( \frac{f_A}{10^{10}\ {\rm GeV}}\right)^{1.165}$
to dark matter,
resulting in a total abundance
\begin{equation}
\Omega_A h^2
\approx \left( \Omega_A^{\rm{(VR)}} + \Omega_A^{\rm{(S)}} + \Omega_A^{\rm{(C)}}\right) h^2
\approx 1.6^{+1.0}_{-0.7}\times 10^{-2}\times \left(\frac{f_A}{10^{10}\,\mathrm{GeV}}\right)^{1.165}.
\label{omega_a_tot_short}
\end{equation}
Therefore, in post-inflationary PQ symmetry breaking models with $N_{\rm DW}=1$, the axion may explain all of the cold dark matter in the universe
if its decay constant and mass are in the range
\begin{equation}
\label{mass_range}
f_A \approx (3.8-9.9)\times 10^{10}\,{\rm GeV}\hspace{3ex} \Leftrightarrow\hspace{3ex}
m_A \approx (58 - 150)\ \mu{\rm eV}\,.
\end{equation}
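As a rough consistency check (assuming $\Omega_{\rm DM}h^2\simeq 0.12$ and the relation $m_A \simeq 5.7\,\mu{\rm eV}\times(10^{12}\,{\rm GeV}/f_A)$, which is consistent with the conversions quoted above), requiring the central value of (\ref{omega_a_tot_short}) to account for the entire dark matter abundance gives
\[
\left(\frac{f_A}{10^{10}\,{\rm GeV}}\right)^{1.165} \simeq \frac{0.12}{1.6\times 10^{-2}} = 7.5
\quad\Longrightarrow\quad
f_A \simeq 5.6\times 10^{10}\,{\rm GeV}\,,\qquad m_A \simeq 100\,\mu{\rm eV}\,,
\]
which lies within the range (\ref{mass_range}); the boundaries of that range follow in the same way from the quoted uncertainties in (\ref{omega_a_tot_short}).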
This prediction, however, has recently been challenged by the results from a new field theory simulation
technique
designed to work directly at high string tension with $\ln (m_\rho t)\sim 50$ and
to treat vacuum realignment, string, and string-wall contributions in a unified way~\cite{Klaer:2017ond}.
The reported dark matter axion mass,
\begin{equation}
m_A=(26.2 \pm 3.4)\,\mu{\rm eV}\,,
\end{equation}
where the error now only includes the uncertainty from $\chi(T)$,
is significantly lower than (\ref{mass_range}). It indicates that axions from strings and walls are negligible,
despite the fact that the string networks appear to have a higher energy density ($\zeta \sim 4$) than those observed in conventional field theoretic simulations ($\zeta\sim 1$). This implies that the produced axions have a larger energy, $\epsilon \sim 40$,
and that dynamics at smaller scales -- outside the range of applicability of the new simulation method~\cite{Klaer:2017ond} -- can be relevant for the determination of the axion DM abundance.
Further studies of the dynamics of string-wall systems, including precise modelling of the physics at smaller distance scales, are required.
Fortunately, there are new axion dark matter direct detection experiments aiming to probe
the mass region of interest for $N_{\rm DW}=1$ models with post-inflationary PQ symmetry breaking, notably CULTASK~\cite{Chung:2016ysi}, HAYSTAC~\cite{Zhong:2018rsr}, and
MADMAX~\cite{TheMADMAXWorkingGroup:2016hpc}.
For $N_{\rm DW} > 1$, the string-wall systems are stable, since the strings are pulled in
$N_{\rm DW}$ different directions. The existence of such stable domain walls is firmly excluded by standard cosmology~\cite{Zeldovich:1974uw}. Stability can be avoided if there exist further interactions which explicitly break the
PQ symmetry, e.g.
${\mathcal L} \supset
g M_P^4 \left(\frac{\sigma}{M_P} \right)^N
+h.c.$,
where $g$ is a complex dimensionless coupling, $M_{P}$
is the reduced Planck mass, and $N$ is an integer ($>4$). The appearance of such terms is motivated by the fact that global symmetries are not protected from effects of quantum gravity.
They give rise to an additional contribution in the low energy effective potential of the axion field, which lifts the degeneracy of the minima of the QCD induced potential by an amount~\cite{Ringwald:2015dsf}
$\Delta V \simeq -2 |g| M_P^4 \left(\frac{v_{\rm PQ}}{\sqrt{2}M_P} \right)^N \left[
\cos \left( \frac{2\pi N}{N_{\rm DW}} + \Delta_D \right) - \cos \Delta_D \right]
$,
where $\Delta_D = \arg(g) - N \overline{\theta}$,
and acts like a volume pressure on domain walls.
If $\Delta V$ is small, domain walls are long-lived and emit a large number of axions, potentially overclosing the universe.
On the other hand, if $\Delta V$ is large, it shifts the location of the minimum of the axion effective potential and leads to large $CP$ violation, spoiling the axionic solution of the strong $CP$ problem.
A detailed investigation of the parameter space exploiting the results of field theory simulations~\cite{Kawasaki:2014sqa}
showed~\cite{Ringwald:2015dsf} that there exists a valid region in parameter space if
$N = 9$ or $10$.\footnote{The absence of PQ symmetry breaking operators with $4<N<9$ can be naturally explained if the PQ symmetry arises accidentally as a low energy remnant
from a more fundamental
discrete symmetry~\cite{Ringwald:2015dsf,Ernst:2018bib}.} In the case of $N_{\rm DW}=6$ and $N=9\,(10)$, and allowing a mild tuning of $|g|$,
the axion can explain the observed dark matter abundance
for
\begin{equation}
4.4\times 10^7\,(1.3\times 10^9)\,{\rm GeV} < f_A < 1\times 10^{10}\,{\rm GeV}\ \Leftrightarrow \
0.56\,{\rm meV} < m_A < 130\,(4.5)\,{\rm meV}\, .
\end{equation}
Intriguingly, a DFSZ axion ($N_{\rm DW}=6$) in such a mass range can explain the accumulating hints of excessive energy losses of stars in various stages of their evolution~\cite{Giannotti:2017hny}.
In this range, axion dark matter direct detection may be difficult, but not impossible~\cite{Horns:2012jf,Baryakhtar:2018doz}.
Fortunately, this range will be probed by the fifth-force experiment ARIADNE~\cite{Arvanitaki:2014dfa} and the helioscope
IAXO~\cite{Armengaud:2014gea}.
\section*{References}
|
\section{Introduction \label{sec:intro}}
Subluminous B stars (sdBs) show similar colours and spectral characteristics to main sequence stars of
spectral type B, but are less luminous. Compared to main sequence B stars, the hydrogen Balmer lines in the spectra
of sdBs are stronger while the helium lines are much weaker. The strong line broadening and the early confluence of the
Balmer series is caused by the high surface gravities ($\log\,g\simeq5.0-6.0$) of these compact stars
($R_{\rm sdB}\simeq0.1-0.3\,R_{\rm \odot}$). Subluminous B stars are considered to be core helium-burning stars with
very thin hydrogen envelopes and masses of about half a solar mass (Heber \cite{heber86}) located at the extreme end of the horizontal branch (EHB).
\subsection{Hot subdwarf formation \label{sec:formation}}
The origin of EHB stars is still unknown (see Heber
\cite{heber09} for a review). The key question is how all but a tiny fraction of the red-giant progenitor's hydrogen envelope was removed at about the same time as the helium core attained the mass ($\simeq0.5\,M_{\rm \odot}$) required to ignite the helium flash. The reason for this high mass loss at the tip of the red giant branch (RGB) is unclear. Several single-star scenarios are under discussion (D'Cruz et al. \cite{dcruz96}; Sweigart \cite{sweigart97}; De Marchi \& Paresce \cite{demarchi96}; Marietta et al. \cite{marietta00}), which require either a fine-tuning of parameters or extreme environmental conditions that are unlikely to be met for the bulk of the observed subdwarfs in the field.
According to Mengel et al. (\cite{mengel76}), the required strong mass loss can occur in a close-binary system. The progenitor of the sdB star has to fill its Roche lobe near the tip of the red-giant branch (RGB) to lose a large part of its hydrogen envelope. The merger of close binary white dwarfs was investigated by Webbink (\cite{webbink84}) and Iben \& Tutukov (\cite{iben84}), who showed that an EHB star can form when two helium core white dwarfs (WDs) merge and the product is sufficiently massive to ignite helium. Politano et al. (\cite{politano08}) proposed that the merger of a red giant and a low-mass main-sequence star during a common envelope (CE) phase may lead to the formation of a rapidly rotating single hot subdwarf star.
Maxted et al. (\cite{maxted01}) determined a very high fraction of radial velocity variable sdB stars, indicating that about two thirds of the sdB stars in the field are in close binaries with periods of less than 30 days (see also Morales-Rueda et al. \cite{morales03}; Napiwotzki et al. \cite{napiwotzki04a}; Copperwheat et al. \cite{copperwheat11}). Han et al. (\cite{han02,han03}) used binary population synthesis models to study the stable Roche lobe overflow (RLOF) channel, the common envelope ejection channel, where the mass transfer to the companion is dynamically unstable, and the He-WD merger channel.
The companions are mostly main sequence stars or white dwarfs. If the white dwarf companion is sufficiently massive, the merger of the binary system might exceed the Chandrasekhar mass and explode as a type Ia supernova. Indeed, Maxted et al. (\cite{maxted00}) found the sdB+WD binary KPD\,1930$+$2752 to be a system that might qualify as a supernova Ia progenitor (see also Geier et al. \cite{geier07}). In Paper~I of this series (Geier et al. \cite{geier10b}) more candidate systems with massive compact companions, either massive white dwarfs or even neutron stars and black holes, have been found. Furthermore, Geier et al. (\cite{geier11c}) reported the discovery of an eclipsing sdB binary with a brown dwarf companion.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{tefflogg_vsini.eps}}
\caption{$T_{\rm eff}-\log{g}$ diagram for the stars of our sample that are not RV variable.
The helium main sequence (HeMS) and the EHB band (limited by the zero-age
EHB, ZAEHB, and the terminal-age EHB, TAEHB) are superimposed with EHB evolutionary tracks for solar metallicity taken from
Dorman et al. (\cite{dorman93}) labelled with their masses. Open circles mark objects where only upper limits could be derived for $v_{\rm rot}\sin{i}$, filled circles objects with significant $v_{\rm rot}\sin{i}$. The size of the symbols scales with the value of $v_{\rm rot}\sin{i}$.}
\label{fig:tefflogg}
\end{center}
\end{figure}
\subsection{Rotation on the horizontal branch \label{sec:rotation}}
The rotational properties of horizontal branch (HB) stars both in globular clusters and in the field all the way from the red to the blue end have been studied extensively in the last decades (Peterson \cite{peterson83b}, \cite{peterson85}; Peterson et al. \cite{peterson83a}, \cite{peterson95}; Behr et al. \cite{behr00a}, \cite{behr00b}; Kinman et al. \cite{kinman00}; Recio-Blanco et al. \cite{recio02}, \cite{recio04}; Behr \cite{behr03a}, \cite{behr03b}; Carney et al. \cite{carney03}, \cite{carney08}). Most of these investigations were motivated by the puzzling horizontal branch morphologies in some globular clusters and the search for second or third parameters responsible for this phenomenon. The most interesting result of these studies is the discovery of a significant change in the rotational velocities of blue horizontal branch (BHB) stars when their effective temperatures exceed $\simeq11\,500\,{\rm K}$. HB stars cooler than this threshold value show ${v_{\rm rot}\sin\,i}$ values up to $40\,{\rm km\,s^{-1}}$, while the hotter stars rotate with velocities lower than $\simeq10\,{\rm km\,s^{-1}}$.
The transition in rotational velocity is accompanied by a jump towards brighter magnitudes in the colour-magnitude diagram (Grundahl et al. \cite{grundahl99}) and a change in the atmospheric abundance pattern. Stars cooler than $\simeq11\,500\,{\rm K}$ show the typi\-cal abundances of their parent population (e.g. For \& Sneden \cite{for10}), while stars hotter than that are in general depleted in helium and strongly enriched in iron and other heavy elements such as titanium or chromium. Lighter elements such as magnesium and silicon on the other hand have normal abundances (Behr et al. \cite{behr03a,behr03b}; Fabbian et al. \cite{fabbian05}; Pace et al. \cite{pace06}). Diffusion processes in the stellar atmosphere are most likely responsible for this effect. Michaud et al. (\cite{michaud83}) predicted such abundance patterns before the anomalies were observed (see also Michaud et al. \cite{michaud08}). Caloi (\cite{caloi99}) explained the sharp transition between the two abundance patterns as the disappearance of subsurface convection layers at a critical temperature. Sweigart (\cite{sweigart02}) indeed found that thin convective layers below the surface driven by hydrogen ionization should exist and shift closer to the surface when the effective temperature increases. At about $12\,000\,{\rm K}$ the convection zone reaches the surface and the outer layer of the star becomes fully radiative. Since convection is very efficient in mixing the envelope, diffusion processes do not operate in HB star atmospheres of less than $12\,000\,{\rm K}$.
Slow rotation is considered as a prerequisite for diffusion. Michaud (\cite{michaud83}) was the first to show that meridional circulation stops the diffusion process as soon as the rotational velocity reaches a critical value and could explain the chemical peculiarity of HgMn stars in this way. Quievy et al. (\cite{quievy09}) performed similar calculations for BHB stars and showed that the critical rotational velocity is somewhere near $\simeq20\,{\rm km\,s^{-1}}$ at the transition temperature of $11\,500\,{\rm K}$. This means that the atmospheric abundances of stars with lower ${v_{\rm rot}\sin\,i}$ should be affected by diffusion processes.
What causes the slow rotation that allows diffusion to happen, is still unclear. Sills \& Pinsonneault (\cite{sills00}) used a standard stellar evolution code and modelled the distribution of rotational velocities on the BHB. In order to reproduce the two populations of fast and slow rotators they assumed two distinct main sequence progenitor populations with different rotational veloci\-ties. In their picture the slowly rotating BHBs originate from slowly rotating main sequence stars.
Another possible explanation is the spin-down of the surface layers by diffusion itself. Sweigart (\cite{sweigart02}) argued that the radiative levitation of iron triggers a weak stellar wind that carries away angular momentum. Vink \& Cassisi (\cite{vink02}) showed that such winds are radiatively driven.
Brown (\cite{brown07}) used a stellar evolution code including rotation and modelled the distribution of rotational velocities on the BHB. This code allows one to follow the evolution of the progenitor star through the He-flash. Brown (\cite{brown07}) argues that no signifi\-cant angular momentum is exchanged between the stellar core and stellar envelope during the flash. The surface rotation of their models highly depends on the rotation of the surface convection zone, which contains most of the outer envelope's angular momentum. Hot BHB stars without surface convection zone rotate slower than the cooler ones with convection zone. This approach allows one to reproduce the observed ${v_{\rm rot}\sin\,i}$-distribution of BHB stars without assuming bimodal stellar po\-pulations (Brown et al. \cite{brown08}).
While the rotational properties of horizontal branch stars both in
globular clusters and in the field have been thoroughly examined, the investigation of EHB stars has mostly been restricted to close binary systems, where tidal interaction plays a major role (Geier et al. \cite{geier10b}). Very few apparently single EHB stars have been studied so far, all of which are slow rotators ($<10\,{\rm km\,s^{-1}}$, e.g. Heber et al. \cite{heber00}; Edelmann et al. \cite{edelmann01}).
In this paper we determine the projected rotational velocities of more than a hundred sdB stars by measuring the broadening of metal lines. In Paper~I (Geier et al. \cite{geier10b}) the rotational properties of sdBs in close binary systems were derived and used to clarify the nature of their unseen companions. Here we focus on the rotational properties of apparently single sdBs and wide binary systems, for which tidal interactions become negligible.
In Sect.~\ref{sec:obs} we give an overview of the observations of high-resolution spectra and the atmospheric parameters of our sample. The determination of the rotational properties of 105 sdB stars is described in Sect.~\ref{sec:rotlow}, the results are interpreted in Sect.~\ref{sec:distrib} and compared to the corresponding results for BHB stars in Sect.~\ref{sec:bhb}. The implications for the sdB formation scenarios and the further evolution to the white dwarf cooling tracks are discussed in Sect.~\ref{sec:implications} and Sect.~\ref{sec:wd}, respectively. Finally, a summary is given in Sect.~\ref{sec:summary}.
\section{Observations and atmospheric parameters \label{sec:obs}}
ESO-VLT/UVES spectra were obtained in the course of the ESO Supernovae Ia
Progenitor Survey (SPY, Napiwotzki et al. \cite{napiwotzki01, napiwotzki03})
at spectral resolution $R\simeq20\,000-40\,000$ covering
$3200-6650\,{\rm \AA}$ with two small gaps at $4580\,{\rm \AA}$ and
$5640\,{\rm \AA}$. Each of the 50 stars was observed at least twice (Lisker et al. \cite{lisker05}).
Another sample of 46 known bright subdwarfs was observed with the
FEROS spectrograph ($R=48\,000$, $3750-9200\,{\rm \AA}$) mounted at the ESO/MPG
2.2m telescope (Geier et al. \cite{geier12}).
Six stars were observed with the FOCES spectrograph
($R=30\,000$, $3800-7000\,{\rm \AA}$) mounted at the CAHA 2.2m telescope (Geier et al. \cite{geier12}).
Two stars were observed with the HIRES instrument ($R=45\,000$,
$3600-5120\,{\rm \AA}$) mounted at the Keck telescope (Heber et al. \cite{heber00}).
One star was observed with the HRS fiber spectrograph at the Hobby Eberly Telescope ($R=30\,000$, $4260-6290\,{\rm \AA}$, Geier et al. \cite{geier10b}).
Because a wide slit was used in the SPY survey and the seeing
disk did not always fill the slit, the instrumental profile of some of the UVES spectra was seeing-dependent.
This has to be accounted for to estimate the instrumental resolution (see Paper~I).
The resolution of the spectra taken with the fiber spectrographs FEROS and FOCES was assumed to be constant.
The single spectra of all programme stars were radial-velocity (RV) corrected and co-added in
order to achieve higher signal-to-noise.
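The co-addition itself is a standard operation; the following minimal sketch (with hypothetical variable names, assuming that the per-exposure radial velocities have already been measured; it is not the actual pipeline used here) illustrates the shift to the rest frame and the averaging on a common wavelength grid:
\begin{verbatim}
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def coadd(spectra, rvs, wave_grid):
    # spectra : list of (wave, flux) arrays, one per exposure
    # rvs     : measured radial velocities [km/s], one per exposure
    shifted = []
    for (wave, flux), rv in zip(spectra, rvs):
        rest_wave = wave / (1.0 + rv / C_KMS)        # remove the Doppler shift
        shifted.append(np.interp(wave_grid, rest_wave, flux))
    return np.mean(shifted, axis=0)                  # unweighted co-added spectrum
\end{verbatim}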
Atmospheric parameters of the stars observed with UVES have been determined by Lisker et al. (\cite{lisker05}). HD\,205805 and Feige\,49 have been analysed by Przybilla et al. (\cite{przybilla06}), the two sdB pulsators KPD\,2109+4401 and PG\,1219+534 by Heber et al. (\cite{heber00}), and the sdB binaries PG\,1725+252 and TON\,S\,135 by Maxted et al. (\cite{maxted01}) and Heber (\cite{heber86}), respectively. The rest of the sample was analysed in Geier et al. (\cite{geier12}) and a more detailed publication of these results is in preparation. We adopted the atmospheric parameters given in Saffer et al. (\cite{saffer94}) for $[$CW83$]$\,1758$+$36.
The whole sample under study is listed in Tables~\ref{tab:vrot} and \ref{tab:vrotrv} and the effective temperatures are plotted versus the surface gravities in Fig.~\ref{fig:tefflogg}. A comparison of the positions of our sample stars with evolutionary tracks shows that all stars are concentrated on or above the EHB, fully consistent with theory. We point out that the inaccuracies in the atmospheric parameters do not significantly affect the derived projected rotational velocities.
\section{Projected rotational velocities from metal lines
\label{sec:rotlow}}
To derive $v_{\rm rot}\,\sin{i}$, we compared the observed spectra
with rotationally broadened, synthetic line profiles using a semi-automatic
analysis pipeline. The profiles were computed for the appropriate atmospheric parameters using the LINFOR program (developed by Holweger, Steffen and Steenbock at Kiel University, modified by Lemke \cite{lemke97}).
For a standard set of up to 187 unblended metal lines from 24 different ions and with
wavelengths ranging from $3700$ to $6000\,{\rm \AA}$, a model grid with
appropriate atmospheric parameters and different elemental abundances was
automatically generated with LINFOR. The actual number of lines used as input
for an individual star depends on the wavelength coverage. Owing to the
insufficient quality of the spectra and the pollution with telluric features
in the regions blueward of $3700\,{\rm \AA}$ and redward of
$6000\,{\rm \AA}$, we excluded these regions from our analysis. A simultaneous fit of
elemental abundance, projected rotational velocity and radial velocity was
then performed separately for each identified line using the FITSB2
routine (Napiwotzki et al. \cite{napiwotzki04b}). A detailed investigation of statistical and systematic
uncertainties of the techniques applied is presented in Paper~I. Depending on the quality of the data and
the number of metal lines used, an accuracy of about $1.0\,{\rm km\,s^{-1}}$ can be achieved.
For the best spectra with highest resolution the detection limit is about $5.0\,{\rm km\,s^{-1}}$.
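The principle of such a fit can be sketched in a few lines of code (a schematic illustration only, not the LINFOR/FITSB2 implementation; the Gaussian intrinsic profile, the Mg\,{\sc ii} $\lambda4481$ wavelength and the limb-darkening coefficient of 0.6 are assumptions made for the example):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light [km/s]

def rot_kernel(dlam, lam0, vsini, eps=0.6):
    # rotational broadening kernel (e.g. Gray 2005) with linear limb darkening;
    # the wavelength grid must resolve dlam_max = lam0*vsini/c
    dlam_max = lam0 * vsini / C_KMS
    x = dlam / dlam_max
    kern = np.zeros_like(x)
    m = np.abs(x) < 1.0
    kern[m] = 2.0*(1.0-eps)*np.sqrt(1.0-x[m]**2) + 0.5*np.pi*eps*(1.0-x[m]**2)
    return kern / kern.sum()                      # normalise by the discrete sum

def model_line(wave, depth, vsini, rv, lam0=4481.2, sigma=0.05):
    # Gaussian intrinsic line, shifted by rv and convolved with the rotation kernel;
    # the continuum is set to 1 (edges of the array are unreliable)
    lam = lam0 * (1.0 + rv / C_KMS)
    intrinsic = 1.0 - depth * np.exp(-0.5*((wave - lam)/sigma)**2)
    kernel = rot_kernel(wave - wave.mean(), lam0, vsini)
    return np.convolve(intrinsic, kernel, mode="same")

# simultaneous fit of line depth (abundance proxy), vsini and rv for one line:
# popt, pcov = curve_fit(model_line, wave_obs, flux_obs, p0=(0.2, 8.0, 0.0))
\end{verbatim}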
Projected rotational velocities of 105 sdBs have been measured (see Tables~\ref{tab:vrot}, \ref{tab:vrotrv}). Ninety-eight of these sdBs do not show any RV variability, while the remaining seven are radial velocity variable systems with orbital periods of a few days (see Table~\ref{tab:vrotrv}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_all.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for the full sample. Objects with limits below the detection limit have been stacked into the first dotted bin.}
\label{fig:distriball}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_single.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 71 single stars from our sample using the same binning as in Fig.~\ref{fig:distriball}. The solid grey line marks the distribution of ${v_{\rm rot}\sin\,i}$ under the assumption of randomly oriented rotation axes and a constant ${v_{\rm rot}=7.65\,{\rm km\,s^{-1}}}$, which matches the observed distribution very well.}
\label{fig:distribsingle}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_comp.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 16 sdBs with companions visible in the spectra using the same binning as in Fig.~\ref{fig:distriball}.}
\label{fig:distribcomp}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_RV.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 8 radial velocity variable sdBs with orbital periods exceeding $\simeq1.2\,{\rm d}$ using the same binning as in Fig.~\ref{fig:distriball}.}
\label{fig:distribrv}
\end{center}
\end{figure}
For eleven stars of our sample upper limits for the projected rotational velocities have already been published (Heber et al. \cite{heber00}; Edelmann et al. \cite{edelmann01}) based on the same spectra as used here (see Table~\ref{tab:vrotlit}). Only for PHL\,932 and PG\,0909$+$276 do our measured $v_{\rm rot}\sin{i}$ values deviate significantly from the results of Edelmann et al. (\cite{edelmann01}). This is most likely because they used fewer metal lines in their study.
Przybilla et al. (\cite{przybilla06}) performed an NLTE analysis of Feige\,49 and HD\,205805 using the same FEROS spectra as we do here and derived a ${v_{\rm rot}\sin\,i}$ below the detection limit. Again our measurements are consistent with their results, because they are very close to the detection limit we derived for FEROS spectra of sdBs ($\simeq5\,{\rm km\,s^{-1}}$, see Paper~I).
\section{Projected rotational velocity distributions \label{sec:distrib}}
The projected rotational velocities of our full sample of 98 stars without radial velocity variations are all low ($<10\,{\rm km\,s^{-1}}$, see Table~\ref{tab:vrot}). Taking into account the uncertainties, one can see that there is no obvious trend with the atmospheric parameters (see Fig.~\ref{fig:tefflogg}).
Fig.~\ref{fig:distriball} shows the distribution of ${v_{\rm rot}\sin\,i}$ binned to the average measurement error ($1.5\,{\rm km\,s^{-1}}$). Eleven stars that had only fairly weak upper limits of $10\,{\rm km\,s^{-1}}$ were sorted out.
The distribution is very uniform and shows a prominent peak at $6-8\,{\rm km\,s^{-1}}$. Because we can only determine the projected rotation, the true rotational velocities of most stars in the sample should be about $7-8\,{\rm km\,s^{-1}}$.
\subsection{Single-lined sdBs}
Our sample contains 71 single-lined sdBs, for which the ${v_{\rm rot}\sin\,i}$ could be constrained. Ten stars, for which we were only able to derive upper limits of $10\,{\rm km\,s^{-1}}$, were sorted out. Fig.~\ref{fig:distribsingle} shows the ${v_{\rm rot}\sin\,i}$ distribution of this subsample. Most remarkably, the distribution is almost identical to that of the full sample. Adopting a random distribution of inclination angles and a constant ${v_{\rm rot}}$ of $\simeq8\,{\rm km\,s^{-1}}$, the observed ${v_{\rm rot}\sin\,i}$-distribution can indeed be well reproduced (see Fig.~\ref{fig:distribsingle}). We therefore conclude that most single sdBs in our sample have very similar rotation velocities.
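The grey line in Fig.~\ref{fig:distribsingle} can be generated with a short Monte Carlo simulation; the following minimal sketch (assuming isotropic rotation axes, i.e. $\cos{i}$ distributed uniformly, a single equatorial velocity of $7.65\,{\rm km\,s^{-1}}$, and a Gaussian measurement error of $1.5\,{\rm km\,s^{-1}}$) reproduces the expected distribution:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
v_rot, n_draws = 7.65, 100000             # constant equatorial velocity [km/s]

cos_i = rng.uniform(0.0, 1.0, n_draws)    # isotropic rotation axes
vsini = v_rot * np.sqrt(1.0 - cos_i**2)
vsini_obs = vsini + rng.normal(0.0, 1.5, n_draws)   # add measurement error

# bin width equal to the average measurement error (1.5 km/s)
hist, edges = np.histogram(vsini_obs, bins=np.arange(0.0, 13.5, 1.5))
print(np.round(hist / hist.sum() * 71, 1))           # expected counts for 71 stars
\end{verbatim}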
\subsection{Double-lined sdB binaries}
Our sample contains 18 sdBs with visible spectral signatures of cooler main sequence (MS) companions (e.g. Mg\,{\sc i}, Lisker et al. \cite{lisker05}). Again, two stars with upper limits of $10\,{\rm km\,s^{-1}}$ were excluded.
The orbital periods of these systems are long. Green et al. (\cite{green06}) have argued that such systems should have periods of many months or years. Recently, Deca et al. (\cite{deca12}) were able to determine the orbital period $P\simeq760\,{\rm d}$ of the sdB+K binary PG\,1018$-$047. Similar periods were reported by \O stensen \& van Winckel (\cite{oestensen12}) for eight such binaries. The separations of the components are so wide that tidal interaction is negligible. Main-sequence companions therefore do not affect the rotational properties of the sdB stars in this type of binary.
The distribution for sdBs with composite spectra is displayed in Fig.~\ref{fig:distribcomp}. Taking into account the much smaller sample size, the result is again similar. We therefore conclude that the rotational properties of sdBs in wide binaries with MS companions are the same as those of single sdBs, although they have probably formed in a very different way (see Sect.~\ref{sec:implications}).
\subsection{Pulsating sdBs}
Two types of sdB pulsators are known. The slow pulsations of the V\,1093\,Her stars (sdBV$_{\rm s}$, Green et al. \cite{green03}) are not expected to influence the line broadening significantly (see Geier et al. \cite{geier10b}). For the short-period pulsators (V\,361\,Hya type, sdBV$_{\rm r}$, Charpinet et al. \cite{charpinet97}; Kilkenny et al. \cite{kilkenny97}) unresolved pulsations can severely affect or even dominate the broadening of the metal lines and therefore fake high ${v_{\rm rot}\sin\,i}$. Telting et al. (\cite{telting08}) showed that this happens in the case of the hybrid pulsator Balloon\,090100001 using the same method as in this work. Unresolved pulsations are also most likely responsible for the high line broadening ($39\,{\rm km\,s^{-1}}$) measured for the strong pulsator PG\,1605+072 (Heber et al. \cite{heber99}, \cite{heber00}).
Our sample contains three known long-period pulsators (PHL\,44, Kilkenny et al. \cite{kilkenny07}; PHL\,457, Blanchette et al. \cite{blanchette08}; LB\,1516, Koen et al. \cite{koen10}) and two short-period ones (KPD\,2109$+$4401, Bill\`{e}res et al. \cite{billeres98}; PG\,1219$+$534, O'Donoghue et al. \cite{odonoghue99}). The ${v_{\rm rot}\sin\,i}$ of KPD\,2109$+$4401 is indeed among the highest of all sdBs in our sample ($10.5\pm1.6\,{\rm km\,s^{-1}}$), but it is unclear whether this might be partly due to unresolved pulsations. Jeffery \& Pollacco (\cite{jeffery00}) measured RV variations of $2\,{\rm km\,s^{-1}}$ for KPD\,2109$+$4401. Taking this into account, the sdB's rotational velocity may be slightly lower than measured. The ${v_{\rm rot}\sin\,i}$ values of the other pulsators are not peculiar.
For most stars in our sample it is not clear whether they are pulsators or not, because no light curves of sufficient quality are available. Because only about $5\%$ of all sdBs show pulsations detectable from the ground, one may conclude that the contamination by pulsators should be quite low. Thanks to the extensive photometric surveys for sdB pulsators conducted by Bill\`{e}res et al. (\cite{billeres02}), Randall et al. (\cite{randall06}) and \O stensen et al. (\cite{oestensen10}), we know that 27 stars from our sample do not show short-period pulsations.
Restricting ourselves to these objects and again excluding those with visible companions, we constructed a "pure" sample of 16 single sdBs, for which the rotational broadening is proven to be disturbed neither by the presence of a companion nor by pulsations. The associated ${v_{\rm rot}\sin\,i}$ distribution does not differ from the other distributions (see Figs.~\ref{fig:distriball}-\ref{fig:distribcomp}). We therefore conclude that unresolved pulsations do not significantly affect our results.
\subsection{Radial velocity variable sdBs}
In Paper~I we showed that the ${v_{\rm rot}\sin\,i}$ distribution of sdBs in close binary systems is strongly affected by the tidal interaction with their companions, but that this influence becomes negligible if the orbital periods of the binaries become longer than $\simeq1.2\,{\rm d}$. It is instructive to have a look at the ${v_{\rm rot}\sin\,i}$-distribution of these long-period radial velocity variable systems. From Paper~I we selected all seven binaries with periods longer than $1.2\,{\rm d}$, for which tidal synchronisation is not established. We added the system LB\,1516, a binary with yet unknown orbital parameters, but for which Edelmann et al. (\cite{edelmann05}) provided a lower limit for the period of the order of days\footnote{TON\,S\,135 was not included because the orbital period of $\simeq4\,{\rm d}$ given in Edelmann et al. (\cite{edelmann05}) is not very significant and shorter periods cannot be excluded yet.}.
Fig.~\ref{fig:distribrv} shows the associated distribution. Given the small sample size and although two stars have somewhat higher ${v_{\rm rot}\sin\,i}=10-12\,{\rm km\,s^{-1}}$, the distribution is again very similar to the distributions shown before (see Figs.~\ref{fig:distriball}-\ref{fig:distribcomp}). Subdwarf B stars in close binaries obviously rotate in the same way as single stars or sdBs with visible companions if the orbital period is sufficiently long.
\section{Comparison with BHB stars \label{sec:bhb}}
Projected rotational velocities of BHB stars have been determined for many globular cluster and field stars (Peterson et al. \cite{peterson95}; Behr \cite{behr03a, behr03b}; Kinman et al. \cite{kinman00}; Recio-Blanco et al. \cite{recio04}). The results are plotted against the effective temperature in Fig.~\ref{fig:vsiniteff}. The characteristic jump in ${v_{\rm rot}\sin\,i}$ at a temperature of about $\simeq11\,500\,{\rm K}$ can be clearly seen. The sdB sequence basically extends the BHB trend to higher temperatures. The ${v_{\rm rot}\sin\,i}$ values remain at the same level as observed in hot BHB stars.
Comparing the ${v_{\rm rot}\sin\,i}$ of BHB and EHB stars, one has to take into account that the radii of both types of horizontal branch stars are quite different, which translates directly into very different angular momenta. While sdBs have surface gravities $\log{g}$ between $5.0$ and $6.0$, the surface gravities of BHB stars range from $\log{g}=3.0$ to $4.0$. The BHB stars with the same rotational velocities as EHB stars have higher angular momenta. Assuming rigid rotation, the same inclination angle of the rotation axis, and the same mass of $\simeq0.5\,M_{\rm \odot}$ for BHB and EHB stars, one can calculate the quantity ${v_{\rm rot}\sin\,i}\times g^{-1/2}$, which is directly proportional to the angular momentum. The surface gravities of the sdBs were taken from the literature (see Sect.~\ref{sec:obs}), those for the BHB stars from Behr (\cite{behr03a, behr03b}) and Kinman et al. (\cite{kinman00}). Since Peterson et al. (\cite{peterson95}) and Recio-Blanco et al. (\cite{recio04}) did not determine surface gravities for their BHB sample, we adopted a $\log{g}$ of $3.0$ for stars with temperatures below $\simeq10\,000\,{\rm K}$ and $3.5$ for the hotter ones as suggested by the results of Behr (\cite{behr03a, behr03b}) and Kinman et al. (\cite{kinman00}).
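The proportionality can be seen from a brief sketch (assuming rigid rotation of a sphere and absorbing the constant moment-of-inertia factor):
\[
J \;\propto\; M R^{2}\,\omega \;=\; M R\, v_{\rm rot}\,,\qquad
g = \frac{GM}{R^{2}} \;\Rightarrow\; R \propto \left(\frac{M}{g}\right)^{1/2}
\;\Rightarrow\; J \;\propto\; M^{3/2}\, v_{\rm rot}\, g^{-1/2}\,,
\]
so that for a fixed mass and the same inclination the measured quantity ${v_{\rm rot}\sin\,i}\times g^{-1/2}$ traces the angular momentum.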
In Fig.~\ref{fig:lteff} ${v_{\rm rot}\sin\,i}\times g^{-1/2}$ is plotted against $T_{\rm eff}$. The transition between BHB and EHB stars is smooth. Since the progenitors of the EHB stars lost more envelope material on the RGB, the EHB stars are expected to have lower angular momenta than the BHB stars. This is consistent with what can be seen in Fig.~\ref{fig:lteff}.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{vsiniteff.eps}}
\caption{Projected rotational velocity plotted against effective temperature. The grey squares mark BHB and some sdB stars taken from Peterson et al. (\cite{peterson95}), Behr (\cite{behr03a, behr03b}), Kinman et al. (\cite{kinman00}), and Recio-Blanco et al. (\cite{recio04}). Upper limits are marked with grey triangles. The black diamonds mark the sdBs from our sample. The vertical line marks the jump temperature of $11\,500\,{\rm K}$.}
\label{fig:vsiniteff}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{lteff.eps}}
\caption{${v_{\rm rot}\sin\,i}\times g^{-1/2}$ plotted against effective temperature. The grey squares mark BHB and some sdB stars taken from Peterson et al. (\cite{peterson95}), Behr (\cite{behr03a, behr03b}), Kinman et al. (\cite{kinman00}), and Recio-Blanco et al. (\cite{recio04}). Upper limits are marked with grey triangles. The black diamonds mark the sdBs from our sample. The vertical line marks the jump temperature of $11\,500\,{\rm K}$. Typical uncertainties for the sdBs are given in the upper right corner.}
\label{fig:lteff}
\end{center}
\end{figure}
\section{Implications for hot subdwarf formation \label{sec:implications}}
The uniform distribution of low projected rotational velocities in single and wide binary sdBs has consequences for the open question of hot subdwarf formation. As shown in this study, sdBs appear to rotate at low but spectroscopically detectable velocities of $8-10\,{\rm km\,s^{-1}}$. These results are remarkably similar to those derived for their cooler relatives, the BHB stars. Hot subdwarfs are likely formed through binary interaction or merging, which is also accompanied by a transfer of angular momentum. The rotational properties of sdB stars therefore allow one to constrain possible formation scenarios.
\subsection{Uniform rotation of EHB stars and mass loss on the RGB}
The rotational properties of sdBs residing on the EHB are very similar to those of hot BHB stars. The only exception is that the EHB stars obviously lost more envelope in the red-giant phase and therefore retained less angular momentum. How the envelope is lost does not affect the rotational velocities of sdB stars, since the ${v_{\rm rot}\sin\,i}$-distribution of RV variable systems with orbital periods sufficiently long to neglect the tidal influence of the companion (Fig.~\ref{fig:distribrv}) is similar to that of apparently single sdB stars (Fig.~\ref{fig:distribsingle}) and to that of sdB stars with visible main sequence companions (Fig.~\ref{fig:distribcomp}).
The abundance patterns of sdBs are dominated by diffusion processes very similar to those of the hot BHB stars (Geier et al. \cite{geier10a}). No surface convection zone should be present, and according to the model of Brown (\cite{brown07}) the angular momentum of the outer layers should be low. Stellar winds and magnetic fields may help to slow down the upper layers of the star. However, Unglaub (\cite{unglaub08}) showed that the weak winds predicted for sdB stars are most likely fractionated and are therefore not able to carry away the most abundant elements hydrogen and helium.
Angular momentum gained or retained from the formation process may also be stored in the stellar core, which may be rapidly rotating. Kawaler \& Hostler (\cite{kawaler05}) proposed such a scenario and suggested an asteroseismic approach to probe the rotation of the inner regions of sdBs. Van Grootel et al. (\cite{vangrootel08}) and Charpinet et al. (\cite{charpinet08}) performed such an analysis for the two short-period sdB pulsators Feige\,48 and PG\,1336$-$018, respectively, and found no deviation from rigid rotation at least in the outer layers of these stars down to about half the stellar radius. But these results may not be representative, because both stars are in close binary systems and are synchronised by the tidal influence of their companions (Geier et al. \cite{geier10b}). The rigid body rotation may have been caused by this effect and may not be a general feature of sdBs. Another setback of these analyses is the problem that p-mode pulsations are not suited to probe the innermost regions of sdBs. In contrast to that, g-mode pulsations reach the stellar core and it should be possible to measure the rotational properties of the whole stellar interior with asteroseismic methods. With the availability of high-precision light curves from the Kepler and CoRoT missions, the analysis of g-mode pulsators became possible and first results have been published by van Grootel et al. (\cite{vangrootel10}) and Charpinet et al. (\cite{charpinet11b}).
For the RV variable systems CE ejection is the only feasible formation channel. The systems with visible companions may have lost their envelopes via stable RLOF. Very recently, \O stensen et al. (\cite{oestensen12}) and Deca et al. (\cite{deca12}) reported the discovery of sdB+MS binaries with orbital periods up to $\simeq1200\,{\rm d}$, which may have been sufficiently close for mass transfer.
However, the visible companions to the sdBs may still have been too widely separated to have interacted with the subdwarf progenitors. More detailed binary evolution calculations are needed to solve this problem. Common envelope ejection and stable RLOF form similar sdB stars, because in both cases the hydrogen envelope is removed and the helium burning should start under similar conditions. It would therefore not be surprising if their ${v_{\rm rot}\sin\,i}$-distributions were to look similar.
\subsection{Where are the He-WD merger products?}
The ${v_{\rm rot}\sin\,i}$-distribution of the single sdB stars (Fig.~\ref{fig:distribsingle}) is particularly hard to understand in the context of the WD merger scenario. If a certain fraction or even all of the apparently single sdBs had been formed in this way, one would not expect a ${v_{\rm rot}\sin\,i}$-distribution that resembles that of the post-CE or post-RLOF sdBs. Gourgouliatos \& Jeffery (\cite{gourgouliatos06}) showed that the merger product of two WDs would rotate faster than the break-up velocity, if angular momentum were conserved. These authors concluded that angular momentum must be lost during the merger process. One way to lose angular momentum is via stellar winds and magnetic fields. Another explanation may be the interaction with the accretion disc during the merger. If the less massive object is disrupted, it should form an accretion disc around the more massive component. The WD can only gain mass if angular momentum is transported outward in the disc. This process is expected to spin down the merger product (Gourgouliatos \& Jeffery \cite{gourgouliatos06}). According to a model proposed by Podsiadlowski (priv. comm.), the merger is accompanied by a series of outbursts caused by the ignition of helium. These flashes remove angular momentum from the merged remnant and should slow it down to rotational velocities of less than $20\,{\rm km\,s^{-1}}$.
However, even if it is possible to slow down the merged remnant of two He-WDs, it is very unlikely that the merger pro\-ducts would have a ${v_{\rm rot}\sin{i}}$-distribution almost identical to that of sdBs that are known to have formed via CE ejection or possibly stable RLOF. This would require an extreme fine-tuning of parameters, unless there is an as yet unknown mechanism at work, which leads to uniform rotation of the radiative, diffusion-dominated atmospheres. It is therefore questionable whether our sample contains stars that were formed by a He-WD merger or a CE-merger event. If this is not the case, then given the size of our sample it is safe to conclude that the merger channel does not contribute significantly to the observed population of single hydrogen-rich sdO/Bs, in contrast to the models of Han et al. (\cite{han02}, \cite{han03}).
This conclusion is consistent with the most recent results by Fontaine et al. (\cite{fontaine12}), who studied the empirical mass distribution of sdB stars derived from eclipsing binary systems and asteroseismic analyses. The lack of sdB stars more massive than $\simeq0.5\,M_{\odot}$, which would be the outcome of the merger channel, led to the conclusion that mergers are less frequent in the formation process of isolated sdB stars than predicted by theory.
The only known single and fast rotating hot subdwarf star EC\,22081$-$1916 (Geier et al. \cite{geier11a}) may be the rare outcome of a CE merger event as suggested by Politano et al. (\cite{politano08}). It is unique among $\simeq100$ sdBs of our sample.
Possible candidates for WD-merger products are the helium-rich sdOs (He-sdOs, Str\"oer et al. \cite{stroeer07}), since Hirsch et al. (\cite{hirsch09}) measured ${v_{\rm rot}\sin\,i}$ values of $20-30\,{\rm km\,s^{-1}}$ for some of those stars. Although their velocities are not particularly high, they are significantly different from the typical ${v_{\rm rot}\sin\,i}$ of sdBs. However, while the He-sdOs were first considered to be single stars (Napiwotzki et al. \cite{napiwotzki08}), evidence is growing that a fraction of them resides in close binaries (Green et al. \cite{green08}; Geier et al. \cite{geier11b}). At least those He-sdOs could not have been formed by a He-WD merger.
\subsection{Alternative formation scenarios}
Because the canonical binary scenario for sdB formation, which rests on the three pillars CE ejection, stable RLOF and He-WD merger, turned out to be very successful not only in explaining the properties of sdBs in the field (Han et al. \cite{han02}, \cite{han03}), but also in globular clusters (Han \cite{han08}) and the UV-upturn phenomenon in old galaxies (Han et al. \cite{han07}), the possible lack of merger candidates poses a problem.
Alternative formation scenarios such as CE ejection triggered by substellar companions (Soker \cite{soker98}; Bear \& Soker \cite{bear12}) may be responsible for the formation of apparently single sdBs. Evidence grows that such objects are quite common around sdB stars (e.g. Silvotti et al. \cite{silvotti07}; Geier et al. \cite{geier11c}; Charpinet et al. \cite{charpinet11a}). In the light of the results presented here and other recent observational evidence, the conclusion has to be drawn that the question of sdB formation is still far from settled.
\section{Connection to white dwarfs \label{sec:wd}}
Owing to their thin hydrogen envelopes, hot subdwarf stars will not evolve to the asymptotic giant branch (AGB-manqu\'e, Dorman et al. \cite{dorman93}). After about $100\,{\rm Myr}$ of core He-burning on the EHB and a shorter episode of He-shell burning, these objects will join the WD cooling sequence.
The rotational properties of single WDs are difficult to determine. Owing to the high pressure in the dense WD atmospheres, the spectral lines of WDs are strongly broadened and hence do not appear to be suitable to measure ${v_{\rm rot}\sin{i}}$. However, the H${\rm_\alpha}$ line often displays a sharp line core, which is caused by NLTE effects. In a small fraction of the WD-population metal lines are visible. However, excellent high-resolution spectra are necessary to constrain the projected rotational velocity (Berger et al. \cite{berger05}).
The derived upper limits ($\simeq10-50\,{\rm km\,s^{-1}}$) are consistent with the much lower rotational velocities of pulsating WDs derived with asteroseismic methods ($\simeq0.2-3.5\,{\rm km\,s^{-1}}$, Kawaler \cite{kawaler03}). Most single WDs are therefore obviously rather slow rotators. The reason for this is most likely a significant loss of mass and angular momentum due to stellar winds and thermal pulses in the AGB-phase, as has been shown by Charpinet et al. (\cite{charpinet09}).
The properties of WDs evolved from sdB progenitors on the other hand should be very different. Since the hot subdwarfs bypass the AGB-phase, both their masses and their angular momenta are expected to remain more or less constant when evolving to become WDs.
The average mass of these sdB remnants ($\simeq0.47\,M_{\rm \odot}$) is expected to be significantly lower than the average mass of normal WDs ($\simeq0.6\,M_{\rm \odot}$). But more importantly, the rotational velocities of these WDs must be very high. We have shown that single sdBs have small, but still detectable ${v_{\rm rot}\sin{i}}$. Assuming rigid rotation and conservation of mass and angular momentum, the rotational velocity at the surface scales with the stellar radius. Because the radius decreases by a factor of about $10$, the rotational velocity should increase by a factor of about $100$. Assuming an average ${v_{\rm rot}\simeq8\,{\rm km\,s^{-1}}}$ for single sdBs, WDs evolved through an EHB-phase should therefore have an average ${v_{\rm rot}\simeq800\,{\rm km\,s^{-1}}}$. Because about $1\%$ of all WDs are expected to have evolved through an EHB-phase, we expect a similar fraction of extremely fast rotating, low-mass WDs. These high ${v_{\rm rot}\sin{i}}$-values should be easily detectable even in medium-resolution spectra. The sample of WDs with observed spectra from the Sloan Digital Sky Survey (Eisenstein et al. \cite{eisenstein06}) for example should contain more than $100$ of these objects.
\section{Summary \label{sec:summary}}
We extended a project to derive the rotational properties of sdB stars and determined the projected rotational velocities of 105 sdB stars by measuring the broadening of metal lines using high-resolution spectra. All stars in our sample have low ${v_{\rm rot}\sin{i}}<10\,{\rm km\,s^{-1}}$. For $\simeq75\%$ of the sample we were able to determine significant rotation. The distribution of projected rotational velocities is consistent with an average rotation of $\simeq8\,{\rm km\,s^{-1}}$ for the sample. Furthermore, the $v_{\rm rot}\sin{i}$-distributions of single sdBs, hot subdwarfs with main sequence companions vi\-sible in the spectra and close binary systems with periods exceeding $1.2\,{\rm d}$ are similar. The BHB and EHB stars are related in terms of surface rotation and angular momentum. Hot BHBs with diffusion-dominated atmospheres are slow rotators like the EHB stars, which lost more envelope and therefore angular momentum on the RGB. The uniform rotation distributions of single and wide binary sdBs pose a challenge to our understanding of hot subdwarf formation. In particular, the high fraction of He-WD mergers predicted by theory seems to be inconsistent with our results. We predict that the evolutionary channel of single sdB stars gives birth to a small population of rapidly rotating WDs with masses lower than average.
\begin{table*}[t!]
\caption{Projected rotational velocities of single sdBs and sdBs with visible companions.}
\label{tab:vrot}
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B/V}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument \\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HE\,0151$-$3919 & 20\,800 & 14.3$^{\rm B}$ & 66 & 1.06 & 27 & $<5.0$ & UVES \\
EC\,21494$-$7018 & 22\,400 & 11.2$^{\rm V}$ & 85 & & 16 & 8.6 $\pm$ 1.8 & FEROS \\
EC\,15103$-$1557 & 22\,600 & 12.9$^{\rm V}$ & 163 & & 8 & 6.5 $\pm$ 1.6 & FEROS \\
HD\,4539 & 23\,000 & 10.1$^{\rm B}$ & 112 & & 21 & 3.9 $\pm$ 1.0 & FEROS \\
EC\,11349$-$2753 & 23\,000 & 12.5$^{\rm B}$ & 185 & & 49 & 4.7 $\pm$ 1.0 & FEROS \\
EC\,14345$-$1729 & 23\,300 & 13.1$^{\rm V}$ & 117 & & 40 & 6.2 $\pm$ 1.0 & FEROS \\
HE\,0539$-$4246 & 23\,300 & 14.5$^{\rm B}$ & 40 & 0.87 & 19 & $<10.0$ & UVES \\
HE\,2307$-$0340$^{\rm no}$ & 23\,300 & 15.8$^{\rm B}$ & 61 & 0.89 & 17 & $<5.0$ & UVES \\
PG\,1432$+$004$^{\rm nr}$ & 23\,600 & 12.0$^{\rm B}$ & 170 & & 13 & 4.7 $\pm$ 1.0 & FEROS \\
EC\,19563$-$7205$^{\rm c}$ & 23\,900 & 12.8$^{\rm B}$ & 85 & & 34 & 9.8 $\pm$ 1.0 & FEROS \\
EC\,20106$-$5248 & 24\,500 & 12.6$^{\rm V}$ & 114 & & 47 & 7.8 $\pm$ 1.0 & FEROS \\
BD$+$48$^{\circ}$\,2721 & 24\,800 & 10.5$^{\rm B}$ & 326 & & 10 & 4.7 $\pm$ 1.4 & FOCES \\
HD\,205805 & 25\,000 & 9.9$^{\rm B}$ & 255 & & 20 & 4.5 $\pm$ 1.0 & FEROS \\
HE\,0321$-$0918$^{\rm no}$ & 25\,100 & 14.7$^{\rm B}$ & 37 & 1.22 & 7 & 5.6 $\pm$ 2.3 & UVES \\
PG\,1653$+$131 & 25\,400 & 14.1$^{\rm B}$ & 68 & & 32 & 8.3 $\pm$ 1.0 & FEROS \\
HE\,2237$+$0150 & 25\,600 & 15.8$^{\rm B}$ & 40 & 0.78 & 11 & 8.5 $\pm$ 1.8 & UVES \\
PG\,0342$+$026 & 26\,000 & 11.1$^{\rm B}$ & 190 & & 54 & 6.2 $\pm$ 1.0 & FEROS \\
PG\,2122$+$157$^{\rm c}$ & 26\,000 & 15.0$^{\rm B}$ & 67 & 0.78 & 13 & 7.9 $\pm$ 1.4 & UVES \\
GD\,108 & 26\,100 & 13.3$^{\rm B}$ & 97 & & 6 & 6.0 $\pm$ 1.8 & FEROS \\
Feige\,65 & 26\,200 & 11.8$^{\rm B}$ & 150 & & 18 & 7.2 $\pm$ 1.1 & FOCES \\
PHL\,44$^{\rm l}$ & 26\,600 & 13.0$^{\rm B}$ & 85 & & 31 & 8.4 $\pm$ 1.0 & FEROS \\
HE\,0513$-$2354 & 26\,800 & 15.8$^{\rm B}$ & 21 & 0.99 & 18 & $<10.0$ & UVES \\
HE\,0135$-$6150 & 27\,000 & 16.3$^{\rm B}$ & 37 & 0.71 & 13 & 5.5 $\pm$ 1.7 & UVES \\
SB\,815 & 27\,000 & 10.6$^{\rm B}$ & 85 & & 48 & 7.3 $\pm$ 1.0 & FEROS \\
HE\,2201$-$0001 & 27\,100 & 16.0$^{\rm B}$ & 35 & 1.10 & 28 & $<5.0$ & UVES \\
PG\,2205$+$023 & 27\,100 & 12.9$^{\rm B}$ & 36 & & 9 & $<10.0$ & FEROS \\
PG\,2314$+$076$^{\rm nb}$ & 27\,200 & 13.9$^{\rm B}$ & 71 & & 6 & 6.0 $\pm$ 2.2 & FEROS \\
SB\,485 & 27\,700 & 13.0$^{\rm B}$ & 112 & 0.71 & 24 & 7.2 $\pm$ 1.0 & UVES \\
KUV\,01542$-$0710$^{\rm c}$ & 27\,800 & 16.3$^{\rm B}$ & 58 & 0.92 & 8 & 7.2 $\pm$ 2.1 & UVES \\
HE\,2156$-$3927$^{\rm c}$ & 28\,000 & 14.1$^{\rm B}$ & 62 & 0.61 & 16 & 7.0 $\pm$ 1.2 & UVES \\
EC\,03591$-$3232 & 28\,000 & 11.2$^{\rm V}$ & 131 & & 34 & 4.8 $\pm$ 1.0 & FEROS \\
EC\,12234$-$2607 & 28\,000 & 13.8$^{\rm B}$ & 60 & & 19 & 6.8 $\pm$ 1.4 & FEROS \\
PG\,2349$+$002 & 28\,000 & 12.0$^{\rm B}$ & 68 & & 11 & 5.7 $\pm$ 1.5 & FEROS \\
HE\,2322$-$0617$^{\rm c,no}$ & 28\,100 & 15.7$^{\rm B}$ & 62 & 0.70 & 15 & 6.8 $\pm$ 1.3 & UVES \\
PG\,0258$+$184$^{\rm c,no}$ & 28\,100 & 15.2$^{\rm B}$ & 48 & 0.99 & 12 & 7.2 $\pm$ 1.7 & UVES \\
HE\,0136$-$2758$^{\rm no}$ & 28\,200 & 16.2$^{\rm B}$ & 29 & 1.20 & 27 & $<5.0$ & UVES \\
HE\,0016$+$0044$^{\rm no}$ & 28\,300 & 13.1$^{\rm B}$ & 58 & 0.67 & 14 & 6.5 $\pm$ 1.3 & UVES \\
PG\,1549$-$001$^{\rm no}$ & 28\,300 & 14.8$^{\rm B}$ & 45 & 1.16 & 20 & 5.6 $\pm$ 1.1 & UVES \\
HE\,2349$-$3135 & 28\,500 & 15.6$^{\rm B}$ & 53 & 1.13 & 13 & 10.0 $\pm$ 1.7 & UVES \\
EC\,01120$-$5259 & 28\,900 & 13.5$^{\rm V}$ & 73 & & 19 & 5.8 $\pm$ 1.2 & FEROS \\
HE\,0007$-$2212$^{\rm no}$ & 29\,000 & 14.8$^{\rm B}$ & 53 & 0.64 & 21 & 7.4 $\pm$ 1.0 & UVES \\
LB\,275$^{*}$ & 29\,300 & 14.9$^{\rm B}$ & 48 & 1.16 & 20 & 5.6 $\pm$ 1.1 & UVES \\
EC\,03263$-$6403 & 29\,300 & 13.2$^{\rm V}$ & 32 & & 40 & $<5.0$ & FEROS \\
HE\,1254$-$1540$^{\rm c,no}$ & 29\,700 & 15.2$^{\rm B}$ & 54 & 0.75 & 20 & 7.2 $\pm$ 1.3 & UVES \\
PG\,1303$+$097 & 29\,800 & 14.3$^{\rm B}$ & 51 & & 18 & 6.1 $\pm$ 1.5 & FEROS \\
HE\,2222$-$3738 & 30\,200 & 14.2$^{\rm B}$ & 61 & 0.83 & 28 & 8.7 $\pm$ 1.0 & UVES \\
HE\,2238$-$1455 & 30\,400 & 16.0$^{\rm B}$ & 48 & 0.80 & 14 & $<5.0$ & UVES \\
EC\,03470$-$5039 & 30\,500 & 13.6$^{\rm V}$ & 53 & & 9 & 7.3 $\pm$ 2.0 & FEROS \\
Feige\,38 & 30\,600 & 12.8$^{\rm B}$ & 148 & & 34 & 5.3 $\pm$ 1.0 & FEROS \\
HE\,1038$-$2326$^{\rm c}$ & 30\,600 & 15.8$^{\rm B}$ & 34 & 1.27 & 28 & $<5.0$ & UVES \\
PG\,1710$+$490 & 30\,600 & 12.1$^{\rm B}$ & 80 & & 11 & 7.1 $\pm$ 1.6 & FOCES \\
HE\,0447$-$3654 & 30\,700 & 14.6$^{\rm V}$ & 44 & & 11 & 7.3 $\pm$ 1.8 & FEROS \\
EC\,14248$-$2647 & 31\,400 & 12.0$^{\rm V}$ & 104 & & 14 & 7.0 $\pm$ 1.5 & FEROS \\
HE\,0207$+$0030$^{\rm no}$ & 31\,400 & 14.7$^{\rm B}$ & 27 & 1.30 & 7 & 5.1 $\pm$ 2.3 & UVES \\
KPD\,2109$+$4401$^{\rm s}$ & 31\,800 & 13.2$^{\rm B}$ & 136 & & 9 & 10.5 $\pm$ 1.6 & HIRES \\
EC\,02542$-$3019 & 31\,900 & 12.8$^{\rm B}$ & 65 & & 13 & 7.3 $\pm$ 1.5 & FEROS \\
$[$CW83$]$\,1758$+$36$^{\rm nb}$ & 32\,000 & 11.1$^{\rm B}$ & 110 & & 5 & 5.7 $\pm$ 1.4 & FOCES \\
TON\,S\,155$^{\rm c}$ & 32\,300 & 14.9$^{\rm B}$ & 35 & 0.85 & 14 & $<5.0$ & UVES \\
EC\,21043$-$4017 & 32\,400 & 13.1$^{\rm V}$ & 65 & & 8 & 5.6 $\pm$ 1.8 & FEROS \\
EC\,20229$-$3716 & 32\,500 & 11.4$^{\rm V}$ & 153 & & 29 & 4.5 $\pm$ 1.0 & FEROS \\
HS\,2125$+$1105$^{\rm c}$ & 32\,500 & 16.4$^{\rm B}$ & 29 & 0.80 & 8 & 6.0 $\pm$ 2.4 & UVES \\
HE\,1221$-$2618$^{\rm c}$ & 32\,600 & 14.9$^{\rm B}$ & 35 & 1.06 & 11 & 6.8 $\pm$ 1.6 & UVES \\
HS\,2033$+$0821$^{\rm no}$ & 32\,700 & 14.4$^{\rm B}$ & 43 & 1.14 & 37 & $<5.0$ & UVES \\
HE\,0415$-$2417$^{\rm no}$ & 32\,800 & 16.2$^{\rm B}$ & 34 & 0.83 & 10 & $<10.0$ & UVES \\
EC\,05479$-$5818 & 33\,000 & 13.1$^{\rm V}$ & 81 & & 20 & 5.8 $\pm$ 1.1 & FEROS \\
HE\,1200$-$0931$^{\rm c,no}$ & 33\,400 & 16.2$^{\rm B}$ & 30 & 0.86 & 12 & $<5.0$ & UVES \\
\hline
\\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B/V}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument \\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PHL\,932 & 33\,600 & 12.0$^{\rm B}$ & 102 & 1.10 & 12 & 9.0 $\pm$ 1.3 & UVES \\
HE\,1422$-$1851$^{\rm c,no}$ & 33\,900 & 16.3$^{\rm B}$ & 14 & 0.58 & 10 & $<10.0$ & UVES \\
PHL\,555 & 34\,100 & 13.8$^{\rm B}$ & 56 & 0.88 & 17 & 6.9 $\pm$ 1.2 & UVES \\
HE\,1419$-$1205$^{\rm c}$ & 34\,200 & 16.2$^{\rm B}$ & 28 & 0.69 & 16 & $<10.0$ & UVES \\
PG\,1219$+$534$^{\rm s}$ & 34\,300 & 12.4$^{\rm B}$ & 140 & & 11 & 5.7 $\pm$ 1.4 & HIRES \\
HS\,2216$+$1833$^{\rm c}$ & 34\,400 & 13.8$^{\rm B}$ & 54 & 0.90 & 11 & 5.3 $\pm$ 1.6 & UVES \\
HE\,1050$-$0630$^{\rm no}$ & 34\,500 & 14.0$^{\rm B}$ & 59 & 1.20 & 28 & 7.3 $\pm$ 1.4 & UVES \\
HE\,1519$-$0708$^{\rm no}$ & 34\,500 & 15.6$^{\rm B}$ & 20 & 0.84 & 8 & 9.0 $\pm$ 2.4 & UVES \\
HE\,1450$-$0957 & 34\,600 & 15.1$^{\rm B}$ & 32 & 0.71 & 6 & 9.0 $\pm$ 2.4 & UVES \\
EC\,13047$-$3049 & 34\,700 & 12.8$^{\rm V}$ & 68 & & 5 & 6.8 $\pm$ 3.6 & FEROS \\
HS\,1710$+$1614$^{\rm no}$ & 34\,800 & 15.7$^{\rm B}$ & 38 & 1.30 & 13 & $<5.0$ & UVES \\
PHL\,334 & 34\,800 & 12.5$^{\rm B}$ & 87 & & 13 & $<5.0$ & FEROS \\
Feige\,49 & 35\,000 & 13.2$^{\rm B}$ & 119 & & 40 & 6.2 $\pm$ 1.0 & FEROS \\
HE\,2151$-$1001$^{\rm s}$ & 35\,000 & 15.6$^{\rm B}$ & 42 & 0.66 & 6 & 6.7 $\pm$ 2.4 & UVES \\
PG\,0909$+$164$^{\rm s}$ & 35\,300 & 13.9$^{\rm B}$ & 52 & & 4 & $<10.0$ & FEROS \\
HE\,1021$-$0255$^{\rm no}$ & 35\,500 & 15.3$^{\rm B}$ & 40 & 1.61 & 11 & $<10.0$ & UVES \\
PG\,0909$+$276$^{\rm nb}$ & 35\,500 & 13.9$^{\rm B}$ & 82 & & 13 & 9.3 $\pm$ 1.4 & FOCES \\
HE\,0101$-$2707 & 35\,600 & 15.0$^{\rm B}$ & 67 & 0.85 & 12 & 8.1 $\pm$ 1.5 & UVES \\
EC\,03408$-$1315 & 35\,700 & 13.6$^{\rm V}$ & 66 & & 11 & 8.8 $\pm$ 1.8 & FEROS \\
HE\,1352$-$1827$^{\rm c}$ & 35\,700 & 16.2$^{\rm B}$ & 24 & 0.85 & 5 & 8.2 $\pm$ 2.7 & UVES \\
PG\,1207$-$032$^{\rm no}$ & 35\,700 & 13.1$^{\rm B}$ & 50 & 0.64 & 9 & 6.6 $\pm$ 1.6 & UVES \\
HE\,0019$-$5545 & 35\,700 & 15.8$^{\rm B}$ & 38 & 0.76 & 7 & 5.9 $\pm$ 2.3 & UVES \\
GD\,619 & 36\,100 & 13.9$^{\rm B}$ & 96 & 0.81 & 10 & 6.1 $\pm$ 1.5 & UVES \\
HE\,1441$-$0558$^{\rm c,no}$ & 36\,400 & 14.4$^{\rm B}$ & 30 & 0.70 & 8 & 6.9 $\pm$ 2.0 & UVES \\
HE\,0123$-$3330 & 36\,600 & 15.2$^{\rm B}$ & 48 & 0.66 & 8 & 6.9 $\pm$ 1.8 & UVES \\
PG\,1505$+$074 & 37\,100 & 12.2$^{\rm B}$ & 153 & & 4 & $<5.0$ & FEROS \\
HE\,1407$+$0033$^{\rm no}$ & 37\,300 & 15.5$^{\rm B}$ & 35 & 0.72 & 9 & $<10.0$ & UVES \\
PG\,1616$+$144$^{\rm nb}$ & 37\,300 & 13.5$^{\rm B}$ & 44 & & 4 & $<10.0$ & FEROS \\
EC\,00042$-$2737$^{\rm c}$ & 37\,500 & 13.9$^{\rm B}$ & 37 & & 9 & $<10.0$ & FEROS \\
PHL\,1548 & 37\,400 & 12.5$^{\rm B}$ & 90 & & 10 & 9.1 $\pm$ 1.6 & FEROS \\
PB\,5333$^{\rm nb}$ & 40\,600 & 12.5$^{\rm B}$ & 66 & & 2 & $<10.0$ & FEROS \\
$[$CW83$]$\,0512$-$08 & 38\,400 & 11.3$^{\rm B}$ & 124 & & 14 & 7.7 $\pm$ 1.1 & FEROS \\
\hline
\\
\end{tabular}
\tablefoot{The average seeing is only given if the spectra were obtained with a wide
slit in the course of the SPY survey. In all other cases the seeing should not
influence the measurements. $^{\rm c}$Main sequence companion visible in the spectrum (Lisker et al. \cite{lisker05}). $^{\rm s}$Pulsating subdwarf of V\,361\,Hya type. $^{\rm l}$Pulsating subdwarf of V\,1093\,Her type. No short-period pulsations have been detected either by $^{\rm nb}$Bill\`{e}res et al. (\cite{billeres02}), $^{\rm nr}$Randall et al. (\cite{randall06}) or $^{\rm no}$\O stensen et al. (\cite{oestensen10}). $^{*}$Misidentified as CBS\,275 in Lisker et al. (\cite{lisker05}).}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Projected rotational velocities of radial velocity variable sdBs.}
\label{tab:vrotrv}
\begin{center}
\begin{tabular}{lllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B/V}$ & S/N & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument\\
& [K] & [mag] & & & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
TON\,S\,135 & 25\,000 & 13.1$^{\rm B}$ & 45 & 35 & 6.4 $\pm$ 1.0 & FEROS \\
LB\,1516$^{\rm l}$ & 25\,200 & 12.7$^{\rm B}$ & 58 & 23 & 6.0 $\pm$ 1.3 & FEROS \\
PHL\,457$^{\rm l}$ & 26\,500 & 13.0$^{\rm B}$ & 59 & 47 & 6.1 $\pm$ 1.0 & FEROS \\
EC\,14338$-$1445 & 27\,700 & 13.5$^{\rm V}$ & 71 & 39 & 8.9 $\pm$ 1.0 & FEROS \\
PG\,1725$+$252 & 28\,900 & 11.5$^{\rm B}$ & 45 & 11 & 7.4 $\pm$ 1.1 & HRS \\
PG\,1519$+$640 & 30\,300 & 12.1$^{\rm B}$ & 104 & 11 & 9.4 $\pm$ 1.4 & FOCES \\
PG\,2151$+$100 & 32\,700 & 12.9$^{\rm B}$ & 69 & 9 & 9.0 $\pm$ 1.7 & FEROS \\
\hline
\\
\end{tabular}
\tablefoot{$^{\rm l}$Pulsating subdwarf of V\,1093\,Her type.}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Comparison with literature.}
\label{tab:vrotlit}
\begin{center}
\begin{tabular}{lrrl}
\hline
\noalign{\smallskip}
System & This work & Literature & Reference \\
& ${v_{\rm rot}\,\sin\,i}$ & ${v_{\rm rot}\,\sin\,i}$ & \\
& [${\rm km\,s^{-1}}$] & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
KPD\,2109$+$4401 & $10.5\pm1.6$ & $<10.0$ & Heber \\
PG\,1219$+$534 & $5.7\pm1.4$ & $<10.0$ & et al. (\cite{heber00}) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
BD$+$48$^{\circ}$\,2721 & $4.7\pm1.4$ & $<5.0$ & Edelmann \\
Feige\,65 & $7.2\pm1.1$ & $<5.0$ & et al. (\cite{edelmann01}) \\
HD\,205805 & $4.5\pm1.0$ & $<5.0$ & \\
HD\,4539 & $3.9\pm1.0$ & $<5.0$ & \\
LB\,1516 & $6.0\pm1.3$ & $<5.0$ & \\
PG\,0342$+$026 & $6.2\pm1.0$ & $<5.0$ & \\
PG\,0909$+$276 & $9.3\pm1.4$ & $<5.0$ & \\
PHL\,932 & $9.0\pm1.3$ & $<5.0$ & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Feige\,49 & $6.2\pm1.0$ & $0.0^{*}$ & Przybilla \\
HD\,205805 & $4.5\pm1.0$ & $0.0^{*}$ & et al. (\cite{przybilla06}) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\tablefoot{$^{*}$Adopted value for line fits is below the detection limit.}
\end{center}
\end{table*}
\begin{acknowledgements}
S. G. was supported by the Deutsche Forschungsgemeinschaft under grant
He~1354/49-1. The authors thank N. Reid, R. Napiwotzki, L. Morales-Rueda and H. Edelmann for providing their data.
Furthermore, we would like to thank the referee G. Fontaine for his comments and suggestions.
\end{acknowledgements}
|
\section{Introduction}
Dual stable Grothendieck polynomials $g_{\l/\m}(x)$ were introduced in \cite{LP2007}. They are Hopf-dual to the stable Grothendieck polynomials, which represent some classes of the structure sheaves of Schubert varieties. The connection of stable and dual stable Grothendieck polynomials with the $K$-theory of the Grassmannian has been discussed in various papers including \cite{LS1982,FK1996,B2002}, and \cite{LP2007}. The paper \cite{LP2007} gives an explicit combinatorial rule for the coefficients of polynomials $g_{\lambda}(x)$ in the basis of Schur polynomials $s_{\mu}(x)$. We extend this result to the case of $g_{\l/\m}(x)$ for a skew shape ${\l/\m}$, and give a different rule (for straight shapes, it coincides with the rule of \cite{B2012}) that provides the same coefficients for straight shapes and extends the classical Littlewood-Richardson rule. We do this by constructing a crystal graph (see \cite{K1995}) on the set ${\mathcal{R}}({\l/\m})$ of all reverse plane partitions of shape ${\l/\m}$ with entries not exceeding a fixed number $m>0$.
\subsection{Main results}
To a reverse plane partition $T\in {\mathcal{R}}({\l/\m})$ we assign a reading word $r(T)$ in the following way: ignore each entry of $T$ that is equal to the entry directly below it; then read all the remaining entries in the left-to-right bottom-to-top order (the usual reading order for Young tableaux). We then define a family of operators $e_1,e_2,\dots,e_{m-1}$ on the set ${\mathcal{R}}({\l/\m})$ which are essentially the usual parenthesization operators applied to the reading word (see \cite{LS1978}).
\begin{theorem}
\label{thm:crystal}
The operators $e_1,e_2,\dots, e_{m-1}$ satisfy the crystal axioms (which can be found in \cite{K1995} and will also be discussed in the sequel).
\end{theorem}
Therefore we get a crystal graph structure on ${\mathcal{R}}({\l/\m})$. As a direct application (see \cite{K1995}), we obtain a Littlewood-Richardson rule for reverse plane partitions:
\begin{corollary}
\label{cor:LR}
The dual stable Grothendieck polynomial $g_{\l/\m}(x)$ is expanded in terms of Schur polynomials $s_\nu(x)$ as follows:
$$ g_{\l/\m}(x)=\sum_\nu h_{\l/\m}^\nu s_\nu(x),$$
where the sum is over all Young diagrams $\nu$, and the coefficient $h_{\l/\m}^\nu$ is equal to the number of reverse plane partitions $T$ of shape ${\l/\m}$ and weight $\nu$ such that the reading word $r(T)$ is a lattice word.
\end{corollary}
We also give a self-contained proof of this Corollary without using the theory of crystal graphs. Note that the highest degree homogeneous component of $g_{\l/\m}(x)$ is the skew-Schur polynomial $s_{\l/\m}(x)$, so Corollary \ref{cor:LR} is an extension of the Littlewood-Richardson rule for skew-Schur polynomials.
\begin{remark}
\def{\mathrm{ceq}}{{\mathrm{ceq}}}
In \cite{GGL2015}, the following refinement $\tilde g_{\l/\m}(x;t)$ of $g_{\l/\m}(x)$ was introduced. For a reverse plane partition $T\in{\mathcal{R}}({\l/\m})$ let ${\mathrm{ceq}}(T):=(c_1,c_2,\dots)$ be a weak composition whose $i$-th entry $c_i$ is equal to the number of columns $j$ such that the boxes $(i,j)$ and $(i+1,j)$ both belong to ${\l/\m}$ and the entries of $T$ in these boxes are the same. Let $t=(t_1,t_2,\dots)$ be a vector of indeterminates, and put $t^{{\mathrm{ceq}}(T)}:=t_1^{c_1}t_2^{c_2}\dots$. Then the bounded degree power series $\tilde g_{\l/\m}(x;t)$ is defined as the sum of $x^Tt^{{\mathrm{ceq}}(T)}$ over all reverse plane partitions $T$ of shape ${\l/\m}$. It will be clear later that the operators $e_1,e_2,\dots,e_{m-1}$ preserve this ${\mathrm{ceq}}$-statistic; therefore, Corollary \ref{cor:LR} also admits a refinement:
$$\tilde g_{\l/\m}(x;t)=\sum_\alpha t^\alpha \sum_\nu h_{\l/\m}^{\nu,\alpha} s_\nu(x),$$
where the first sum is over all weak compositions $\alpha$, and $h_{\l/\m}^{\nu,\alpha}$ counts the number of reverse plane partitions $T$ of shape ${\l/\m}$ and weight $\nu$ such that the reading word $r(T)$ is a lattice word with an extra property that ${\mathrm{ceq}}(T)=\alpha$.
\end{remark}
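With a reverse plane partition stored as a dictionary that maps boxes $(\text{row},\text{column})$ (in matrix coordinates) to their entries, the $\mathrm{ceq}$ statistic can be read off directly from this definition. The following minimal Python sketch is an illustration only; the dictionary representation and the function name are ad hoc choices rather than notation from the literature.
\begin{verbatim}
def ceq(T):
    """ceq(T): the i-th entry counts the columns j for which the boxes
    (i, j) and (i+1, j) both lie in the shape and carry equal entries."""
    nrows = max(row for row, _ in T)
    return [sum(1 for (row, col), v in T.items()
                if row == i and T.get((i + 1, col)) == v)
            for i in range(1, nrows)]
\end{verbatim}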
\subsection{Previous research}
There is already a combinatorial rule for the coefficients $h_{\l/\m}^\nu$ in \cite{LP2007} for the case when ${\mu}=\emptyset$ and ${\l/\m}={\lambda}$ is a straight shape. Namely, $h_{\lambda}^\nu$ is equal to the number $f_{\lambda}^\nu$ of \textit{elegant fillings} of ${\lambda}/\nu$, that is, the number of semi-standard Young tableaux $T$ of shape ${\lambda}/\nu$ such that all entries in the $i$-th row of $T$ are strictly less than $i$. This formula is Hopf-dual to the corresponding formula for stable Grothendieck polynomials that appeared earlier in \cite[Theorem 2.16]{L2000}, which implies that the dual stable Grothendieck polynomials are indeed Hopf-dual to the usual stable Grothendieck polynomials. To prove this rule, Lam and Pylyavskyy in \cite{LP2007} construct a weight-preserving bijection between reverse plane partitions of shape ${\lambda}$ and pairs $(S,U)$, where $S$ is a semi-standard Young tableau of some shape $\mu$ and $U$ is an elegant filling of ${\lambda}/\mu$. Following this bijection one can deduce that $T$ is a reverse plane partition of shape ${\lambda}$ and weight $\nu$ whose reading word is a lattice word if and only if it corresponds to a pair $(S,U)$ such that $S$ is the filling of the shape $\nu$ with all entries in the $i$-th row equal to $i$, and $U$ is an elegant tableau of shape ${\lambda}/\nu$. Therefore the bijection from \cite{LP2007} restricted to the reverse plane partitions whose reading word is a lattice word proves the equality of the numbers $h_{\lambda}^\nu$ and $f_{\lambda}^\nu$.
For straight shapes, a combinatorial rule that involved the coefficients $h_{\lambda}^\nu$ instead of $f_{\lambda}^\nu$ was given in \cite[Proposition 5.3]{B2012} together with bijections that also show the equality of the numbers $h_{\lambda}^\nu$ and $f_{\lambda}^\nu$.
\subsection{The structure of the paper}
The rest of this section contains some background information about dual stable Grothendieck polynomials and crystal graphs, and an introduction to the operators $e_i$ that occur in the statement of Theorem \ref{thm:crystal}.
The second section is dedicated to the proof of Theorem \ref{thm:crystal} and Corollary \ref{cor:LR}; it explores further properties of, and connections between, the reading words of reverse plane partitions and the action of the operators $e_i$.
\subsection{Preliminaries}
\subsubsection{Reverse plane partitions}
We follow the notation of \cite{LP2007}. Let ${\l/\m}$ be a skew shape and $m$ a positive integer. A \textit{reverse plane partition} $T$ of shape ${\l/\m}$ with entries in $[m]:=\{1,\dots,m\}$ is a tableau of this shape such that its entries do not exceed $m$ and weakly increase both in rows and in columns. For $i\in [m]$, by $T(i)$ we denote the number of columns of $T$ that contain $i$. To each reverse plane partition $T$ we attach a monomial $x^T=\Pi_{i\in[m]} x_i^{T(i)}$. For a skew shape ${\l/\m}$, define the dual stable Grothendieck polynomial $g_{\l/\m}(x_1,\dots,x_m)$ as the sum of the monomials $x^T$ over all reverse plane partitions $T$ of shape ${\l/\m}$ with entries in $[m]$:
$$g_{\l/\m}(x)=\sum_T x^T.$$
As it was shown in \cite{LP2007}, these polynomials are symmetric.
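For small shapes this definition can be checked directly by brute force: enumerate all fillings of the shape with entries in $[m]$, keep those that weakly increase in rows and columns, and collect the monomials $x^T$. The following Python sketch does exactly this; it is an illustration of the definition only (not an efficient algorithm), and the function names and the dictionary representation of tableaux are ad hoc.
\begin{verbatim}
from itertools import product
from collections import Counter

def skew_boxes(lam, mu):
    """Boxes (row, col) of the skew shape lam/mu, rows counted from the top."""
    mu = tuple(mu) + (0,) * (len(lam) - len(mu))
    return [(r, c) for r, (lr, m_) in enumerate(zip(lam, mu), 1)
                   for c in range(m_ + 1, lr + 1)]

def g_coefficients(lam, mu, m):
    """Monomial expansion of g_{lam/mu}(x_1, ..., x_m), as a map
    from exponent vectors to coefficients."""
    boxes = skew_boxes(lam, mu)
    coeffs = Counter()
    for values in product(range(1, m + 1), repeat=len(boxes)):
        T = dict(zip(boxes, values))
        # keep T only if it weakly increases along rows and down columns
        if all(v <= T.get((r, c + 1), m) and v <= T.get((r + 1, c), m)
               for (r, c), v in T.items()):
            expo = tuple(len({c for (r, c), v in T.items() if v == i})
                         for i in range(1, m + 1))
            coeffs[expo] += 1
    return coeffs
\end{verbatim}
For instance, \texttt{g\_coefficients((1, 1), (), 2)} returns the exponent vectors $(1,0)$, $(0,1)$, and $(1,1)$, each with coefficient $1$, i.e.\ $g_{(1,1)}(x_1,x_2)=x_1+x_2+x_1x_2$, which one can check equals $s_{(1,1)}(x_1,x_2)+s_{(1)}(x_1,x_2)$.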
\subsubsection{Crystal graphs}
\label{subsubsection:crystal}
\def{\mathcal{S}}{{\mathcal{S}}}
Crystal graphs play an important role in the representation theory of certain quantized universal enveloping algebras, and have been a fruitful topic of research for the past two decades. We give a brief adaptation of crystal graph theory based on \cite{K1995,S2003,L1994}, at a level of detail that is minimal yet sufficient for the rest of this paper.
A crystal graph $G$ can be viewed as a set $V$ of vertices together with a set $e_1,\dots,e_{m-1}:V\to V\cup \{0\}$ of operators that act on the vertices of $G$ and return either a vertex of $G$ or zero. In addition, these operators are required to satisfy a set of simple \textit{crystal axioms}. If they do, then they are called \textit{crystal operators}, and $G$ is called \textit{a crystal graph}.
Instead of providing the list of these axioms, we give an important example of a crystal graph, which is the only crystal graph that we will be interested in. Fix $n>0$. Let ${\mathcal{S}}:=[m]^n$ be the set of all strings of length $n$ in the alphabet $[m]$. For $s=(s_1,s_2,\dots,s_n)\in{\mathcal{S}}$, the weight $w(s)=(w_1(s),\dots,w_m(s))$ is defined as
$$w_i(s):=\# \{j\in[n]:s_j=i\}.$$
For $i\in [m-1]$ we define the operator $E_i:{\mathcal{S}}\to{\mathcal{S}}\cup\{0\}$. For $s:=(s_1,s_2,\dots, s_n)\in {\mathcal{S}}$ the value $E_i(s)$ is evaluated using the following algorithm:
\begin{enumerate}
\item \label{step:ignore}Ignore all entries of $s$ other than the ones equal to $i$ or to $i+1$;
\item \label{step:pair}Ignore all occurrences of $i+1$ immediately followed by $i$;
\item \label{step:replace}After doing the previous step as many times as possible we obtain a string that consists of several $i$'s followed by several $i+1$'s. If there is at least one $i+1$, then $E_i$ replaces the leftmost $i+1$ by an $i$, and otherwise we set $E_i(s):=0$.
\end{enumerate}
In other words, $E_i$ labels each $i$ by a closing parenthesis, each $i+1$ by an opening parenthesis, and then it replaces the leftmost unmatched opening parenthesis by a closing one if there are any unmatched opening parentheses present. As an example, let $i=1,m=3,n=13$ and consider the following string $s:=(1,2,2,3,1,3,2,2,2,1,3,1,2)$. After step (\ref{step:ignore}) we ignore all $3$'s, so the string $s$ becomes $(1,2,2,*,1,*,2,2,2,1,*,1,2)$. Here the ignored entries are represented as stars. Next, we do step \ref{step:pair} as many times as needed, so our string is modified as follows:
\begin{eqnarray*}
s&=& (1,2,2,3,1,3,2,2,2,1,3,1,2)\\
&\to& (1,2,2,*,1,*,2,2,2,1,*,1,2)\\
&\to& (1,2,*,*,*,*,2,2,2,1,*,1,2)\\
&\to& (1,2,*,*,*,*,2,2,*,*,*,1,2)\\
&\to& (1,2,*,*,*,*,2,*,*,*,*,*,2).
\end{eqnarray*}
Now we can easily calculate the $E_1$-orbit of $s$:
\begin{eqnarray*}
E_1^0(s)&=& (1,2,2,3,1,3,2,2,2,1,3,1,2)\\
E_1^1(s)&=& (1,\mathbf{1},2,3,1,3,2,2,2,1,3,1,2)\\
E_1^2(s)&=& (1,1,2,3,1,3,\mathbf{1},2,2,1,3,1,2)\\
E_1^3(s)&=& (1,1,2,3,1,3,1,2,2,1,3,1,\mathbf{1})\\
E_1^4(s)&=& 0.
\end{eqnarray*}
\def{\textrm{Im}}{{\textrm{Im}}}
\def{\textrm{Id}}{{\textrm{Id}}}
Similarly, we define the operators $F_i$ to replace the rightmost unmatched closing parenthesis by an opening one. The operators $E_i$ and $F_i$ are ``inverse to each other'' in the sense that for any two strings $u,v\in {\mathcal{S}}$, $E_i(u)=v$ if and only if $F_i(v)=u$.
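For readers who wish to experiment, here is a short Python sketch of $E_i$ and $F_i$ acting on strings; it follows the parenthesis-matching description above (the function names are ad hoc), and running it on the string $s$ from the example reproduces the $E_1$-orbit displayed there.
\begin{verbatim}
def E_op(i, s):
    """E_i: view each i+1 as '(' and each i as ')'; replace the leftmost
    unmatched i+1 by i, or return None (the element 0) if there is none."""
    s = list(s)
    open_pos = []                       # positions of currently unmatched i+1's
    for pos, x in enumerate(s):
        if x == i + 1:
            open_pos.append(pos)
        elif x == i and open_pos:
            open_pos.pop()              # this i closes the nearest open i+1
    if not open_pos:
        return None
    s[open_pos[0]] = i                  # leftmost unmatched i+1 becomes an i
    return tuple(s)

def F_op(i, s):
    """F_i: replace the rightmost unmatched i by i+1, or return None."""
    s = list(s)
    open_pos, unmatched_i = [], []
    for pos, x in enumerate(s):
        if x == i + 1:
            open_pos.append(pos)
        elif x == i:
            if open_pos:
                open_pos.pop()
            else:
                unmatched_i.append(pos)
    if not unmatched_i:
        return None
    s[unmatched_i[-1]] = i + 1          # rightmost unmatched i becomes an i+1
    return tuple(s)

s = (1, 2, 2, 3, 1, 3, 2, 2, 2, 1, 3, 1, 2)
orbit = [s]
while orbit[-1] is not None:
    orbit.append(E_op(1, orbit[-1]))
# orbit[1], orbit[2], orbit[3] equal E_1(s), E_1^2(s), E_1^3(s) as displayed
# above, orbit[4] is None, and F_op(1, orbit[k + 1]) == orbit[k] for k = 0, 1, 2.
\end{verbatim}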
These operators satisfy the crystal axioms and therefore enjoy a number of useful properties, which we summarize in the following Lemma:
\begin{lemma}
\label{lemma:crystal}
\begin{enumerate}
\item Each connected component of the corresponding edge-colored graph has exactly one vertex $v\in {\mathcal{S}}$ such that for every $i\in [m-1]$, $E_i(v)=0$.
\item This component is completely determined (up to an isomorphism of edge-colored graphs) by the weight $w(v)$, which is clearly a weakly decreasing sequence of integers.
\item The sum of $x^{w(u)}$ over all vertices $u$ in this connected component is equal to the Schur polynomial $s_{w(v)}$.
\end{enumerate}
\end{lemma}
Even though all of these properties follow from the fact that $E_i$ and $F_i$ satisfy crystal axioms, we prove them just to make the proof of Corollary \ref{cor:LR} self-contained. Note that a somewhat related proof can be found in \cite{RS1998}.
\begin{proof}
Note that if the words $u,u'\in {\mathcal{S}}$ are Knuth equivalent (see \cite{K1970}), then the words $E_i(u)$ and $E_i(u')$ are Knuth equivalent (or both zero), and likewise the words $F_i(u)$ and $F_i(u')$ are Knuth equivalent (or both zero). Moreover, for each word $u\in{\mathcal{S}}$ there is exactly one word $u'\in{\mathcal{S}}$ that is Knuth equivalent to $u$ and is the reading word of some semi-standard Young tableau $T$. Finally, the operators $E_i$ and $F_i$ applied to the reading word of $T$ produce the reading word of some other tableau of the same shape as $T$.
Now all three properties follow from the fact that any two semi-standard Young tableaux of the same straight shape can be obtained from one another by applying a sequence of operators $E_i$ and $F_i$. To show this, consider a tableau $T_0$ of shape ${\lambda}$ such that for every $j$, all of its entries in the $j$-th row are equal to $j$. Consider an integer $k\geq 1$, and let $T$ be a tableau of shape ${\lambda}$ such that for $j\geq k$, all entries of $T$ in the $j$-th row are equal to $j$ and such that for $j<k$, the entries of $T$ in the $j$-th row are less than or equal to $k$. Then we claim that such $T$ can be obtained from $T_0$ by applying a sequence of operators $F_i$ for different $i$'s. This statement is true for $k=1$ and can be easily proven by induction for all $k\geq 1$.
\end{proof}
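As a tiny illustration of part (3) of Lemma~\ref{lemma:crystal}, one can generate the connected component of a word by repeatedly applying the operators, reusing \texttt{E\_op} and \texttt{F\_op} from the sketch above; for $m=2$ the word $(1,2,1)$ satisfies $E_1(1,2,1)=0$, its component is $\{(1,2,1),\,(2,2,1)\}$, and the corresponding monomials sum to $x_1^2x_2+x_1x_2^2=s_{(2,1)}(x_1,x_2)$.
\begin{verbatim}
def component(word, m):
    """All words reachable from `word` via E_i and F_i, 1 <= i <= m-1
    (uses E_op and F_op from the previous sketch)."""
    seen, frontier = {word}, [word]
    while frontier:
        w = frontier.pop()
        for i in range(1, m):
            for op in (E_op, F_op):
                nxt = op(i, w)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# component((1, 2, 1), 2) == {(1, 2, 1), (2, 2, 1)}: the weights (2,1) and
# (1,2) give x1^2*x2 + x1*x2^2 = s_{(2,1)}(x1, x2).
\end{verbatim}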
\subsection{The crystal operators for reverse plane partitions}
\subsubsection{The descent-resolution algorithm}
We describe the descent-resolution algorithm for reverse plane partitions from \cite{GGL2015}, where it was used to describe the analogue of the Bender-Knuth involution for reverse plane partitions. Let ${\l/\m}$ be a skew shape, and fix $i\in[m-1]$. For a tableau $T'$ of shape ${\l/\m}$ such that the entries of $T'$ are equal to either $i$ or $i+1$ and weakly increase in columns but not necessarily in rows, we say that a column of $T'$ is \textit{$i$-pure} if it contains an $i$ but does not contain an $i+1$. Similarly, we call a column \textit{$i+1$-pure} if it contains an $i+1$ but does not contain an $i$. If a column contains both $i$ and $i+1$, then we call this column \textit{mixed}.
\begin{definition}[see \cite{GGL2015}]
A tableau $T'$ is a \textit{benign tableau} if the entries of $T'$ weakly increase in columns and for every two mixed columns $A$ and $B$ ($A$ is to the left of $B$), the lowest $i$ in $A$ is not higher than the lowest $i$ in $B$. In other words, the vertical coordinates of the borders between $i$'s and $i+1$'s in mixed columns weakly increase from left to right (see Figure \ref{fig:benign}).
\end{definition}
\newcommand{\multicolumn{1}{|c|}{}}{\multicolumn{1}{|c|}{}}
\newcommand{\mm}[1]{\multicolumn{1}{|c|}{#1}}
\begin{figure}[h]
\centering
\begin{tabular}{||ccc||ccc||ccc||}\hline
& & & & & & & & \\
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \mm{1} & \multicolumn{1}{|c|}{} & \mm{1}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \mm{2} & \multicolumn{1}{|c|}{} \\
\cline{3-3} \mm{2} & \multicolumn{1}{|c|}{} & \mm{2}\\
\cline{2-3} \multicolumn{1}{|c|}{} & & \\
\cline{1-1}\\
\end{tabular}& &
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} & \mm{1}\\
\cline{3-3} \mm{2} & \mm{1} & \multicolumn{1}{|c|}{} \\
\cline{2-2} \multicolumn{1}{|c|}{} & \mm{2} & \mm{2}\\
\cline{2-3} \multicolumn{1}{|c|}{} & & \\
\cline{1-1}\\
\end{tabular}& &
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \multicolumn{1}{|c|}{} & \mm{1} & \multicolumn{1}{|c|}{} \\
\cline{2-2} \mm{1} & \multicolumn{1}{|c|}{} & \mm{2}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \mm{2} & \multicolumn{1}{|c|}{} \\
\cline{2-3} \mm{2} & & \\
\cline{1-1} \\
\end{tabular}&\\
& (a) & & & (b) & & & (c) & \\\hline
\end{tabular}
\caption{\label{fig:benign} The table (a) is not benign, (b) is benign but is not a reverse plane partition, (c) is a reverse plane partition.}
\end{figure}
The descent-resolution algorithm takes a benign tableau $T'$ and converts it into a reverse plane partition of the same shape and weight.
A benign tableau $T'$ may easily fail to be a reverse plane partition. More specifically, it may contain an $i+1$ with an $i$ directly to the right of it -- we call such a situation a \textit{descent}. Let $A$ be the column containing the $i+1$ and $A+1$ be the column containing the $i$. Then there are three possible types of descents, depending on the types of the columns $A$ and $A+1$ (the abbreviations below refer to the case $i=1$):
\begin{enumerate}
\item[(2M)] $A$ is $i+1$-pure and $A+1$ is mixed
\item[(M1)] $A$ is mixed and $A+1$ is $i$-pure
\item[(21)] $A$ is $i+1$-pure and $A+1$ is $i$-pure
\end{enumerate}
There is a fourth type of descent, in which both columns are mixed, but the benign tableau property implies that such descents are impossible. For each of the three types above, \cite{GGL2015} provides a \textit{descent-resolution step}, which changes only the entries of $A$ and $A+1$ and resolves the descent.
For descents of the first two types, the descent-resolution step switches the roles of the columns but preserves the vertical coordinate of the lowest $i$ in the mixed column; this determines the operation uniquely. For a descent of the third type, it simply replaces all $i$'s by $i+1$'s and vice versa in both columns. It is clear that the resulting tableau will also be a benign tableau. The descent-resolution steps for $i=1$ are visualized in Figure \ref{fig:reduction}.
\def{\mathbf{1}}{{\mathbf{1}}}
\def{\mathbf{2}}{{\mathbf{2}}}
\begin{figure}[h]
\centering
\begin{tabular}{||ccc||ccc||ccc||}\hline
& & & & & & & & \\
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{1}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{2}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}\\
& (M1) & & & (2M) & & & (21) & \\\hline
\end{tabular}
\caption{\label{fig:reduction} The descent-resolution steps (taken from \cite{GGL2015}).}
\end{figure}
The descent-resolution algorithm performs these descent-resolution steps until there are no descents left, which means that we get a reverse plane partition. This algorithm terminates, because $i+1$-pure columns always move to the right while $i$-pure columns always move to the left. Also, it is shown in \cite{GGL2015} that the result of the algorithm does not depend on the order in which the descents are resolved.
\subsubsection{The definition of $e_i$'s and $f_i$'s}
Let ${\l/\m}$ be a skew shape, and fix $i\in[m-1]$. For a reverse plane partition $T$ of shape ${\l/\m}$ with entries in $[m]$, define $e_i(T)$ as follows. First, consider only the subtableau of $T$ that consists of entries equal to either $i$ or $i+1$. Then, label each $i$-pure column by a closing parenthesis and each $i+1$-pure column by an opening parenthesis (and ignore the mixed columns).
Choose the ($i+1$-pure) column $A$ that corresponds to the leftmost unmatched opening parenthesis (if all opening parentheses are matched, set $e_i(T):=0$). Replace all the $i+1$'s in $A$ by $i$'s, and then apply the descent-resolution algorithm to the resulting benign tableau.
Similarly, $f_i$ chooses the ($i$-pure) column that corresponds to the rightmost unmatched closing parenthesis, replaces all the $i$'s in it by $i+1$'s, and then applies the descent-resolution algorithm.
We discuss the properties of the $e_i$'s and $f_i$'s and their connection to the reading word defined above in the next section.
\section{Properties of the reading words of reverse plane partitions}
Recall that the reading word $r(T)$ of a reverse plane partition $T$ of shape ${\l/\m}$ is the usual left-to-right bottom-to-top Young tableau reading word that ignores each entry equal to the entry directly below it. An example is shown in Figure \ref{fig:rw}.
\begin{figure}[h]
$$\young(::12,:114,1114,1334,235:,245,34)\to \young(::\ 2,:\ \ \ ,\ 11\ ,1\ 34,\ 3\ :,2\ 5,34)\to 34253134112$$
\caption{\label{fig:rw} The reading word of a skew-shaped reverse plane partition.}
\end{figure}
We assume that the coordinates of the boxes are given in matrix notation. For a reverse plane partition $T$, define its \textit{height vector} $h(T)$ to be the sequence of vertical coordinates of the entries of $T$ that contribute to $r(T)$, arranged in the same order as they appear in the reading word. For example, for $T$ as in Figure \ref{fig:rw} we put $h(T):=(7,7,6,6,5,4,4,4,3,3,1)$. It is always a weakly decreasing sequence of positive integers. Similarly, we define the height vector of a benign tableau. Note that each descent-resolution step preserves the height vector, and, therefore, so do the operators $e_i$ and $f_i$.
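For concreteness, the reading word and the height vector are easy to compute when a tableau is stored as a dictionary mapping boxes $(\text{row},\text{column})$ (matrix coordinates) to entries; the following Python sketch (an illustration with ad hoc names) recovers the word $34253134112$ and the height vector $(7,7,6,6,5,4,4,4,3,3,1)$ of the tableau in Figure~\ref{fig:rw}.
\begin{verbatim}
def reading_word_and_heights(T):
    """r(T) and h(T): drop every entry equal to the entry directly below it,
    then read the rest left-to-right, bottom-to-top."""
    kept = [(row, col, v) for (row, col), v in T.items()
            if T.get((row + 1, col)) != v]
    kept.sort(key=lambda box: (-box[0], box[1]))
    return [v for _, _, v in kept], [row for row, _, _ in kept]

rows = ["..12", ".114", "1114", "1334", "235", "245", "34"]   # '.' = boxes of mu
T = {(i + 1, j + 1): int(ch)
     for i, row in enumerate(rows) for j, ch in enumerate(row) if ch != "."}
word, heights = reading_word_and_heights(T)
assert word    == [3, 4, 2, 5, 3, 1, 3, 4, 1, 1, 2]
assert heights == [7, 7, 6, 6, 5, 4, 4, 4, 3, 3, 1]
\end{verbatim}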
\begin{lemma}
\label{lemma:injective}
Fix a skew shape ${\l/\m}$ and a sequence $h$ of positive integers. Then for each reading word $r$ there is at most one reverse plane partition $T$ of shape ${\l/\m}$ with $r(T)=r$ and $h(T)=h$.
\end{lemma}
\begin{proof}
Suppose that there exists a reverse plane partition $T$ of shape ${\l/\m}$ with $r(T)=r$ and $h(T)=h$. Then $T$ can be uniquely reconstructed from $r$ and $h$ by filling the boxes of ${\l/\m}$ in the reading order:
\begin{enumerate}
\item Set $j=1$;
\item Let $B$ be the first (in the reading order) box of ${\l/\m}$ that is not yet filled with a number. Let $a$ be the value in the box directly below it, and let $c$ be the value in the box directly to the left of it (if there is no such box, we put $a:=+\infty$ or $c:=0$, respectively);
\item If the height of $B$ is not equal to $h_j$, then set the entry in the box $B$ equal to $a$ and proceed to the next box (go to step 2);
\item If the number $r_j$ does not satisfy $c\leq r_j<a$, then, again, set the entry in the box $B$ equal to $a$ and proceed to the next box;
\item Otherwise, we set the entry in the box $B$ equal to $r_j$, increase $j$ by $1$ and proceed to the next box.
\end{enumerate}
Note that if $r$ and $h$ are the reading word and the height vector of some reverse plane partition, then the entries of $h$ weakly decrease, and the entries of $r$ that have the same height weakly increase. We prove by induction that the first $k$ entries of $T$ (in the reading order) are the same as the first $k$ entries of the reverse plane partition that the algorithm produces. For $k=0$ this is true. Now, we want to put $r_j$ somewhere into the row $h_j$ so that the entry below it is strictly bigger than $r_j$ and so that the entries in the row weakly increase. Thus if $r_j$ cannot be put into the current box (because either $r_j\geq a$ or $r_j<c$), then this box should be ignored by the reading word, so its value should be the same as the value directly below it. If $c\leq r_j<a$, then $r_j$ has to be put into the current box, because if we put $r_j$ somewhere to the right, then we have to fill this box with the value directly below it (with $a$), which is strictly bigger than $r_j$, so the entries in the row will not be weakly increasing.
\end{proof}
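The filling procedure in this proof is mechanical, and it is easy to run it in code; the following sketch (using the same dictionary representation as above, with ad hoc names) rebuilds the tableau of Figure~\ref{fig:rw} from its reading word and height vector.
\begin{verbatim}
import math

def reconstruct(shape, r, h):
    """Rebuild the unique reverse plane partition with box set `shape`,
    reading word r and height vector h, following the proof above."""
    T, j = {}, 0
    for (row, col) in sorted(shape, key=lambda b: (-b[0], b[1])):  # reading order
        a    = T.get((row + 1, col), math.inf)   # entry directly below, or +infinity
        left = T.get((row, col - 1), 0)          # entry directly to the left, or 0
        if j < len(r) and row == h[j] and left <= r[j] < a:
            T[(row, col)] = r[j]
            j += 1
        else:
            T[(row, col)] = a                    # box ignored by the reading word
    return T

rows = ["..12", ".114", "1114", "1334", "235", "245", "34"]   # Figure fig:rw
original = {(i + 1, j + 1): int(ch)
            for i, row in enumerate(rows) for j, ch in enumerate(row) if ch != "."}
r = [3, 4, 2, 5, 3, 1, 3, 4, 1, 1, 2]
h = [7, 7, 6, 6, 5, 4, 4, 4, 3, 3, 1]
assert reconstruct(set(original), r, h) == original
\end{verbatim}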
Recall that the operators $F_i$ are defined on the set ${\mathcal{S}}=[m]^n$ of all strings of length $n$, and replace the rightmost unmatched closing parenthesis (corresponding to an entry equal to $i$) by an opening parenthesis (by an $i+1$). Meanwhile, the operators $f_i$ act on ${\mathcal{R}}({\l/\m})$, which is the set of all reverse plane partitions of shape ${\l/\m}$ with entries less than or equal to $m$. It turns out that these two actions commute with the operation of taking the reading word:
\begin{lemma}
\label{lemma:intertw}
Let $T$ be a reverse plane partition. Then
$$F_i(r(T))=r(f_i(T)).$$
In particular, if $f_i(T)$ is zero then $F_i(r(T))$ is zero and the converse is also true.
\end{lemma}
Because $e_i$ and $f_i$ are ``inverse to each other'' (in the same sense as above), and the same is true for $E_i$ and $F_i$, we get
\begin{corollary}
Let $T$ be a reverse plane partition. Then
$$E_i(r(T))=r(e_i(T)).$$
\end{corollary}
\begin{proof}[{Proof of Lemma \ref{lemma:intertw}}]
The operator $f_i$ labels $i$-pure columns by closing parentheses and $i+1$-pure columns by opening parentheses. Then it finds the rightmost unmatched closing parenthesis and replaces the corresponding $i$-pure column by an $i+1$-pure column. After that we get a benign tableau $T'$, and then we apply the descent-resolution algorithm to $T'$ which produces a reverse plane partition $T''=:f_i(T)$. Our proof consists of two parts:
\begin{enumerate}
\item $r(T')=r(T'')$;
\item $F_i(r(T))=r(T')$.
\end{enumerate}
\begin{remark}
Note that both of these parts are false for $e_i$ and $E_i$. To make them true, one needs to introduce the reading word that ignores each entry equal to the entry directly \textit{above} it, rather than directly below it.
\end{remark}
We start with the first part. Note that even though $T'$ and $T''$ differ by a sequence of descent-resolution steps, it is not true in general that the descent-resolution steps preserve the reading word. Fortunately, as we will see below, all the descents that appear are of type (2M), and the corresponding descent-resolution step (see Figure \ref{fig:reduction1}) clearly does not change the reading word.
\begin{figure}[h]
\centering
\begin{tabular}{||ccc||}\hline
 & & \\
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{2}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}\\ & & \\\hline
\end{tabular}
\caption{\label{fig:reduction1} The descent-resolution step of type (2M).}
\end{figure}
The reason we only need this descent-resolution step is the definition of $f_i$. Namely, $f_i$ changes only one $i$-pure column $A$ into an $i+1$-pure column, and this column is required to be labeled by the rightmost unmatched closing parenthesis. Let $B$ be the leftmost $i+1$-pure column to the right of $A$, and let $C$ be the rightmost $i$-pure column to the left of $A$. If there were an $i$-pure column between $A$ and $B$, then it would also be unmatched, so $A$ would not be labeled by the rightmost unmatched closing parenthesis. Also, if there were an $i+1$-pure column $D$ between $C$ and $A$, then it would have to be matched to some $i$-pure column between $D$ and $A$, so $C$ would not be the rightmost $i$-pure column to the left of $A$. All in all, we see that all the columns between $C$ and $A$ and between $A$ and $B$ are mixed. If either $C$ or $B$ is undefined, then all the columns to the left (resp., to the right) of $A$ are mixed.
Now it is clear why only descents of type (2M) appear while the descent-resolution steps are performed. The column $A$ becomes $i+1$-pure, so the only possible descent can occur between $A$ and $A+1$, and as we resolve it, the $i+1$-pure column moves to the right. Because it is surrounded by mixed columns, the only descents that appear are between this $i+1$-pure column and the mixed column to the right of it. If this $i+1$-pure column reaches position $B-1$, then there are no descents left, because $B$ is also $i+1$-pure. This finishes the proof of the first part.
The second part asks for a certain correspondence between two different matchings. The first one appears when we label $i$-pure columns by closing parentheses, $i+1$-pure columns by opening parentheses, and then say that two pure columns match each other if their labels (two parentheses) match each other in the parenthesis sequence. In this situation we say that these two columns \textit{match in the reverse plane partition}. The second matching appears when we label the entries of the reading word by parentheses and say that two entries of the reading word match each other if their labels match each other. In this situation we say that these two entries \textit{match in the reading word}.
The second part of the Lemma states that an $i$-pure column is labeled by the rightmost unmatched closing parenthesis in the reverse plane partition
if and only if the corresponding entry in the reading word is also labeled by the rightmost unmatched closing parenthesis in the reading word. Here we can restrict our attention to reverse plane partitions that are filled only with $i$'s and $i+1$'s. For a column $A$, let $j(A)$ be the position of the corresponding entry of the reading word if $A$ is either $i$- or $i+1$-pure. If $A$ is mixed, then set $j^-(A)$ to be the position of the entry of the reading word corresponding to $i$ and set $j^+(A)$ to be the position of the entry of the reading word corresponding to $i+1$.
We need to check three implications:
\begin{enumerate}
\item If a column $A$ is $i$-pure and unmatched in the reverse plane partition, then the entry $j(A)$ is unmatched in the reading word.
\item If a column $A$ is mixed, then the entry $j^-(A)$ is matched to something (not necessarily to $j^+(A)$) in the reading word.
\item If a column $A$ is $i$-pure and matched to some $i+1$-pure column $B$ in the reverse plane partition, then the entry $j(A)$ is also matched to something (not necessarily to $j(B)$) in the reading word.
\end{enumerate}
It is clear that these three properties together imply that the $i$-pure columns unmatched in the reverse plane partition correspond exactly to the unmatched $i$'s in the reading word. And because the reading word preserves the order of pure columns, the second part of the lemma reduces to proving these three implications.
Note that if a column $A$ is $i$-pure, then for every other column $B$ that is to the right (resp., left) of $A$, the entry $j(B)$ or $j^-(B)$ or $j^+(B)$ if defined is also to the right (resp. left) of $j(A)$. Another simple useful observation is that if we have any injective map that attaches to each $i+1$ in the reading word an $i$ to the right of it, then all the $i+1$'s in this reading word are matched. Now we are ready to check the implications (1)-(3).
(1) If a column $A$ is $i$-pure and unmatched, then we can just throw everything to the right of $A$ and $j(A)$ out. Now, every $i+1$-pure column to the left of $A$ is matched to something in the reverse plane partition, so for every $i+1$ to the left of $j(A)$ in the reading word we have an $i$ that is between it and $j(A)$, and for different $i+1$'s these $i$'s are also different. Therefore every $i+1$ to the left of $j(A)$ is matched in the reading word as well, so $j(A)$ is unmatched in the reading word.
(2) Suppose $A$ is mixed. If we throw out all the columns that are to the right of $A$, then several $i+1$'s between $j^+(A)$ and $j^-(A)$ will be thrown out of the reading word, but all the $i$'s to the left of $j^-(A)$ will remain untouched. Let $B$ be the rightmost $i$-pure column to the left of $A$. Now we throw out all the columns to the left of $B$ and also $B$ itself, which corresponds to erasing the part of the reading word from the beginning to $j(B)$ (if there is no such $B$ then we do not throw anything out of the reading word). Now we have a reverse plane partition that contains no $i$-pure columns, so by the counting argument $j^-(A)$ is matched in the reading word. But then it was also matched in the original reading word.
(3) Suppose $A$ is $i$-pure and is matched in the reverse plane partition to some $i+1$-pure column $B$ to the left of $A$. Let $C$ be the rightmost $i$-pure column to the left of $B$. We throw out everything that is to the right of $A$ or to the left of $C$, which corresponds to keeping all the entries of the reading word between $j(C)$ and $j(A)$. We also remove $C$ and $j(C)$. All the $i$-pure columns between $A$ and $B$ are matched in the reverse plane partition to some $i+1$-pure columns between $A$ and $B$, and there are no $i$-pure columns between $B$ and $C$, so the number of $i+1$'s between $j(C)$ and $j(A)$ is strictly bigger than the number of $i$'s between $j(C)$ and $j(A)$, so $j(A)$ has to be matched to something in the reading word. This finishes the proof of the third implication, and with it the proof of the second (and last) part of the Lemma.
\end{proof}
Let ${\l/\m}$ be a skew shape, and let $h$ be a sequence of positive integers. Lemmas \ref{lemma:injective} and \ref{lemma:intertw} give a vertex-injective map from the graph of all reverse plane partitions $T$ of shape ${\l/\m}$ with $h(T)=h$ to the graph ${\mathcal{S}}$ of all strings of the same length as $h$, and this map takes the operators $e_i$ and $f_i$ to $E_i$ and $F_i$. Therefore each connected component of the graph of all reverse plane partitions is isomorphic to the corresponding connected component of the graph ${\mathcal{S}}$. Now the proof of Theorem \ref{thm:crystal} follows from the observations about crystal graphs made in Subsection \ref{subsubsection:crystal}, in particular, the proof of Corollary \ref{cor:LR} follows from Lemma \ref{lemma:crystal}. \qed
\section*{Acknowledgments}
I am grateful to Prof. Alex Postnikov and to Darij Grinberg for their valuable remarks.
|
\section{Introduction}
\label{sec:intro}
A major open question in the study of exoplanets is the origin of their apparent
obliquity properties---the distribution of the angle $\lambda$ between the
stellar spin and the planet's orbital angular momentum vectors as projected on
the sky (see, e.g., the review by \citealt{WinnFabrycky15}).
Measurements of the Rossiter--McLaughlin effect in hot Jupiters (HJs, defined
here as planets with masses $M_\mathrm{p}\gtrsim0.3\,M_\mathrm{J}$ that have
orbital periods $P_\mathrm{orb} \lesssim 10\,$days) have indicated that
$\lambda$ spans the entire range from~$0^\circ$ to~$180^\circ$, in stark
contrast with the situation in the solar system (where the angle between the
planets' total angular momentum vector and that of the Sun is only
$\sim$$6^\circ$).
In addition, there is a marked difference in the distribution of $\lambda$
between G~stars, where $\sim$$1/2$ of systems are well aligned
($\lambda < 20^\circ$) and the rest are spread out roughly uniformly over the
remainder of the $\lambda$ range, and F~stars of effective temperature
$T_\mathrm{eff} \gtrsim 6250\,$K, which exhibit only a weak excess of
well-aligned systems. There is, however, also evidence for a dependence of the
obliquity distribution on the properties of the planets and not just on those of
the host star; in particular, only planets with $M_\mathrm{p} < 3\,M_\mathrm{J}$
have apparent retrograde orbits ($\lambda > 90^\circ$).
Various explanations have been proposed to account for the broad range of
observed obliquities, but the inferred dependences on $T_\mathrm{eff}$ and
$M_\mathrm{p}$ provide strong constraints on a viable model. In one scenario
\cite[][]{Winn+10, Albrecht+12}, HJs arrive in the vicinity of the host star on
a misaligned orbit and subsequently act to realign the host through a tidal
interaction, which is more effective in cool stars than in hot ones.
In this picture, HJs form at large radii and either migrate inward through their
natal disk while maintaining nearly circular orbits or are placed on a
high-eccentricity orbit after the gaseous disk dissipates---which enables them
to approach the center and become tidally trapped by the star (with their orbits
getting circularized by tidal friction; e.g., \citealt{FordRasio06}).\footnote{
The possibility of HJs forming at their observed locations has also been
considered in the literature \citep[e.g.,][]{Boley+16,Batygin+16}, but the
likelihood of this scenario is still being debated.}
The processes that initiate high-eccentricity migration (HEM), which can be
either planet--planet scattering \citep[e.g.,][]{Chatterjee+08, JuricTremaine08,
BeaugeNesvorny12} or secular interactions that involve a stellar binary
companion or one or more planetary companions (such as Kozai-Lidov oscillations
--- e.g., \citealt{WuMurray03, FabryckyTremaine07, Naoz+11, Petrovich15b}---and
secular chaos---e.g., \citealt{WuLithwick11, LithwickWu14, Petrovich15a,
Hamers+17}), all give rise to HJs with a distribution of misaligned orbits.
In the case of classical disk migration, the observed obliquities can be
attributed to a primordial misalignment of the natal disk that occurred during
its initial assembly from a turbulent interstellar gas \citep[e.g.,][]{Bate+10,
Fielding+15} or as a result of magnetic and/or gravitational torques induced,
respectively, by a tilted stellar dipolar field and a misaligned companion
\citep[e.g.,][]{Lai+11, Batygin12, BatyginAdams13, Lai14, SpaldingBatygin14}.
The tidal realignment hypothesis that underlies the above modeling framework
was challenged by the results of \citet{Mazeh+15}, who examined the rotational
photometric modulations of a large number of {\it Kepler}\/ sources.
Their analysis indicated that the common occurrence of aligned systems around
cool stars characterizes the general population of planets and not just HJs,
and, moreover, that this property extends to orbital periods as long as
$\sim$$50\,$days, about an order of magnitude larger than the maximum value of
$P_\mathrm{orb}$ for which tidal interaction with the star remains important.
To reconcile this finding with the above scenario, \citet{MatsakosKonigl15}
appealed to the results of planet formation and evolution models, which predict
that giant planets form efficiently in protoplanetary disks and that most of
them migrate rapidly to the disk's inner edge, where, if the arriving planet's
mass is not too high ($\lesssim 1\,M_\mathrm{J}$), it could remain stranded near
that radius for up to $\sim$$1\,$Gyr---until it gets tidally ingested by the
host star.
They proposed that the ingestion of a stranded HJ (SHJ)---which is accompanied
by the transfer of its orbital angular momentum to the star---is the dominant
spin-realignment mechanism.
In this picture, the dichotomy in the obliquity properties between cool and hot
stars is a direct consequence of the higher efficiency of magnetic braking and
lower moment of inertia of the former in comparison with the latter.
By applying a simple dynamical model to the observed HJ distributions in~G and
F~stars, \citet{MatsakosKonigl15} inferred that $\sim$50\% of planetary systems
harbor an SHJ with a typical mass of $\sim$$0.6\,M_\mathrm{J}$.
In this picture, the obliquity properties of currently observed HJs---and the
fact that they are consistent with those of lower-mass and more distant
planets---are most naturally explained if most of the planets in a given
system---including any SHJ that may have been present---are formed in, and
migrate along the plane of, a primordially misaligned disk.\footnote{
This explanation does not necessarily imply that all planets that reached the
vicinity of the host star must have moved in by classical migration, although
SHJs evidently arrived in this way.
In fact, \citet{MatsakosKonigl16} inferred that most of the planets that
delineate the boundary of the so-called sub-Jovian desert in the
orbital-period--planet-mass plane got in by a secular HEM process (one that,
however, did not give rise to high orbital inclinations relative to the natal
disk plane).}
This interpretation is compatible with the properties of systems like Kepler-56,
in which two close-in planets have $\lambda \approx 45^\circ$ and yet are nearly
coplanar \citep{Huber+13}, and 55~Cnc, a coplanar five-planet system with
$\lambda \approx 72^\circ$ \citep[e.g.,][]{Kaib+11, BourrierHebrard14}.\footnote{
The two-planet system KOI-89 \citep{Ahlers+15} may be yet another example.}
It is also consistent with the apparent lack of a correlation between the
obliquity properties of observed HJs and the presence of a massive companion
\citep[e.g.,][]{Knutson+14, Ngo+15, Piskorz+15}.
In this paper we explore a variant of the primordial disk misalignment model
first proposed by \citet{Batygin12}, in which, instead of the tilting of the
entire disk by a distant ($\sim$500\,au) stellar companion on an inclined orbit,
we consider the gravitational torque exerted by a much closer ($\sim$5\,au)
\emph{planetary} companion on such an orbit, which acts to misalign \emph{only
the inner region} of the protoplanetary disk.
This model is motivated by the inferences from radial velocity surveys and
adaptive-optics imaging data (\citealt{Bryan+16}; see also \citealt{Knutson+14})
that $\sim$70\% of planetary systems harboring a transiting HJ have a companion
with mass in the range 1--13\,$M_\mathrm{J}$ and semimajor axis in the range
$1$--$20$\,au, and that $\sim$50\% of systems harboring one or two planets
detected by the radial velocity method have a companion with mass in the range
$1$--$20\,M_\mathrm{J}$ and semimajor axis in the range $5$--$20$\,au.
Further motivation is provided by the work of \citet{LiWinn16}, who re-examined
the photometric data analyzed by \citet{Mazeh+15} and found indications that the
good-alignment property of planets around cool stars does not hold for large
orbital periods, with the obliquities of planets with
$P_\mathrm{orb} \gtrsim 10^2\,$days appearing to tend toward a random
distribution.
One possible origin for a giant planet on an inclined orbit with a semimajor
axis $a$ of a few au is planet--planet scattering in the natal disk.
Current theories suggest that giant planets may form in tightly packed
configurations that can become dynamically unstable and undergo orbit crossing
(see, e.g., \citealt{Davies+14} for a review).
The instabilities start to develop before the gaseous disk component dissipates
\citep[e.g.,][]{Matsumura+10, Marzari+10}, and it has been argued
\citep{Chatterjee+08} that the planet--planet scattering process may, in fact,
peak before the disk is fully depleted of gas (see also \citealt{Lega+13}).
A close encounter between two giant planets is likely to result in a collision
if the ratio $(M_\mathrm{p}/M_*)(a/R_\mathrm{p})$ (the Safronov number) is $< 1$
(where $M_*$ is the stellar mass and $R_\mathrm{p}$ is the planet's radius), and
in a scattering if this ratio is $> 1$ \citep[e.g.,][]{FordRasio08}.
The scattering efficiency is thus maximized when a giant planet on a
comparatively wide orbit is involved \citep[cf.][]{Petrovich+14}.
High inclinations might also be induced by resonant excitation in giant planets
that become trapped in a mean-motion resonance through classical (Type II) disk
migration \citep{ThommesLissauer03, LibertTsiganis09}, and this process could,
moreover, provide an alternative pathway to planet--planet scattering
\citep{LibertTsiganis11}.
In these scenarios, the other giant planets that were originally present in the
disk can be assumed to have either been ejected from the system in the course of
their interaction with the remaining misaligned planet or else reached the star
at some later time through disk migration.
As we show in this paper, a planet on an inclined orbit can have a significant
effect on the orientation of the disk region interior to its orbital radius when
the mass of that region decreases to the point where the inner disk's angular
momentum becomes comparable to that of the planet.
For typical mass depletion rates in protoplanetary disks \citep[e.g.,][]
{BatyginAdams13}, this can be expected to happen when the system's age is
$\sim$$10^6$--$10^7\,$yr, which is comparable to the estimated formation time of
Jupiter-mass planets at $\gtrsim 5\,$au.
In the proposed scenario, a planet of mass $M_\mathrm{p} \gtrsim M_\mathrm{J}$
is placed on a high-inclination orbit at a time $t_0 \gtrsim 1\,$Myr that, on
the one hand, is late enough for the disk mass interior to the planet's location
to have decreased to a comparable value, but that, on the other hand, is early
enough for the inner disk to retain sufficient mass after becoming misaligned to
enforce the orbital misalignment of existing planets and/or form new planets in
its reoriented orbital plane (including any Jupiter-mass planets destined to
become an HJ or an SHJ).
The dynamical model adopted in this paper is informed by the
smooth-particle-hydrodynamics simulations carried out by
\citet{Xiang-GruessPapaloizou13}.
They considered the interaction between a massive ($1$--$6\,M_\mathrm{J}$)
planet that is placed on an inclined, circular orbit of radius $5$\,au and a
low-mass ($0.01\,M_*$) protoplanetary disk that extends to $25$\,au.
A key finding of these simulations was that the disk develops a warped
structure, with the regions interior and exterior to the planet's radial
location behaving as separate, rigid disks with distinct inclinations; in
particular, the inner disk was found to exhibit substantial misalignment with
respect to its initial direction when the planet's mass was large enough and its
initial inclination was intermediate between the limits of $0^\circ$ and
$90^\circ$ at which no torque is exerted on the disk.
Motivated by these results, we construct an analytic model for the gravitational
interaction between the planet and the two separate parts of the disk.
The general effect of an interaction of this type between a planet on an
inclined orbit and a rigid disk is to induce a precession of the planet's orbit
about the total angular momentum vector.
In contrast with \citet{Xiang-GruessPapaloizou13}, whose simulations only
extended over a fraction of a precession period, we consider the long-term
evolution of such systems.
In particular, we use our analytic model to study how the ongoing depletion of
the disk's mass affects the orbital orientations of the planet and of the disk's
two parts.
We describe the model in Section~\ref{sec:model} and present our calculations in
Section~\ref{sec:results}.
We discuss the implications of these results to planet obliquity measurements
and to the alignment properties of debris disks in Section~\ref{sec:discussion},
and summarize in Section~\ref{sec:conclusion}.
\section{Modeling approach}
\label{sec:model}
\subsection{Assumptions}
\label{subsec:assumptions}
\begin{figure}
\includegraphics[width=\columnwidth]{initial_fig1.eps}
\caption{
Schematic representation (not to scale) of the initial configuration of our
model.
See text for details.
\label{fig:initial}}
\end{figure}
The initial configuration that we adopt is sketched in Figure~\ref{fig:initial}.
We consider a young star (subscript s) that is surrounded by a Keplerian
accretion disk, and a Jupiter-mass planet (subscript p) on a circular orbit.
The disk consists of two parts: an inner disk (subscript d) that extends between
an inner radius $r_\mathrm{d,in}$ and an outer radius $r_\mathrm{d,out}$, and an
outer disk (subscript h) that extends between $r_\mathrm{h,in}$ and
$r_\mathrm{h,out}$; they are separated by a narrow gap that is centered on the
planet's orbital radius $a$.
The two parts of the disk are initially coplanar, with their normals aligned
with the stellar angular momentum vector $\boldsymbol{S}$, whereas the planet's
orbital angular momentum vector $\boldsymbol{P}$ is initially inclined at an
angle $\psi_\mathrm{p0}$ with respect to $\boldsymbol{S}$ (where the subscript
$0$ denotes the time $t = t_0$ at which the planet is placed on the inclined
orbit).
We assume that, during the subsequent evolution, each part of the disk maintains
a flat geometry and precesses as a rigid body.
The rigidity approximation is commonly adopted in this context and is attributed
to efficient communication across the disk through the propagation of bending
waves or the action of a viscous stress (e.g., \citealt{Larwood+96}; see also
\citealt{Lai14} and references therein).\footnote{
One should, however, bear in mind that real accretion disks are inherently fluid
in nature and therefore cannot strictly obey the rigid-body approximation; see,
e.g., \citet{Rawiraswattana+16}.}
Based on the simulation results presented in \citet{Xiang-GruessPapaloizou13},
we conjecture that this communication is severed at the location of the planet.
This outcome is evidently the result of the planet's opening up a gap in the
disk, although it appears that the gap need not be fully evacuated for this
process to be effective.
In fact, the most strongly warped simulated disk configurations correspond to
comparatively high initial inclination angles, for which the planet spends a
relatively small fraction of the orbital time inside the disk, resulting in gaps
that are less deep and wide than in the fully embedded case.
Our calculations indicate that, during the disk's subsequent evolution, its
inner and outer parts may actually detach as a result of the precessional
oscillation of the inner disk.
This oscillation is particularly strong in the case of highly mass-depleted
disks on which we focus attention in this paper: in the example shown in
Figure~\ref{fig:all-m} below, the initial amplitude of this oscillation is
$\sim$$40^\circ$.
The planet's orbital inclination is subject to damping by dynamical friction
\citep{Xiang-GruessPapaloizou13}, although the damping rate is likely low for
the high values of $\psi_\mathrm{p0}$ that are of particular interest to us
\citep{Bitsch+13}.
Furthermore, in cases where the precessional oscillation of the inner disk
causes the disk to split at the orbital radius of the planet, one can plausibly
expect the local gas density to become too low for dynamical friction to
continue to play a significant role on timescales longer than the initial
oscillation period ($\sim$$10^4$\,yr for the example shown in
Figure~\ref{fig:all-m}).
In light of these considerations, and in the interest of simplicity, we do not
include the effects of dynamical friction in any of our presented models.
As a further simplification, we assume that the planet's orbit remains circular.
The initial orbital eccentricity of a planet ejected from the disk by either of the
two mechanisms mentioned in Section~\ref{sec:intro} may well be
nonnegligible.
However, the simulations performed by \citet{Bitsch+13} indicate that the
dynamical friction process damps eccentricities much faster than inclinations,
so that the orbit can potentially be circularized on a timescale that is shorter
than the precession time (i.e., before the two parts of the disk can become
fully separated).
On the other hand, even if the initial eccentricity is zero, it may be pumped up
by the planet's gravitational interaction with the outer disk if
$\psi_\mathrm{p0}$ is high enough ($\gtrsim 20^\circ$;
\citealt{Teyssandier+13}).
This is essentially the Kozai-Lidov effect, wherein the eccentricity undergoes
periodic oscillations in antiphase with the orbital inclination
\citep{TerquemAjmia10}.
These oscillations were noticed in the numerical simulations of
\citet{Xiang-GruessPapaloizou13} and \citet{Bitsch+13}.
Their period can be approximated by $\tau_\mathrm{KL} \sim (r_\mathrm{h,out}/
r_\mathrm{h,in})^2 (2\pi/|\Omega_\mathrm{ph}|)$ \citep{TerquemAjmia10}, where we
used the expression for the precession frequency $\Omega_\mathrm{ph}$
(Equation~(\ref{eq:omega_ph})) that corresponds to the torque exerted by the
outer disk on the misaligned planet.
For the parameters of the representative mass-depleted disk model shown in
Figure~\ref{fig:all-m}, $\tau_\mathrm{KL} \sim 10^6$\,yr.
This time is longer by a factor of $\sim$$10^2$ than the initial precession
period of the inner disk in this example, implying that the Kozai-Lidov process
will have little effect on the high-amplitude oscillations of $\psi_\mathrm{p}$.
Kozai-Lidov oscillations might, however, modify the details of the long-term
behavior of the inner disk, since $\tau_\mathrm{KL}$ is comparable to the
mass-depletion time $\tau$ (Equation~(\ref{eq:deplete})) that underlies the
secular evolution of the system.
Our model takes into account the tidal interaction of the spinning star with the
inner and outer disks and with the planet, which was not considered in the
aforementioned simulations.
The inclusion of this interaction is motivated by the finding
\citep{BatyginAdams13, Lai14, SpaldingBatygin14} that an evolving protoplanetary
disk with a binary companion on an inclined orbit can experience a resonance
between the disk precession frequency (driven by the companion) and the stellar
precession frequency (driven by the disk), and that this resonance crossing can
generate a strong misalignment between the angular momentum vectors of the disk
and the star.
As it turns out (see Section~\ref{sec:results}), in the case that we
consider---in which the companion is a Jupiter-mass planet with an orbital
radius of a few au rather than a solar-mass star at a distance of a few hundred
au---this resonance is not encountered.
We also show that, even in the case of a binary companion, the misalignment
effect associated with the resonance crossing is weaker than that inferred in
the above works when one also takes into account the torque that the \emph{star}
exerts on the inner disk (see Appendix~\ref{app:resonance}).
\subsection{Equations}
We model the dynamics of the system by following the temporal evolution of the
angular momenta ($\boldsymbol{S}$, $\boldsymbol{D}$, $\boldsymbol{P}$, and
$\boldsymbol{H}$) of the four constituents (the star, the inner disk, the
planet, and the outer disk, respectively) due to their mutual gravitational
torques.
Given that the orbital period of the planet is much shorter than the
characteristic precession time scales of the system, we approximate the planet
as a ring of uniform density, with a total mass equal to that of the planet and
a radius equal to its semimajor axis.
The evolution of the angular momentum $\boldsymbol L_k$ of an object $k$ under
the influence of a torque $\boldsymbol T_{ik}$ exerted by an object $i$ is given
by $d\boldsymbol L_k/dt = \boldsymbol T_{ik}$.
The set of equations that describes the temporal evolution of the four angular
momenta is thus
\begin{equation}
\frac{d\boldsymbol S}{dt} = \boldsymbol T_\mathrm{ds}
+ \boldsymbol T_\mathrm{ps} + \boldsymbol T_\mathrm{hs}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol D}{dt} = \boldsymbol T_\mathrm{sd}
+ \boldsymbol T_\mathrm{pd} + \boldsymbol T_\mathrm{hd}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol P}{dt} = \boldsymbol T_\mathrm{sp}
+ \boldsymbol T_\mathrm{dp} + \boldsymbol T_\mathrm{hp}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol H}{dt} = \boldsymbol T_\mathrm{sh}
+ \boldsymbol T_\mathrm{dh} + \boldsymbol T_\mathrm{ph}\,,
\end{equation}
where $\boldsymbol T_{ik} = -\boldsymbol T_{ki}$.
The above equations can also be expressed in terms of the precession frequencies
$\Omega_{ik}$:
\begin{equation}
\frac{d\boldsymbol L_k}{dt}
= \sum_i\boldsymbol T_{ik}
= \sum_i\Omega_{ik}\frac{\boldsymbol L_i\times\boldsymbol L_k}{J_{ik}}\,,
\label{eq:precession}
\end{equation}
where $J_{ik} = |\boldsymbol L_i + \boldsymbol L_k|
= (L_i^2 + L_k^2 + 2L_iL_k\cos{\theta_{ik}})^{1/2}$ and
$\Omega_{ik} = \Omega_{ki}$.
In Appendix~\ref{app:torques} we derive analytic expressions for the torques
$\boldsymbol T_{ik}$ and the corresponding precession frequencies $\Omega_{ik}$.
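For concreteness, the following minimal Python/NumPy sketch (not part of our
integration code) evaluates the right-hand side of
Equation~(\ref{eq:precession}); the precession frequencies $\Omega_{ik}$, which
depend on the instantaneous masses and mutual inclinations through the
expressions derived in Appendix~\ref{app:torques}, are treated here as a given
input array:
\begin{verbatim}
import numpy as np

def dLdt(L, Omega):
    # Right-hand side of the precession-form equations:
    #   dL_k/dt = sum_i Omega_ik (L_i x L_k) / J_ik,
    # where L is a (4, 3) array holding S, D, P, H and Omega is a
    # symmetric (4, 4) array of precession frequencies (diagonal unused).
    dL = np.zeros_like(L)
    for k in range(4):
        for i in range(4):
            if i == k:
                continue
            J_ik = np.linalg.norm(L[i] + L[k])
            dL[k] += Omega[i, k] * np.cross(L[i], L[k]) / J_ik
    return dL
\end{verbatim}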
\subsection{Numerical Setup}
The host is assumed to be a protostar of mass $M_* = M_\odot$,
radius $R_* = 2R_\odot$, rotation rate $\Omega_* = 0.1(GM_*/R_*^3)^{1/2}$, and
angular momentum
\begin{eqnarray}
S &=& k_*M_*R_*^2\Omega_* =
1.71 \times 10^{50}\\
&\times& \left(\frac{k_*}{0.2}\right) \left(\frac{M_*}{M_\odot}\right)
\left(\frac{R_*}{2R_\odot}\right)^2
\left(\frac{\Omega_*}{0.1\sqrt{GM_\odot/(2R_\odot)^3}}\right)\,
\mathrm{erg\,s}\nonumber\,,
\end{eqnarray}
where $k_* \simeq 0.2$ for a fully convective star (modeled as a polytrope of
index $n = 1.5$).
The planet is taken to have Jupiter's mass and radius,
$M_\mathrm{p} = M_\mathrm{J}$ and $R_\mathrm{p} = R_\mathrm{J}$, and a
fixed semimajor axis, $a = 5$\,au, so that its orbital angular momentum is
\begin{eqnarray}
P &=& M_\mathrm{p}(GM_*a)^{1/2} =
1.89 \times 10^{50}
\label{eq:P}\\
&&\times \left(\frac{M_\mathrm{p}}{M_\mathrm{J}}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\,\mathrm{erg\,s}\,.\nonumber
\end{eqnarray}
We consider two values for the total initial disk mass: (1)
$M_\mathrm{t0} = 0.1\,M_*$, corresponding to a comparatively massive disk, and
(2) $M_\mathrm{t0} = 0.02\,M_*$, corresponding to a highly evolved system that
has entered the transition-disk phase.
In both cases we take the disk surface density to scale with radius as $r^{-1}$.
The inner disk extends from $r_\mathrm{d,in} = 4R_\odot$ to
$r_\mathrm{d,out} = a$, and initially has $10\%$ of the total mass.
Its angular momentum is
\begin{eqnarray}
D &=& \frac{2}{3}M_\mathrm{d}\left(GM_*\right)^{1/2}
\frac{r_\mathrm{d,out}^{3/2} - r_\mathrm{d,in}^{3/2}}
{r_\mathrm{d,out} - r_\mathrm{d,in}}
\label{eq:D}\\
&\simeq& 1.32 \times 10^{51}\,
\left(\frac{M_\mathrm{d}}{0.01M_\odot}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s} \nonumber \,.
\end{eqnarray}
The outer disk has edges at $r_\mathrm{h,in} = a$ and
$r_\mathrm{h,out} = 50$\,au, and angular momentum
\begin{eqnarray}
H &=& \frac{2}{3}M_\mathrm{h}\left(GM_*\right)^{1/2}
\frac{r_\mathrm{h,out}^{3/2} - r_\mathrm{h,in}^{3/2}}
{r_\mathrm{h,out} - r_\mathrm{h,in}}\\
&\simeq& 3.76 \times 10^{52}\,
\left(\frac{M_\mathrm{h}}{0.09M_\odot}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{r_\mathrm{h,out}}{50\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s}
\nonumber\,.
\end{eqnarray}
We model mass depletion in the disk using the expression first employed in this
context by \citet{BatyginAdams13},
\begin{equation}
M_\mathrm{t}(t) = \frac{M_{\mathrm{t}}(t=0)}{1 + t/\tau}\,,
\label{eq:deplete}
\end{equation}
where we adopt $M_{\mathrm{t}}(t=0)=0.1\,M_\sun$ and $\tau = 0.5$\,Myr as in
\citet{Lai14}.
We assume that this expression can also be applied separately to the inner and
outer parts of the disk.
The time evolution of the inner disk's angular momentum due to mass depletion is
thus given by
\begin{equation}
\label{eq:dDdt}
\left(\frac{d\boldsymbol{D}}{dt}\right)_\mathrm{depl}
= -\frac{D_0}{\tau(1 + t/\tau)^2}\hat{\boldsymbol{D}}
= -\frac{\boldsymbol{D}}{\tau+t}\,.
\end{equation}
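For illustration, a single explicit-Euler update of the inner disk's angular
momentum then combines the gravitational torques with this depletion term (a
sketch only, with the torque sum assumed to be precomputed from the expressions
in Appendix~\ref{app:torques}):
\begin{verbatim}
import numpy as np

def step_inner_disk(D_vec, torque_sum, t, dt, tau=5.0e5):
    # One explicit-Euler update of the inner disk's angular momentum
    # vector (times in yr, so tau = 0.5 Myr).  torque_sum is the assumed
    # precomputed sum T_sd + T_pd + T_hd; the second term is the
    # depletion contribution dD/dt = -D/(tau + t), which shrinks the
    # magnitude of D without changing its direction.
    D_vec = D_vec + dt * torque_sum
    D_vec = D_vec - dt * D_vec / (tau + t)
    return D_vec
\end{verbatim}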
For the outer disk we assume that the presence of the planet inhibits efficient
mass accretion, and we consider the following limits: (1) the outer disk's mass
remains constant, and (2) the outer disk loses mass (e.g., through
photoevaporation) at the rate given by Equation~(\ref{eq:deplete}).\footnote{
After the inner disk tilts away from the outer disk, the inner rim of the outer
disk becomes exposed to the direct stellar radiation field, which accelerates
the evaporation process \citep{Alexander+06}.
According to current models, disk evaporation is induced primarily by X-ray and
FUV photons and occurs at a rate of
$\sim$$10^{-9}$--$10^{-8}\,M_\sun\,\mathrm{yr}^{-1}$ for typical stellar
radiation fields (see \citealt{Gorti+16} for a review).
Even if the actual rate is near the lower end of this range, the outer disk in
our low-$M_{\rm t0}$ models would be fully depleted of mass on a timescale of
$\sim$$10$\,Myr; however, a similar outcome for the high-$M_\mathrm{t0}$ models
would require the mass evaporation rate to be near the upper end of the
estimated range.}
We assume that any angular momentum lost by the disk is transported out of the
system (for example, by a disk wind).
We adopt a Cartesian coordinate system ($x,\,y,\,z$) as the ``lab'' frame of
reference (see Figure~\ref{fig:initial}).
Initially, the equatorial plane of the star and the planes of the inner and
outer disks coincide with the $x$--$y$ plane (i.e.,
$\psi_\mathrm{s0} = \psi_\mathrm{d0} = \psi_\mathrm{h0} = 0$, where $\psi_k$
denotes the angle between $\boldsymbol{L}_k$ and the $z$ axis), and only the
orbital plane of the planet has a finite initial inclination
($\psi_\mathrm{p0}$).
The $x$ axis is chosen to coincide with the initial line of nodes of the
planet's orbital plane.
\begin{table*}
\begin{center}
\caption{Model parameters\label{tab:models}}
\begin{tabular}{l|cccccccccc}
\hline\hline
Model & $\boldsymbol{S}$ & $\boldsymbol{D}$ & $\boldsymbol{P}$ & $\boldsymbol{H}$
& $M_\mathrm{d0} \ [M_*] $ & $M_\mathrm{h0}\ [M_*]$ & $M_\mathrm{t0}\ [M_*]$
& $M_\mathrm{p}$ & $a$ [au] & $\psi_\mathrm{p0}\ [^\circ]$ \\
\hline
\texttt{DP-M} & -- & $\surd$ & $\surd$ & -- & $0.010\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{DP-m} & -- & $\surd$ & $\surd$ & -- & $0.002\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-M} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090\downarrow$ & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-m} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-Mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090$ -- & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018$ -- & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{retrograde} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $110$ \\
\texttt{binary} & $\surd$ & $\surd$ & $\surd$ & -- & -- & -- & $\ \ \,0.10\downarrow$ & $M_\odot$ & $300$ & $10$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
Table~\ref{tab:models} presents the models we explore and summarizes the
relevant parameters.
Specifically, column 1 contains the models' designations (with the letters
\texttt{M} and \texttt{m} denoting, respectively, high and low disk masses at
time $t=t_0$), columns 2--5 indicate which system components are being
considered, columns 6--9 list the disk and planet masses (with the arrow
indicating active mass depletion), and columns 10 and~11 give the planet's
semimajor axis and initial misalignment angle, respectively.
The last listed model (\texttt{binary}) does not correspond to a planet
misaligning the inner disk but rather to a binary star tilting the entire disk.
This case is considered for comparison with the corresponding model in
\citet{Lai14}.
\section{Results}
\label{sec:results}
The gravitational interactions among the different components of the system that
we consider (star, inner disk, planet, and outer disk) can result in a highly
nonlinear behavior.
To gain insight into these interactions we start by analyzing a much simpler
system, one consisting only of the inner disk and the (initially misaligned)
planet.
The relevant timescales that characterize the evolution of this system are the
precession period $\tau_\mathrm{dp} \equiv 2\pi/\Omega_\mathrm{dp}$
(Equation~(\ref{eq:omega_dp})) and the mass depletion timescale
$\tau = 5\times 10^5\,$yr (Equation~(\ref{eq:deplete})).
\begin{figure*}
\includegraphics[width=\textwidth]{DP-M_fig2.eps}
\caption{
Time evolution of a ``reduced'' system, consisting of just a planet and an
inner disk, for an initial disk mass $M_\mathrm{d0} = 0.01\,M_*$
(model~\texttt{DP-M}).
Top left: the angles that the angular momentum vectors $\boldsymbol{D}$,
$\boldsymbol{P}$ and $\boldsymbol{J}_\mathrm{dp}$ form with the $z$ axis
(the initial direction of $\boldsymbol{D}$), as well as the angle between
$\boldsymbol{D}$ and $\boldsymbol{P}$.
Top right: the projections of the angular momentum unit vectors onto the
$x$--$y$ plane.
Bottom left: the characteristic precession frequency.
Bottom right: the magnitudes of the angular momentum vectors.
In the left-hand panels, the initial $0.1$\,Myr of the evolution is
displayed at a higher resolution.
\label{fig:DP-M}}
\end{figure*}
Figure~\ref{fig:DP-M} shows the evolution of such a system for the case
(model~\texttt{DP-M}) where a Jupiter-mass planet on a misaligned orbit
($\psi_\mathrm{p0} = 60^\circ$) torques an inner disk of initial mass
$M_\mathrm{d0} = 0.01\,M_*$ (corresponding to $M_\mathrm{t0} = 0.1\,M_*$, i.e.,
to $t_0 = 0$ when $M_* = M_\sun$; see Equation~(\ref{eq:deplete})).
The top left panel exhibits the angles $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$
(blue: inner disk; red: planet) as a function of time.
In this and the subsequent figures, we show results for a total duration of
$10$\,Myr.
This is long enough in comparison with $\tau$ to capture the secular evolution
of the system, which is driven by the mass depletion in the inner disk.
To capture the details of the oscillatory behavior associated with the
precession of the individual angular momentum vectors ($\boldsymbol{D}$ and
$\boldsymbol{P}$) about the total angular momentum vector
$\boldsymbol{J}_\mathrm{dp} = \boldsymbol{D} + \boldsymbol{P}$ (subscript
j)---which takes place on the shorter timescale $\tau_\mathrm{dp}$
($\simeq 9\times 10^3$\,yr at $t = t_0$)---we display the initial $0.1$\,Myr in
the top left panel using a higher time resolution and, in addition, show the
projected trajectories of the unit vectors $\hat{\boldsymbol{D}}$,
$\hat{\boldsymbol{P}}$, and $\hat{\boldsymbol{J}}_\mathrm{dp}$ in the $x$--$y$
plane during this time interval in the top right panel.
Given that $0.1\,{\rm Myr} \ll \tau$, the vectors $\hat{\boldsymbol{D}}$ and
$\hat{\boldsymbol{P}}$ execute a circular motion about
$\hat{\boldsymbol{J}}_\mathrm{dp}$ with virtually constant inclinations with
respect to the latter vector (given by the angles $\theta_\mathrm{jd}$ and
$\theta_\mathrm{jp}$, respectively), and the orientation of
$\hat{\boldsymbol{J}}_\mathrm{dp}$ with respect to the $z$ axis (given by the
angle $\psi_\mathrm{j}$) also remains essentially unchanged.
(The projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$ on the $x$--$y$ plane is
displaced from the center along the $y$ axis, reflecting the fact that the
planet's initial line of nodes coincides with the $x$ axis.)
As the vectors $\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$ precess about
$\hat{\boldsymbol{J}}_\mathrm{dp}$, the angles $\psi_\mathrm{d}$ and
$\psi_\mathrm{p}$ oscillate in the ranges
$|\psi_\mathrm{j} - \theta_\mathrm{jd}| \leq \psi_\mathrm{d} \leq
\psi_\mathrm{j} + \theta_\mathrm{jd}$ and
$|\psi_\mathrm{j} - \theta_\mathrm{jp}| \leq \psi_\mathrm{p} \leq
\psi_\mathrm{j} + \theta_\mathrm{jp}$, respectively.
\begin{figure}
\begin{center}
\includegraphics{misalignment_fig3.eps}
\end{center}
\caption{
Schematic sketch of the change in the total angular momentum vector
$\boldsymbol{J}_\mathrm{dp}$ that is induced by mass depletion from the disk
in the limit where the precession period $\tau_{\rm dp}$ is much shorter
than the characteristic depletion time $\tau$.
The two depicted configurations are separated by $0.5\,\tau_\mathrm{dp}$.
\label{fig:vectors}}
\end{figure}
A notable feature of the evolution of this system on a timescale $\gtrsim \tau$
is the increase in the angle $\psi_\mathrm{d}$ (blue line in the top left
panel)---indicating progressive misalignment of the disk with respect to its
initial orientation---as the magnitude of the angular momentum $\boldsymbol{D}$
decreases with the loss of mass from the disk (blue line in the bottom right
panel).
At the same time, the orbital plane of the planet (red line in the top left
panel) tends toward alignment with $\boldsymbol{J}_\mathrm{dp}$.
The magenta lines in the top left and bottom right panels indicate that the
orientation of the vector $\boldsymbol{J}_\mathrm{dp}$ remains fixed even as its
magnitude decreases (on a timescale $\gtrsim \tau$) on account of the decrease
in the magnitude of $\boldsymbol{D}$.
As we demonstrate analytically in Appendix~\ref{app:Jdp}, the constancy of
$\psi_\mathrm{j}$ is a consequence of the inequality
$\tau_\mathrm{dp} \ll \tau$.
To better understand the evolution of the disk and planet orientations, we
consider the (small) variations in $\boldsymbol{D}$ and
$\boldsymbol{J}_\mathrm{dp}$ that are induced by mass depletion over a small
fraction of the precession period.
On the left-hand side of Figure~\ref{fig:vectors} we show a schematic sketch of
the orientations of the vectors $\boldsymbol{D}$, $\boldsymbol{P}$, and
$\boldsymbol{J}_\mathrm{dp}$ at some given time (denoted by the subscript 1) and
a short time later (subscript 2).
During that time interval the vector $\boldsymbol{J}_\mathrm{dp}$ tilts slightly
to the left, and as a result it moves away from $\boldsymbol{D}$ and closer to
$\boldsymbol{P}$.
The sketch on the right-hand side of Figure~\ref{fig:vectors} demonstrates that,
if we were to consider the same evolution a half-cycle later, the same
conclusion would be reached: in this case the vector
$\boldsymbol{J}_{\mathrm{dp}3}$ moves slightly to the right (to become
$\boldsymbol{J}_{\mathrm{dp}4}$), with the angle between
$\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{D}$ again increasing even as the
angle between $\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{P}$ decreases.
The angles between the total angular momentum vector and the vectors
$\boldsymbol{D}$ and $\boldsymbol{P}$ are thus seen to undergo a systematic,
secular variation.
The sketch in Figure~\ref{fig:vectors} also indicates that the vector
$\boldsymbol{J}_\mathrm{dp}$ undergoes an oscillation over each precession
cycle.
However, when $\tau_\mathrm{dp} \ll \tau$ and the fractional decrease in
$M_\mathrm{d}$ over a precession period remains $\ll 1$, the amplitude of the
oscillation is very small and $\boldsymbol{J}_\mathrm{dp}$ practically maintains
its initial direction (see Appendix~\ref{app:Jdp} for a formal demonstration of
this result).
In the limit where the disk mass becomes highly depleted and $D \to 0$,
$\boldsymbol{J}_\mathrm{dp} \to \boldsymbol{P}$, i.e., the planet aligns with
the initial direction of $\boldsymbol{J}_\mathrm{dp}$
($\theta_\mathrm{jp} \to 0$ and $\psi_\mathrm{p} \to \psi_\mathrm{j}$).
The disk angular momentum vector then precesses about $\boldsymbol{P}$, with its
orientation angle $\psi_\mathrm{d}$ (blue line in top left panel of
Figure~\ref{fig:DP-M}) oscillating between
$|\psi_\mathrm{p} - \theta_\mathrm{dp}|$ and
$\psi_\mathrm{p} + \theta_\mathrm{dp}$.\footnote{
The angle $\theta_\mathrm{dp}$ between $\boldsymbol{D}$ and $\boldsymbol{P}$
(cyan line in the top left panel of Figure~\ref{fig:DP-M}) remains constant
because there are no torques that can modify it.}
Note that the precession frequency is also affected by the disk's mass depletion
and decreases with time (see Equation~(\ref{eq:omega_dp})); the time evolution
of $\Omega_{\rm dp}$ is shown in the bottom left panel of Figure~\ref{fig:DP-M}.
\begin{figure*}
\includegraphics[width=\textwidth]{DP-m_fig4.eps}
\caption{
Same as Figure~\ref{fig:DP-M}, except that $M_\mathrm{d0} = 0.002\,M_*$
(model~\texttt{DP-m}).
\label{fig:DP-m}}
\end{figure*}
Figure~\ref{fig:DP-m} shows the evolution of a similar
system---model~\texttt{DP-m}---in which the inner disk has a lower initial mass,
$M_\mathrm{d0} = 0.002\,M_*$ (corresponding to $M_\mathrm{t0} = 0.02\,M_*$,
i.e., to $t_0=2$\,Myr when $M_*=M_\sun$; see Equation~(\ref{eq:deplete})).
The initial oscillation frequency in this case is lower than in model
\texttt{DP-M}, as expected from Equation~(\ref{eq:omega_dp}), but it attains the
same asymptotic value (bottom left panel), corresponding to the limit
$J_\mathrm{dp} \to P$ in which $\Omega_\mathrm{dp}$ becomes independent of
$M_\mathrm{d}$.
The initial value of $J_\mathrm{dp}/D$ is higher in the present model than in
the model considered in Figure~\ref{fig:DP-M} ($\simeq 1.5$ vs. $\simeq 1.1$;
see Equations~(\ref{eq:P}) and~(\ref{eq:D})), which results in a higher value of
$\psi_\mathrm{j}$ (and, correspondingly, a higher initial value of
$\theta_\mathrm{jd}$ and lower initial value of $\theta_\mathrm{jp}$).
The higher value of $\psi_\mathrm{j}$ is the reason why the oscillation
amplitude of $\psi_\mathrm{d}$ and the initial oscillation amplitude of
$\psi_\mathrm{p}$ (top left panel) are larger in this case.
The higher value of $J_\mathrm{dp}/D_0$ in Figure~\ref{fig:DP-m} also accounts
for the differences in the projection map shown in the top right panel (a larger
$y$ value for the projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$, a larger
area encircled by the projection of $\hat{\boldsymbol{D}}$, and a smaller area
encircled by the projection of $\hat{\boldsymbol{P}}$).
\begin{figure*}
\includegraphics[width=\textwidth]{all-M_fig5.eps}
\caption{
Time evolution of the full system (star, inner disk, planet, outer disk) for
an initial inner disk mass $M_\mathrm{d0} = 0.01\,M_*$ and initial total
disk mass $M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}).
Panel arrangement is the same as in Figure~\ref{fig:DP-M}, although the
details of the displayed quantities---which are specified in each panel and
now also include the angular momenta of the star ($\boldsymbol{S}$) and the
outer disk ($\boldsymbol{H}$)---are different.
\label{fig:all-M}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{all-m_fig6.eps}
\caption{
Same as Figure~\ref{fig:all-M}, except that $M_\mathrm{d0} = 0.002\,M_*$ and
$M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m}).
\label{fig:all-m}}
\end{figure*}
We now consider the full system for two values of the total disk mass:
$M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}, corresponding to $t_0 = 0$;
Figure~\ref{fig:all-M}) and $M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m},
corresponding to $t_0 = 2$\,Myr; Figure~\ref{fig:all-m}), assuming that both
parts of the disk lose mass according to the relation given by
Equation~(\ref{eq:deplete}).
The inner disks in these two cases correspond, respectively, to the disk masses
adopted in model~\texttt{DP-M} (Figure~\ref{fig:DP-M}) and model~\texttt{DP-m}
(Figure~\ref{fig:DP-m}).
The merit of first considering the simpler systems described by the latter
models becomes apparent from a comparison between the respective figures.
It is seen that the basic behavior of model~\texttt{all-M} is similar to that of
model~\texttt{DP-M}, and that the main differences between model~\texttt{all-M}
and model~\texttt{all-m} are captured by the way in which model~\texttt{DP-m} is
distinct from model~\texttt{DP-M}.
The physical basis for this correspondence is the centrality of the torque
exerted on the inner disk by the planet.
According to Equation~(\ref{eq:precession}), the relative magnitudes of the
torques acting on the disk at sufficiently late times (after $D$ becomes smaller
than the angular momentum of each of the other system components) are reflected
in the magnitudes of the corresponding precession frequencies.
The dominance of the planet's contribution can thus be inferred from the plots
in the bottom left panels of Figures~\ref{fig:all-M} and~\ref{fig:all-m}, which
show that, after the contribution of $D$ becomes unimportant (bottom right
panels), the precession frequency induced by the planet exceeds those induced by
the outer disk and by the star.\footnote{
The star--planet and star--outer-disk precession frequencies
($\Omega_\mathrm{sp}$ and~$\Omega_\mathrm{sh}$; see
Equations~(\ref{eq:omega_sp}) and~(\ref{eq:omega_sh})) are not shown in these
figures because they are too low to fit in the plotted range.}
While the basic disk misalignment mechanism is the same as in the
planet--inner-disk system, the detailed behavior of the full system is
understandably more complex.
One difference that is apparent from a comparison of the left-hand panels in
Figures~\ref{fig:all-M} and~\ref{fig:DP-M} is the higher oscillation frequency
of $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$ in the full model (with the same
frequency also seen in the timeline of $\psi_\mathrm{s}$).
In this case the planet--outer-disk precession frequency $\Omega_\mathrm{ph}$
(Equation~(\ref{eq:omega_ph})) and the inner-disk--outer-disk precession
frequency $\Omega_\mathrm{dh}$ (Equation~(\ref{eq:omega_dh})) are initially
comparable and larger than $\Omega_\mathrm{dp}$, and $\Omega_\mathrm{ph}$
remains the dominant frequency throughout the system's evolution.
The fact that the outer disk imposes a precession on both $\boldsymbol{P}$ and
$\boldsymbol{D}$ has the effect of weakening the interaction between the planet
and the inner disk, which slows down the disk misalignment process.
Another difference is revealed by a comparison of the top right panels: in the
full system, $\hat{\boldsymbol{J}}_\mathrm{dp}$ precesses on account of the
torque induced by the outer disk, so it no longer corresponds to just a single
point in the $x$--$y$ plane.
This, in turn, increases the sizes of the regions traced in this plane by
$\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$.
The behavior of the lower-$M_\mathrm{t0}$ model shown in Figure~\ref{fig:all-m}
is also more involved.
In this case, in addition to the strong oscillations of the angles $\psi_i$
already manifested in Figure~\ref{fig:DP-m}, the different precession
frequencies $\Omega_{ik}$ also exhibit large-amplitude oscillations, reflecting
their dependence on the angles $\theta_{ik}$ between the angular momentum
vectors.
In both of the full-system models, the strongest influence on the star is
produced by its interaction with the inner disk, but the resulting precession
frequency ($\Omega_\mathrm{sd}$) remains low.
Therefore, the stellar angular momentum vector essentially retains its original
orientation, which implies that the angle $\psi_\mathrm{d}$ is a good proxy for
the angle between the primordial stellar spin and the orbit of any planet that
eventually forms in the inner disk.
\begin{figure}
\includegraphics[width=\columnwidth]{all-Mx_fig7.eps}
\caption{
Time evolution of the full system in the limit where only the inner disk
undergoes mass depletion and the mass of the outer disk remains unchanged,
for the same initial conditions as in Figure~\ref{fig:all-M}
(model~\texttt{all-Mx}).
The top and bottom panels correspond, respectively, to the top left and
bottom left panels of Figure~\ref{fig:all-M}, but in this case the initial
$0.1$\,Myr of the evolution is not displayed at a higher resolution.
\label{fig:all-Mx}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{all-mx_fig8.eps}
\caption{
Same as Figure~\ref{fig:all-Mx}, but for the initial conditions of
Figure~\ref{fig:all-m} (model~\texttt{all-mx}).
\label{fig:all-mx}}
\end{figure}
We repeated the calculations shown in Figures~\ref{fig:all-M}
and~\ref{fig:all-m} under the assumption that only the inner disk loses mass
while $M_\mathrm{h}$ remains constant (models~\texttt{all-Mx}
and~\texttt{all-mx}; Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx},
respectively).
At the start of the evolution, the frequencies $\Omega_\mathrm{ph}$ and
$\Omega_\mathrm{dh}$ are $\propto$$M_\mathrm{h}$, whereas $\Omega_\mathrm{dp}$
scales linearly (or, in the case of the lower-$M_\mathrm{d0}$ model, close to
linearly) with $M_\mathrm{d}$ (see Appendix~\ref{app:torques}).
In the cases considered in Figures~\ref{fig:all-M} and~\ref{fig:all-m} all these
frequencies decrease with time, so the relative magnitude of
$\Omega_\mathrm{dp}$ remains comparatively large throughout the evolution.
In contrast, in the cases shown in Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx}
the frequencies $\Omega_\mathrm{ph}$ and $\Omega_\mathrm{dh}$ remain constant
and only $\Omega_\mathrm{dp}$ decreases with time.
As the difference between $\Omega_\mathrm{dp}$ and the other two frequencies
starts to grow, the inner disk misalignment process is aborted, and thereafter
the mean values of $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$ remain constant.
This behavior is consistent with our conclusion about the central role that the
torque exerted by the planet plays in misaligning the inner disk: when the fast
precession that the outer disk induces in the orbital motions of both the planet
and the inner disk comes to dominate the system dynamics, the direct coupling
between the planet and the inner disk is effectively broken and the misalignment
process is halted.
Note, however, from Figure~\ref{fig:all-mx} that, even in this case, the angle
$\psi_\mathrm{d}$ can attain a high value (as part of a large-amplitude
oscillation) when $M_\mathrm{t0}$ is small.
\begin{figure*}
\includegraphics[width=\textwidth]{retrograde_fig9.eps}
\caption{
Time evolution with the same initial conditions as in
Figure~\ref{fig:all-m}, except that the planet is initially on a retrograde
orbit ($\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$;
model~\texttt{retrograde}).
The display format is the same as in Figure~\ref{fig:all-Mx}, but in this
case the panels also show a zoomed-in version of the evolution around the
time of the jumps in $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$.
The dashed line in the top panel marks the transition between prograde and
retrograde orientations ($90^\circ$).
\label{fig:retrograde}}
\end{figure*}
To determine whether the proposed misalignment mechanism can also account for
disks (and, eventually, planets) on retrograde orbits, we consider a system in
which the companion planet is placed on such an orbit
(model~\texttt{retrograde}, which is the same as model~\texttt{all-m} except
that $\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$).
As Figure~\ref{fig:retrograde} demonstrates, the disk in this case evolves to a
retrograde configuration ($\psi_\mathrm{d} > 90^\circ$) at late times even as
the planet's orbit reverts to prograde motion.
A noteworthy feature of the plotted orbital evolution (shown in the
high-resolution portion of the figure) is the rapid increase in the value of
$\psi_\mathrm{d}$ (which is an adequate proxy for $\theta_\mathrm{sd}$ also in
this case)---and corresponding fast decrease in the value of
$\psi_\mathrm{p}$---that occurs when the planet's orbit transitions from a
retrograde to a prograde orientation.
This behavior can be traced to the fact that $\cos{\theta_\mathrm{ph}}$ vanishes
at essentially the same time that $\psi_\mathrm{p}$ crosses $90^\circ$ because
the outer disk (which dominates the total angular momentum) remains well aligned
with the $z$ axis.
This, in turn, implies (see Equation~(\ref{eq:omega_ph})) that, at the time of
the retrograde-to-prograde transition, the planet becomes dynamically decoupled
from the outer disk and only retains a coupling to the inner disk.
Its evolution is, however, different from that of a ``reduced'' system, in which
only the planet and the inner disk interact, because the inner disk remains
dynamically ``tethered'' to the outer disk ($\theta_\mathrm{dh}\ne 90^\circ$).
As we verified by an explicit calculation, the evolution of the reduced system
remains smooth when $\psi_\mathrm{p}$ crosses $90^\circ$.
The jump in $\psi_\mathrm{p}$ exhibited by the full system leads to a
significant increase in the value of $\cos{\theta_\mathrm{ph}}$ and hence of
$\Omega_\mathrm{ph}$, which, in turn, restores (and even enhances) the planet's
coupling to the outer disk after its transition to retrograde motion (see bottom
panel of Figure~\ref{fig:retrograde}).
The maximum value attained by $\theta_\mathrm{sd}$ in this example is
$\simeq 172^\circ$, which, just as in the prograde case shown in
Figure~\ref{fig:all-m}, exceeds the initial misalignment angle of the planetary
orbit (albeit to a much larger extent in this case).
It is, however, worth noting that not all model systems in which the planet is
initially on a retrograde orbit give rise to a retrograde inner disk at the end
of the prescribed evolution time; in particular, we found that the outcome of
the simulated evolution (which depends on whether $\psi_\mathrm{p}$ drops below
$90^\circ$) is sensitive to the value of the initial planetary misalignment
angle $\psi_{\mathrm{p}0}$ (keeping all other model parameters unchanged).
In concluding this section it is instructive to compare the results obtained
for our model with those found for the model originally proposed by
\citet{Batygin12} (see Section~\ref{sec:intro} for references to additional work
on that model).
We introduced our proposed scenario as a variant of the latter model, with a
close-by giant planet taking the place of a distant stellar companion.
In the original proposal the disk misalignment was attributed to the
precessional motion that is induced by the torque that the binary companion
exerts on the disk.
In this picture the spin--orbit angle oscillates (on a timescale $\sim$1\,Myr
for typical parameters) between $0^\circ$ and roughly twice the binary orbital
inclination, so it can be large if observed at the ``right'' time.
Our model retains this feature of the earlier proposal, particularly in cases
where the companion planet is placed on a high-inclination orbit after the disk
has already lost much of its initial mass, but it also exhibits a novel feature
that gives rise to a secular (rather than oscillatory) change in the spin--orbit
angle (which can potentially lead to a substantial increase in this angle).
This new behavior represents an ``exchange of orientations'' between the planet
and the inner disk that is driven by the mass loss from the inner disk and
corresponds to a decrease of the inner disk's angular momentum from a value
higher than that of the planet to a lower value (with the two remaining within
an order of magnitude of each other for representative parameters).
This behavior is not found in a binary system because of the large mismatch
between the angular momenta of the companion and the disk in that case (and, in
fact, it is also suppressed in the case of a planetary companion when the mass
of the outer disk is not depleted).
As we already noted in Section~\ref{subsec:assumptions}, \citet{BatyginAdams13}
suggested that the disk misalignment in a binary system can be significantly
increased due to a resonance between the star--disk and binary--disk precession
frequencies.
(We can use Equations~(\ref{eq:omega_sd}) and~(\ref{eq:omega_dp}), respectively,
to evaluate these frequencies, plugging in values for the outer disk radius,
companion orbital radius, and companion mass that are appropriate for the binary
case.)
\citet{Lai14} clarified the effect of this resonance and emphasized that, for
plausible system parameters, it can be expected to be crossed as the disk
becomes depleted of mass.
However, for the planetary-companion systems considered in this paper the ratio
$|\Omega_\mathrm{sd}/\Omega_\mathrm{dp}|$ remains $< 1$ throughout the
evolution, so no such resonance is encountered in this case.
In both of these systems $\Omega_\mathrm{sd}$ is initially
$\propto M_\mathrm{d}$, so it decreases during the early evolution.
The same scaling also characterizes $\Omega_\mathrm{dp}$ in the planetary case,
which explains why the corresponding curves do not cross.
In contrast, in the binary case (for which the sum of the disk and companion
angular momenta is dominated by the companion's contribution) the frequency
$\Omega_\mathrm{dp}$ does not scale with the disk mass and it thus remains
nearly constant, which makes it possible for the corresponding curves to cross
(see Figure~\ref{fig:binary} in Appendix~\ref{app:resonance}).
Since our formalism also encompasses the binary case, we examined one such
system (model~\texttt{binary})---using the parameters adopted in figure~3 of
\citet{Lai14}---for comparison with the results of that work.
Our findings are presented in Appendix~\ref{app:resonance}.
\section{Discussion}
\label{sec:discussion}
The model considered in this paper represents a variant of the primordial disk
misalignment scenario of \citet{Batygin12} in which the companion is a nearby
planet rather than a distant star and only the inner region of the
protoplanetary disk (interior to the planet's orbit) becomes inclined.
In this section we assess whether this model provides a viable framework for
interpreting the relevant observations.
The first---and most basic---question that needs to be addressed is whether the
proposed misalignment mechanism is compatible with the broad range of apparent
spin--orbit angles indicated by the data.
In Section~\ref{sec:results} we showed that the spin--orbit angle
$\theta_\mathrm{sd}$ can deviate from its initial value of $0^\circ$ either
because of the precessional motion that is induced by the planet's torque on the
disk or on account of the secular variation that is driven by the mass depletion
process.
In the ``reduced'' disk--planet model considered in Figures~\ref{fig:DP-M}
and~\ref{fig:DP-m}, for which the angle $\psi_\mathrm{d}$ is taken as a proxy
for the intrinsic spin--orbit angle, the latter mechanism increases
$\theta_\mathrm{sd}$ to $\sim$$45^\circ$--$50^\circ$ on a timescale of $10$\,Myr
for an initial planetary inclination $\psi_\mathrm{p0} = 60^\circ$.
The maximum disk misalignment is, however, increased above this value by the
precessional oscillation, whose amplitude is higher the lower the initial mass
of the disk.
Based on the heuristic discussion given in connection with
Figure~\ref{fig:vectors}, the maximum possible value of $\psi_\mathrm{d}$
(corresponding to the limit $J_\mathrm{dp} \to P$) is given by
\begin{equation}
\label{eq:psi_max}
\psi_\mathrm{d,max} = \arccos\frac{D_0 + P\cos\psi_\mathrm{p0}}
{(D_0^2 + P^2 + 2D_0P\cos\psi_\mathrm{p0})^{1/2}} + \psi_\mathrm{p0}\,.
\end{equation}
For the parameters of Figure~\ref{fig:DP-m},
$\psi_\mathrm{d,max} \approx 84.5^\circ$, which can be compared with the actual
maximum value ($\simeq 72^\circ$) attained over the course of the $10$-Myr
evolution depicted in this figure.\footnote{
The intrinsic spin--orbit angle is not directly measurable, so its value must be
inferred from that of the apparent (projected) misalignment angle $\lambda$
\citep{FabryckyWinn09}.
In the special case of a planet whose orbital plane contains the line of
sight---an excellent approximation for planets observed by the transit
method---the apparent obliquity cannot exceed the associated intrinsic
misalignment angle (i.e., $\lambda \le \theta_\mathrm{sd}$).}
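For reference, the quoted value follows directly from
Equation~(\ref{eq:psi_max}) when the angular momenta implied by
Equations~(\ref{eq:P}) and~(\ref{eq:D}) for the parameters of
model~\texttt{DP-m} are inserted; a short Python check is given below:
\begin{verbatim}
import numpy as np

D0 = 0.2 * 1.32e51   # inner-disk L for M_d0 = 0.002 Msun [erg s]
P  = 1.89e50         # planetary orbital angular momentum [erg s]
c  = np.cos(np.deg2rad(60.0))

cos_arg = (D0 + P * c) / np.sqrt(D0**2 + P**2 + 2 * D0 * P * c)
psi_d_max = np.degrees(np.arccos(cos_arg)) + 60.0
print(f"psi_d,max ~ {psi_d_max:.1f} deg")   # ~84.5 deg
\end{verbatim}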
Although the behavior of the full system (which includes also the outer disk and
the star) is more complicated, we found (see Figures~\ref{fig:all-M}
and~\ref{fig:all-m}) that, if the outer disk also loses mass, the maximum value
attained by $\theta_{\rm sd}$ ($\simeq 67^\circ$) is not much smaller than in
the simplified model.
Note that in the original primordial-misalignment scenario the maximum value of
$\theta_\mathrm{sd}$ ($\simeq 2\,\psi_\mathrm{p0}$) would have been considerably
higher ($\simeq 120^\circ$) for the parameters employed in our example.
However, as indicated by Equation~(\ref{eq:psi_max}), the maximum value
predicted by our model depends on the ratio $P/D_0$ and can in principle exceed
the binary-companion limit if $D_0$ is small and $P$ is sufficiently
large.\footnote{
$D_0$, the magnitude of the initial angular momentum of the inner disk, cannot
be much smaller than the value adopted in models~\texttt{DP-m}
and~\texttt{all-m} in view of the minimum value of $M_\mathrm{d0}$ that is
needed to account for the observed misaligned planets in the
primordial-disk-misalignment scenario (and also for the no-longer-present HJ in
the SHJ picture).}
Repeating the calculations shown in Figure~\ref{fig:all-m} for higher values of
$M_\mathrm{p}$, we found that the maximum value of $\theta_\mathrm{sd}$ is
$\sim$$89^\circ$, $104^\circ$ and~$125^\circ$ when $M_\mathrm{p}/M_\mathrm{J}$
increases from~1 to~2, 3, and~4, respectively.
These results further demonstrate that the disk can be tilted to a retrograde
configuration even when $\psi_\mathrm{p0} < 90^\circ$ if the planet is
sufficiently massive, although a retrograde disk orientation can also be
attained (including in the case of $M_\mathrm{p} \lesssim M_\mathrm{J}$) if the
planet's orbit is initially retrograde (see Figure~\ref{fig:retrograde}).
A low initial value of the disk angular momentum $D$ arises naturally in the
leading scenarios for placing planets in inclined orbits, which favor
comparatively low disk masses (see Section~\ref{sec:intro}).
The distributions of $\psi_\mathrm{p0}$ and of the occurrence rate,
mass, and orbital radius of planets on inclined orbits are required for
determining the predicted distribution of primordial inner-disk misalignment
angles in this scenario, for comparison with observations.\footnote{
\citet{MatsakosKonigl15} were able to reproduce the observed obliquity
distributions of HJs around G and F stars within the framework of the SHJ model
under the assumption that the intrinsic spin--orbit angle has a random
distribution (corresponding to a flat distribution of $\lambda$; see
\citealt{FabryckyWinn09}).}
However, this information, as well as data on the relevant values of
$M_\mathrm{d0}$, is not yet available, so our results for $\theta_\mathrm{sd}$
are only a first step (a proof of concept) toward validating this interpretation
of the measured planet obliquities.
Our proposed misalignment mechanism is most effective when the disk mass within
the planetary orbit drops to $\sim$$M_\mathrm{p}$.
In the example demonstrating this fact (Figure~\ref{fig:all-m}),
$M_\mathrm{d0} \approx 2\,M_\mathrm{J}$.
In the primordial disk misalignment scenario, $M_\mathrm{d0}$ includes the mass
that would eventually be detected in the form of an HJ (or a lower-mass planet)
moving around the central star on a misaligned orbit.
Furthermore, if the ingestion of an HJ on a misaligned orbit is as ubiquitous as
inferred in the SHJ picture, that mass, too, must be included in the tally.
These requirements are consistent with the fact that the typical disk
misalignment time in our model (a few Myr) is comparable to the expected
giant-planet formation time, but this similarity also raises the question of
whether the torque exerted by the initially misaligned planet has the same
effect on the gaseous inner disk and on a giant planet embedded within it.
This question was considered by several authors in the context of a binary
companion \citep[e.g.,][]{Xiang-GruessPapaloizou14, PicognaMarzari15,
Martin+16}.
A useful gauge of the outcome of this dynamical interaction is the ratio of the
precession frequency induced in the embedded planet (which we label
$\Omega_\mathrm{pp}$) to $\Omega_\mathrm{dp}$ \citep{PicognaMarzari15}.
We derive an expression for $\Omega_\mathrm{pp}$ by approximating the inclined
and embedded planets as two rings with radii $a$ and $a_1 < a$, respectively
(see Appendix~\ref{app:torques}), and evaluate $\Omega_\mathrm{dp}$ under the
assumption that the disk mass has been sufficiently depleted for the planetary
contribution ($P$) to dominate $J_\mathrm{dp}$.
This leads to
$\Omega_\mathrm{pp}/\Omega_\mathrm{dp} \simeq 2\,(a_1/r_\mathrm{d,out})^{3/2}$,
which is the same as the estimate obtained by \citet{PicognaMarzari15} for a
binary system.
In the latter case, this ratio is small ($\lesssim 0.1$) for typical parameters,
implying that the embedded planet cannot keep up with the disk precession and
hence that its orbit develops a significant tilt with respect to the disk's
plane.
However, when the companion is a planet, the above ratio equals $(a_1/a)^{3/2}$
and may be considerably larger ($\lesssim 1$), which suggests that the embedded
planet can remain coupled to the disk in this case.
A key prediction of our proposed scenario---which distinguishes it from the
original \citet{Batygin12} proposal---is that there would in general be a
difference in the obliquity properties of ``nearby'' and ``distant'' planets,
corresponding to the different orientations attained, respectively, by the inner
and outer disks.
This prediction is qualitatively consistent with the finding of \citet{LiWinn16}
that the good spin--orbit alignment inferred in cool stars from an analysis of
rotational photometric modulations in \textit{Kepler} sources \citep{Mazeh+15}
becomes weaker (with the inferred orientations possibly tending toward a nearly
random distribution) at large orbital periods
($P_\mathrm{orb} \gtrsim 10^2\,$days).
The interpretation of these results in our picture is that the outer planets
remain aligned with the original stellar-spin direction, whereas the inner
planets---and, according to the SHJ model, also the stellar spin in $\sim$50\%
of sources---assume the orientation of the misaligned inner disk (which samples
a broad range of angles with respect to the initial spin direction).
Further observations and analysis are required to corroborate and refine these
findings so that they can be used to place tighter constraints on the models.
The result reported by \citet{LiWinn16} is seemingly at odds with another set of
observational findings---the discovery that the orbital planes of debris disks
(on scales $\gtrsim 10^2\,$au) are by and large well aligned with the spin axis
of the central star \citep{Watson+11, Greaves+14}.
This inferred alignment also seemingly rules out any interpretation of the
obliquity properties of exoplanets (including the SHJ model) that appeals to a
tidal realignment of the host star by a misaligned HJ.
These apparent difficulties can, however, be alleviated in the context of the
SHJ scenario and our present model.
Specifically, in the SHJ picture the realignment of the host star occurs on a
relatively long timescale ($\lesssim 1\,$Gyr; see \citealt{MatsakosKonigl15}).
This is much longer than the lifetime ($\sim$1--10\,Myr) of the gaseous disk
that gives rise to both the misaligned ``nearby'' planets and the debris disk
(which, in the scenario considered in this paper, are associated with the inner
and outer parts of the disk, respectively).
The inferred alignment properties of debris disks can be understood in this
picture if these disks are not much older than $\sim$1\,Gyr, so that the stellar
spin axis still points roughly along its original direction (which coincides
with the symmetry axis of the outer disk).
We searched the literature for age estimates of the 11 uniformly observed debris
disks tabulated in \citet{Greaves+14} and found that only two (10~CVn and
61~Vir) are definitely much older than $1$\,Gyr.
Now, \citet{MatsakosKonigl15} estimated that $\sim$50\% of systems ingest an SHJ
and should exhibit spin--orbit alignment to within $20^\circ$, with the rest
remaining misaligned.
Thus, the probability of observing an aligned debris disk in an older system is
$\sim 1/2$, implying that the chance of detecting 2 out of 2 such systems is
$\sim 1/4$.
It is, however, worth noting that the two aforementioned systems may not
actually be well aligned: based on the formal measurement uncertainties quoted
in \citet{Greaves+14}, the misalignment angle could be as large as $36^\circ$ in
10~CVn and $31^\circ$ in 61~Vir.
Further measurements that target old systems might be able to test the proposed
explanation, although one should bear in mind that additional factors may affect
the observational findings.
For example, in the tidal-downsizing scenario of planet formation, debris disks
are less likely to exist around stars that host giant planets \citep[see][]
{FletcherNayakshin16}.
\section{Conclusion}
\label{sec:conclusion}
In this paper we conduct a proof-of-concept study of a variant of the primordial
disk misalignment model of \citet{Batygin12}.
In that model, a binary companion with an orbital radius of a few hundred au
exerts a gravitational torque on a protoplanetary disk that causes its plane to
precess and leads to a large-amplitude oscillation of the spin--orbit angle
$\theta_\mathrm{sd}$ (the angle between the angular momentum vectors of the disk
and the central star).
Motivated by recent observations, we explore an alternative model in which the
role of the distant binary is taken by a giant planet with an orbital radius of
just a few au.
Such a companion likely resided originally in the disk, and its orbit most
probably became inclined away from the disk's plane through a gravitational
interaction with other planets (involving either scattering or resonant
excitation).
Our model setup is guided by indications from numerical simulations
\citep{Xiang-GruessPapaloizou13} that, in the presence of the misaligned planet,
the disk separates at the planet's orbital radius into inner and outer parts
that exhibit distinct dynamical behaviors even as each can still be well
approximated as a rigid body.
We integrate the secular dynamical evolution equations in the quadrupole
approximation for a system consisting of the inclined planet, the two disk
parts, and the spinning star, with the disk assumed to undergo continuous mass
depletion.
We show that this model can give rise to a broad range of values for the angle
between the angular momentum vectors of the inner disk and the star (including
values of $\theta_\mathrm{sd}$ in excess of $90^\circ$), but that the
orientation of the outer disk remains virtually unchanged.
We demonstrate that the misalignment is induced by the torque that the planet
exerts on the inner disk and that it is suppressed when the mass depletion time
in the outer disk is much longer than in the inner disk, so that the outer disk
remains comparatively massive and the fast precession that it induces in the
motions of the inner disk and the planet effectively breaks the dynamical
coupling between the latter two.
Our calculations reveal that the largest misalignments are attained when the
initial disk mass is low (on the order of that of observed systems at the onset
of the transition-disk phase).
We argued that, when the misalignment angle is large, the inner and outer parts
of the disk become fully detached and damping of the planet's orbital
inclination by dynamical friction effectively ceases.
This suggests a consistent primordial misalignment scenario: the inner region of
a protoplanetary disk can be strongly misaligned by a giant planet on a
high-inclination orbit if the disk's mass is low (i.e., late in the disk's
evolution); in turn, the planet's orbital inclination is least susceptible to
damping in a disk that undergoes a strong misalignment.
We find that, in addition to the precession-related oscillations seen in the
binary-companion model, the spin--orbit angle also exhibits a secular growth in
the planetary-companion case, corresponding to a monotonic increase in the angle
between the inner disk's and the total (inner disk plus planet) angular momentum
vectors (accompanied by a monotonic decrease in the angle between the planet's
and the total angular momentum vectors).
This behavior arises when the magnitude of the inner disk's angular momentum is
initially comparable to that of the planet but drops below it as a result of
mass depletion (on a timescale that is long in comparison with the precession
period).
This does not happen when the companion is a binary, since in that case the
companion's angular momentum far exceeds that of the inner disk at all times.
On the other hand, in the binary case the mass depletion process can drive the
system to a resonance between the disk--planet and star--disk precession
frequencies, which has the potential of significantly increasing the maximum
value of $\theta_\mathrm{sd}$ \citep[e.g.,][]{BatyginAdams13, Lai14}.
We show that this resonance is not encountered when the companion is a nearby
planet because---in contrast with the binary-companion case, in which the
disk--binary precession frequency remains constant---both of these precession
frequencies decrease with time in the planetary-companion case. However, we
also show that when the torque that the star exerts on the disk is
taken into account (and not just that exerted by the companion, as in previous
treatments), the misalignment effect of the resonance crossing in the binary
case is measurably weaker.
A key underlying assumption of the primordial disk-misalignment model is that
the planets embedded in the disk remain confined to its plane as the disk's
orientation shifts, so that their orbits become misaligned to the same extent as
that of the gaseous disk.
However, the precession frequency that a binary companion induces in the disk
can be significantly higher than the one induced by its direct interaction with
an embedded planet, which would lead to the planet's orbital plane separating
from that of the disk: this argument was used to critique the original version
of the primordial misalignment model \citep[e.g.,][]{PicognaMarzari15}.
However, this potential difficulty is mitigated in the planetary-companion
scenario, where the ratio of these two frequencies is typically substantially
smaller.
The apparent difference in the obliquity properties of HJs around cool and hot
stars can be attributed to the tidal realignment of a cool host star by an
initially misaligned HJ \citep[e.g.,][]{Albrecht+12}.
The finding \citep{Mazeh+15} that this dichotomy is exhibited also by lower-mass
planets and extends to orbital distances where tidal interactions with the star
are very weak motivated the SHJ proposal \citep{MatsakosKonigl15}, which
postulates that $\sim$50\% of systems contain an HJ that arrives through
migration in the protoplanetary disk and becomes stranded near its inner edge
for a period of $\lesssim 1$\,Gyr---during which time the central star continues
to lose angular momentum by magnetic braking---until the tidal interaction with
the star finally causes it to be ingested (resulting in the transfer of the
planet's orbital angular momentum to the star and in the realignment of the
stellar spin in the case of cool stars).
This picture fits naturally with the primordial misalignment model discussed in
this paper.
In this broader scenario, the alignment properties of currently observed planets
(which do not include SHJs) can be explained if these planets largely remain
confined to the plane of their primordial parent disk.
In the case of cool stars the planets exhibit strong alignment on account of the
realignment action of a predecessor SHJ, whereas in the case of hot stars they
exhibit a broad range of spin--orbit angles, reflecting the primordial range of
disk misalignment angles that was preserved on account of the ineffectiveness of
the tidal realignment process in these stars.
A distinguishing prediction of the planetary-companion variant of the primordial
misalignment model in the context of this scenario arises from the expected
difference in the alignment properties of the inner and outer disks, which
implies that the good alignment exhibited by planets around cool stars should
give way to a broad range of apparent spin--orbit angles above a certain orbital
period.
There is already an observational indication of this trend \citep{LiWinn16}, but
additional data are needed to firm it up.
A complementary prediction, which is potentially also testable, is that the
range of obliquities exhibited by planets around hot stars would narrow toward
$\lambda=0^\circ$ at large orbital periods.
This scenario may also provide an explanation for another puzzling observational
finding---that large-scale debris disks are by and large well aligned with the
spin vector of the central star---which, on the face of it, seems inconsistent
with the spin-realignment hypothesis.
In this interpretation, debris disks are associated with the outer parts of
protoplanetary disks and should therefore remain aligned with the central
star---as a general rule for hot stars, but also in the case of cool hosts that
harbor a stranded HJ if they are observed before the SHJ realigns the star.
This explanation is consistent with the fact that the great majority of observed
debris disks have inferred ages $\ll 1$\,Gyr, but the extent to which it
addresses the above finding can be tested through its prediction that a
sufficiently large sample of older systems should also contain misaligned disks.
\acknowledgements
We are grateful to Dan Fabrycky, Tsevi Mazeh, and Sean Mills for fruitful
discussions.
We also thank Gongjie Li and Josh Winn for helpful correspondence, and the
referee for useful comments.
This work was supported in part by NASA ATP grant NNX13AH56G and has made
use of NASA's Astrophysics Data System Bibliographic Services and of
\texttt{matplotlib}, an open-source plotting library for Python
\citep{Hunter07}.
\bibliographystyle{apj}
\section{Introduction}
Neural networks and especially deep learning architectures have become more and more popular recently \cite{lecun2015deep}. We believe that deep neural networks are the most powerful tools in a majority of classification problems (as in the case of image classification \cite{resnet}). Unfortunately, the use of neural networks in regression tasks is limited, and it has recently been shown that a softmax distribution of clustered values tends to work better, even when the target is continuous \cite{wavenet}. In some cases seemingly continuous values may be understood as categorical ones (e.g., image pixel intensities) and the transformation between the types is straightforward \cite{Oord16}. However, sometimes this transformation cannot be simply incorporated (as in the case when targets span a huge set of possible values). Furthermore, forcing a neural network to predict multiple targets instead of just a single one makes the evaluation slower.
We want to present a method which fulfils the following requirements:
\begin{itemize}
\item benefits from a categorical distribution, which makes predictions more accurate,
\item outputs a single value which is a solution to a given regression task,
\item may be evaluated as quickly as in the case of the original regression neural network.
\end{itemize}
The method proposed, called \emph{drawering}, is based on temporarily extending a given neural network that solves a regression task. The modified neural network has properties which improve learning. Once training is done, the original neural network is used standalone. The knowledge from the extended neural network appears to be transferred, and the original neural network achieves better results on the regression task.
The method presented is general and may be used to enhance any given neural network which is trained to solve any regression task. It also affects only the learning procedure.
\section{Main idea}
\subsection{Assumptions}
The method presented may be applied to a regression task, hence we assume:
\begin{itemize}
\item the data $D$ consists of pairs $(x_i,y_i)$ where the input $x_i$ is a fixed-size real-valued vector and the target $y_i$ has a continuous value,
\item the neural network architecture $f(\cdot)$ is trained to find a relation between input $x$ and target $y$, for \mbox{$(x,y) \in D$},
\item a loss function $\mathcal{L}_f$ is used to assess the performance of $f(\cdot)$ by scoring $\sum_{(x,y)\in D} \mathcal{L}_f(f(x), y)$, the lower the better.
\end{itemize}
\subsection{Neural network modification} \label{firtsMentionOfPercentiles}
In this setup, any given neural network $f(\cdot)$ may be understood as a composition $f(\cdot) = g(h(\cdot))$, where $g(\cdot)$ is the last part of the neural network $f(\cdot)$ i.e. $g(\cdot)$ applies one matrix multiplication and optionally a non-linearity. In other words, a vector $z = h(x)$ is the value of last hidden layer for an input $x$ and the value $g(z)$ may be written as $g(z) = \sigma (Gh(x))$ for a matrix $G$ and some function $\sigma$ (one can notice that $G$ is just a vector). The job which is done by $g(\cdot)$ is just to squeeze all information from the last hidden layer into one value.
In simple words, the neural network $f(\cdot)$ may be divided into two parts: the first, the core $h(\cdot)$, which performs the majority of the calculations, and the second, a tiny one $g(\cdot)$, which calculates a single value, a prediction, based on the output of $h(\cdot)$.
Our main idea is to extend the neural network $f(\cdot)$. For every input $x$ the value of the last hidden layer $z = h(x)$ is duplicated and processed by two independent, parameterized functions. The first of them is $g(\cdot)$ as before and the second one is called $s(\cdot)$. The original neural network $g(h(\cdot))$ is trained to minimize the given loss function $\mathcal{L}_f$ and the neural network $s(h(\cdot))$ is trained with a new loss function $\mathcal{L}_s$.
An example of the extension described is presented in Figure \ref{drawering_example}.
For the sake of consistency the loss function $\mathcal{L}_f$ will be called $\mathcal{L}_g$.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{drawering_example}
\caption{The sample extension of the function $f(\cdot)$. The function $g(\cdot)$ always squeezes the last hidden layer into one value. On the other hand the function $s(\cdot)$ may have hidden layers, but the simplest architecture is presented.}
\label{drawering_example}
\end{figure}
Note that the functions $g(h(\cdot))$ and $s(h(\cdot))$ share parameters, because they are compositions having the same inner function $h(\cdot)$. Since the parameters of $h(\cdot)$ are shared, learning $s(h(\cdot))$ influences $g(h(\cdot))$ (and the other way around). We want to train all these functions jointly, which may be hard in general, but the function $s(\cdot)$ and the loss function $\mathcal{L}_s$ are constructed in the special way presented below.
All real values are clustered into $n$ consecutive intervals i.e. disjoint sets $e_1, e_2, ..., e_n$ such that
\begin{itemize}
\item $\cup_{i=1}^n e_i$ covers all real numbers,
\item $r_j < r_k$ for $r_j \in e_j, r_k \in e_k$, when $j < k$.
\end{itemize}
The function $s(h(\cdot))$ (evaluated for an input $x$) is trained to predict which of the sets $(e_i)_{i=1}^n$ contains $y$ for a pair $(x,y) \in D$. The loss function $\mathcal{L}_s$ may be defined as a multi-class cross-entropy loss, which is typically used in classification problems. In the simplest form the function $s(\cdot)$ may be just a multiplication by a matrix $S$ (whose first dimension is $n$).
To sum up, \emph{drawering} in its basic form is to add an additional, parallel layer which takes as input the value of the last hidden layer of the original neural network $f(\cdot)$. The modified (\emph{drawered}) neural network is trained to predict not only the original target, but also an additional one which describes the order of magnitude of the original target. As a result, the extended neural network simultaneously solves the regression task and a related classification problem.
One possibility to define sets $e_i$, called \emph{drawers}, is to take suitable percentiles of target values to make each $e_i$ contain roughly the same number of them.
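As a minimal PyTorch-style sketch (the module and variable names are placeholders we introduce, not code from the original experiments), the extension can be implemented as a thin wrapper that shares the core $h(\cdot)$ between the original head $g(\cdot)$ and the new head $s(\cdot)$; after training, only \texttt{h} and \texttt{g} are kept.
\begin{verbatim}
import torch.nn as nn

class Drawered(nn.Module):
    """Wrap an existing regression network f = g(h(.)) with an extra
    classification head s(.) over n_drawers intervals."""
    def __init__(self, h, g, hidden_dim, n_drawers):
        super().__init__()
        self.h = h                                  # shared core
        self.g = g                                  # original regression head
        self.s = nn.Linear(hidden_dim, n_drawers)   # simplest choice for s(.)

    def forward(self, x):
        z = self.h(x)
        return self.g(z), self.s(z)   # regression output, drawer scores
\end{verbatim}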
\subsection{Training functions $g(h(\cdot))$ and $s(h(\cdot))$ jointly} \label{howToTrainFunctions}
Training is done using gradient descent, hence it is sufficient to obtain gradients of all functions defined i.e. $h(\cdot)$, $g(\cdot)$ and $s(\cdot)$. For a given pair $(x,y) \in D$ the forward pass for $g(h(x))$ and $s(h(x))$ is calculated (note that a majority of calculations is shared). Afterwards two backpropagations are processed.
The backpropagation for the composition $g(h(x))$ using loss function $\mathcal{L}_g$ returns a vector which is a concatenation of two vectors $grad_g$ and $grad_{h,g}$, such that $grad_g$ is the gradient of function $g(\cdot)$ at the point $h(x)$ and $grad_{h,g}$ is the gradient of function $h(\cdot)$ at the point $x$. Similarly, the backpropagation for $s(h(x))$ using loss function $\mathcal{L}_s$ gives two gradients $grad_s$ and $grad_{h,s}$ for functions $s(\cdot)$ and $h(\cdot)$, respectively.
The computed gradients of $g(\cdot)$ and $s(\cdot)$ parameters (i.e. $grad_g$ and $grad_s$) can be applied as in the normal case -- each one of those functions takes part in only one of the backpropagations.
Updating the parameters belonging to the $h(\cdot)$ part is more complex, because we obtain two different gradients $grad_{h,g}$ and $grad_{h,s}$. It is worth noting that the $h(\cdot)$ parameters are the only common parameters of the compositions $g(h(x))$ and $s(h(x))$. We want to take an average of the gradients $grad_{h,g}$ and $grad_{h,s}$ and apply it to update the $h(\cdot)$ parameters. Unfortunately, their orders of magnitude may be different, so taking an unweighted average may result in minimizing only one of the loss functions $\mathcal{L}_g$ or $\mathcal{L}_s$. To address this problem, the averages $a_g$ and $a_s$ of the absolute values of both gradients are calculated.
Formally, the norm $L^1$ is used to define:
\begin{equation}
a_g = \left\lVert grad_{h,g} \right\rVert_1,
\end{equation}
\begin{equation*}
a_s = \left\lVert grad_{h,s} \right\rVert_1.
\end{equation*}
The values $a_g$ and $a_s$ approximately describe the impacts of the loss functions $\mathcal{L}_g$ and $\mathcal{L}_s$, respectively. The final vector $grad_h$, which will be used as the gradient of the $h(\cdot)$ parameters in the gradient descent procedure, equals:
\begin{equation}
grad_h = \alpha grad_{h,g} + (1 - \alpha) \frac{a_g}{a_s} grad_{h,s}
\end{equation}
for a hyperparameter $\alpha \in (0,1)$, typically $\alpha = 0.5$. This strategy makes the updates of the $h(\cdot)$ parameters be of the same order of magnitude as in the process of learning the original neural network $f(\cdot)$ (without \emph{drawering}).
One can also normalize the gradient $grad_{h,g}$ instead of the gradient $grad_{h,s}$, but it may need more adjustments in the hyperparameters of the learning procedure (e.g. learning rate alteration may be required).
Note that for $\alpha = 1$ the learning procedure will be identical as in the original case where the function $f$ is trained using loss function $\mathcal{L}_g$ only.
It is useful to bear in mind that both backpropagations also share a lot of calculations. In the extreme case when the ratio $\frac{a_g}{a_s}$ is known in advance, one backpropagation may be performed simultaneously for the loss function $\mathcal{L}_g$ and the weighted loss function $\mathcal{L}_s$. We noticed that the required ratio is roughly constant across batch iterations, therefore it may be calculated in the initial phase of learning and afterwards checked and updated from time to time.
\textit{In this section we slightly abused the notation -- a value of gradient at a given point is called just a gradient since it is obvious what point is considered.}
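A possible PyTorch-style sketch of this balancing step is given below; it assumes that \texttt{loss\_g} and \texttt{loss\_s} were computed from a single shared forward pass and that \texttt{h\_params} lists the parameters of $h(\cdot)$ (all names are ours). The two calls to \texttt{torch.autograd.grad} correspond to the two backpropagations described above; the gradients of the head parameters are obtained in the same way.
\begin{verbatim}
import torch

def combined_h_gradients(loss_g, loss_s, h_params, alpha=0.5):
    # grad_{h,g} and grad_{h,s}: gradients of the two losses w.r.t. h(.)
    grads_g = torch.autograd.grad(loss_g, h_params, retain_graph=True)
    grads_s = torch.autograd.grad(loss_s, h_params, retain_graph=True)
    a_g = sum(g.abs().sum() for g in grads_g)   # ||grad_{h,g}||_1
    a_s = sum(g.abs().sum() for g in grads_s)   # ||grad_{h,s}||_1
    scale = a_g / (a_s + 1e-12)                 # rescale the classification term
    # grad_h = alpha * grad_{h,g} + (1 - alpha) * (a_g / a_s) * grad_{h,s}
    return [alpha * gg + (1.0 - alpha) * scale * gs
            for gg, gs in zip(grads_g, grads_s)]
\end{verbatim}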
\subsection{Defining \emph{drawers}}
\subsubsection{Regular and uneven}\label{evenUneven}
We mentioned in subsection \ref{firtsMentionOfPercentiles} that the simplest way of defining \emph{drawers} is to take intervals whose endpoints are suitable percentiles that distribute the target values uniformly. In this case $n$ \emph{regular drawers} are defined in the following way:
\begin{equation}
e_i = (q_{i-1,n}, q_{i,n}]
\end{equation}
where $q_{i,n}$ is the $\frac{i}{n}$-quantile of the targets $y$ from the training set (the values $q_{0,n}$ and $q_{n,n}$ are defined as \emph{minus infinity} and \emph{plus infinity}, respectively).
This way of defining \emph{drawers} makes each interval $e_i$ contain approximately the same number of target values.
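For concreteness, a short sketch of this construction is given below (the helper names are ours; \texttt{np.quantile} with its default interpolation is one of several reasonable ways to estimate the boundaries).
\begin{verbatim}
import numpy as np

def regular_drawer_edges(train_targets, n):
    """Boundaries q_{1,n}, ..., q_{n-1,n}: the i/n-quantiles of the
    training targets, padded with -inf and +inf."""
    qs = np.quantile(train_targets, [i / n for i in range(1, n)])
    return np.concatenate(([-np.inf], qs, [np.inf]))

def drawer_index(y, edges):
    """Index i (1-based) such that y lies in e_i = (edges[i-1], edges[i]]."""
    return int(np.searchsorted(edges, y, side="left"))
\end{verbatim}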
However, we noticed that an alternative way of defining the $e_i$'s, which tends to support the classical mean square error (MSE) loss better, may be proposed. The MSE loss penalizes more when the difference between the given target and the prediction is larger. To address this, \emph{drawers} may be defined in a way which encourages the learning procedure to focus on extreme values: \emph{drawers} should group the middle values in bigger clusters while placing extreme values in smaller ones. The definition of $2n$ \emph{uneven drawers} is as follows:
\begin{equation}
e_i = (q_{1,2^{n-i+2}}, q_{2,2^{n-i+2}}], \text{ for } i \leq n,
\end{equation}
\begin{equation*}
e_i = (q_{2^{i-n+1}-2,2^{i-n+1}}, q_{2^{i-n+1}-1,2^{i-n+1}}], \text{ for } i>n.
\end{equation*}
In this case every \emph{drawer} $e_{i+1}$ contains approximately twice as many target values as the \emph{drawer} $e_i$ for $i<n$. Finally, both $e_n$ and $e_{n+1}$ contain at most $25\%$ of all target values. Symmetrically to the ascending intervals in the first half, the $e_i$ are descending for $i>n$, i.e. they contain fewer and fewer target values.
The number of \emph{drawers} $n$ is a hyperparameter. The bigger $n$, the more complex distributions may be modeled. On the other hand, each \emph{drawer} has to contain enough representatives among the targets from the training set. In our experiments each \emph{drawer} contained at least 500 target values.
\subsubsection{Disjoint and nested} \label{disjointNested}
We observed that sometimes it may be better to train $s(h(\cdot))$ to predict whether the target is in a set $f_j$, where $f_j = \cup_{i=j}^n e_i$. In this case $s(h(\cdot))$ has to answer a simpler question: \textit{"Is the target higher than a given value?"} instead of bounding the target value from both sides. Of course, in this case $s(h(x))$ no longer solves a single classification problem with mutually exclusive classes, but every component of $s(h(x))$ may be assessed independently by a binary cross-entropy loss.\\
Therefore, \emph{drawers} may be:
\begin{itemize}
\item \emph{regular} or \emph{uneven},
\item \emph{nested} or \emph{disjoint}.
\end{itemize}
These divisions are orthogonal. In all experiments described in this paper (Section \ref{experiments}) \emph{uneven drawers} were used; the target encodings of the two variants are sketched below.
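The two variants only change how the classification target is encoded (a sketch, reusing \texttt{edges} as in the previous snippet): \emph{disjoint drawers} yield a one-hot vector scored with a multi-class cross-entropy, while \emph{nested drawers} yield a multi-hot vector whose $j$-th component asks whether the target exceeds the $j$-th boundary, scored with independent binary cross-entropies.
\begin{verbatim}
import numpy as np

def disjoint_target(y, edges):
    """One-hot vector over the disjoint drawers e_1, ..., e_n."""
    n = len(edges) - 1
    k = int(np.searchsorted(edges, y, side="left"))  # drawer index, 1-based
    t = np.zeros(n)
    t[k - 1] = 1.0
    return t

def nested_target(y, edges):
    """Multi-hot vector over f_j = e_j u ... u e_n: component j is 1
    iff y falls into drawer j or a higher one."""
    n = len(edges) - 1
    k = int(np.searchsorted(edges, y, side="left"))  # drawer index, 1-based
    return (np.arange(1, n + 1) <= k).astype(float)
\end{verbatim}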
\section{Logic behind our idea}
We believe that \emph{drawering} improves learning by providing the following properties.
\begin{itemize}
\item The extension $s(\cdot)$ gives additional expressive power to a given neural network. It is used to predict the additional target, but since this target is closely related to the original one, we believe that the gained knowledge is transferred to the core of the given neural network $h(\cdot)$.
\item Since categorical distributions do not assume a particular shape, they can model arbitrary distributions -- they are more flexible.
\item We argue that classification loss functions provide better behaved gradients than regression ones. As a result, the evolution of a classification neural network is smoother during learning.
\item An additional target (even a closely related one) works as a regularizer, as is typical in multitask learning \cite{thrun1996learning}.
\end{itemize}
\section{Model comparison}
The effectiveness of the presented method was established by comparison. The original and the \emph{drawered} neural network were trained on the same dataset, and once the trainings were completed, the performance of both neural networks on a given test set was measured. Since \emph{drawering} affects just the learning procedure, the comparison is fair.
All learning procedures depend on random initialization, hence to obtain reliable results several learning procedures were performed in both setups. Adam \cite{adam} was chosen for stochastic optimization.
The comparison was done on two datasets described in the following section. The results are described in Section \ref{experiments}.
\section{Data}
The presented method was tested on two datasets.
\subsection{Rossmann Store Sales}
The first dataset is public and was used during the \emph{Rossmann Store Sales} competition on the well-known platform \emph{kaggle.com}. The official description starts as follows:
\begin{quote}
Rossmann operates over 3,000 drug stores in 7 European countries. Currently, Rossmann store managers are tasked with predicting their daily sales for up to six weeks in advance. Store sales are influenced by many factors, including promotions, competition, school and state holidays, seasonality, and locality. With thousands of individual managers predicting sales based on their unique circumstances, the accuracy of results can be quite varied.
\end{quote}
The dataset contains mainly categorical features like information about state holidays, an indicator whether a store is running a promotion on a given day etc.
Since we needed ground truth labels, only the train part of the dataset was used (in \emph{kaggle.com} notation). We split this data into a new training set, a validation set and a test set by time. The training set ($648k$ records) consists of all observations before the year 2015. The validation set ($112k$ records) contains all observations from January, February, March and April 2015. Finally, the test set ($84k$ records) covers the remaining observations from 2015.
In our version of this task the target $y$ is the normalized logarithm of the turnover for a given day. The logarithm was used since the turnovers are exponentially distributed. An input $x$ consists of all information provided in the original dataset except for the \emph{Promo2}-related information. A day and a month were extracted from a given date (the year was ignored).
The biggest challenge linked with this dataset is not to overfit the trained model, because the dataset is relatively small and encoding layers have to be used to cope with the categorical variables. Differences between scores on the train, validation and test sets were significant and seemed to grow during learning. We believe that \emph{drawering} prevents overfitting -- it works as a regularizer in this case.
\subsection{Conversion value task}
This private dataset depicts the conversion value task, i.e. a regression problem where one wants to predict the value of the next item bought for a given customer who clicked a displayed ad.
The dataset describes states of customers at the time of impressions. The state (input $x$) is a vector of mainly continuous features like the price of the last item seen, the value of the previous purchase, the number of items in the basket etc. The target $y$ is the price of the next item bought by the given user. The price is always positive since only users who clicked an ad and converted are incorporated into the dataset.
The dataset was split into a training set ($2.1$ million records) and a validation set ($0.9$ million observations). Initially there was also a test set extracted from the validation set, but it turned out that the scores on the validation and test sets are almost identical.
We believe that the biggest challenge while working on the conversion value task is to tame gradients which vary a lot. That is to say, for two pairs $(x_1, y_1)$ and $(x_2, y_2)$ from the dataset, the inputs $x_1$ and $x_2$ may be close to each other or even identical, but the targets $y_1$ and $y_2$ may not even have the same order of magnitude. As a result, gradients may remain relatively high even during the last phase of learning and the model may tend to predict the last encountered target ($y_1$ or $y_2$) instead of predicting an average of them. We argue that \emph{drawering} helps to find general patterns by providing better behaved gradients.
\section{Experiments}\label{experiments}
In this section the results of the comparisons described in the previous section are presented.
\subsection{Rossmann Store Sales}
In this case the original neural network $f(\cdot)$ takes an input which is produced from 14 values -- 12 categorical and 2 continuous ones. Each categorical value is encoded into a vector of size $\min(k,10)$, where $k$ is the number of all possible values of the given categorical variable. The minimum is applied to avoid redundancy. Both continuous features are normalized. The concatenation of all encoded features and the two continuous variables produces the input vector $x$ of size 75.
The neural network $f(\cdot)$ has a sequential form and is defined as follows:
\begin{itemize}
\item an input is processed by $h(\cdot)$ which is as follows:
\begin{itemize}
\item $Linear(75, 64)$,
\item $ReLU$,
\item $Linear(64, 128)$,
\item $ReLU$,
\end{itemize}
\item afterwards an output of $h(\cdot)$ is fed to a simple function $g(\cdot)$ which is just a $Linear(128, 1)$.
\end{itemize}
The \emph{drawered} neural network with incorporated $s(\cdot)$ is as follows:
\begin{itemize}
\item as in the original $f(\cdot)$, the same $h(\cdot)$ processes an input,
\item an output of $h(\cdot)$ is duplicated and processed independently by $g(\cdot)$ which is the same as in the original $f(\cdot)$ and $s(\cdot)$ which is as follows:
\begin{itemize}
\item $Linear(128, 1024)$,
\item $ReLU$,
\item $Dropout(0.5)$,
\item $Linear(1024, 19)$,
\item $Sigmoid$.
\end{itemize}
\end{itemize}
\emph{The Torch notation is used here:
\begin{itemize}
\item $Linear(a, b)$ is a linear transformation -- vector of size $a$ into vector of size $b$,
\item $ReLU$ is the rectifier function applied pointwise,
\item $Sigmoid$ is the sigmoid function applied pointwise,
\item $Dropout$ is a dropout layer \cite{srivastava2014dropout}.
\end{itemize}
}
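Transcribed into PyTorch (as a sketch; the upstream embedding of the categorical variables into the 75-dimensional input is omitted, and only \texttt{h} and \texttt{g} are used at evaluation time), the drawered network compared here could look as follows.
\begin{verbatim}
import torch.nn as nn

class DraweredRossmann(nn.Module):
    def __init__(self, n_drawers=19):
        super().__init__()
        # shared core h(.)
        self.h = nn.Sequential(
            nn.Linear(75, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # original regression head g(.)
        self.g = nn.Linear(128, 1)
        # auxiliary head s(.), dropped after training
        self.s = nn.Sequential(
            nn.Linear(128, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, n_drawers), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.h(x)
        return self.g(z), self.s(z)
\end{verbatim}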
The \emph{drawered} neural network has roughly $150k$ more parameters. This extra capacity is an advantage during training, but the additional parameters are used only to calculate the new target, and the corresponding calculations may be skipped during evaluation. We believe that the patterns found to answer the additional target, which is related to the original one, were transferred to the core part $h(\cdot)$.
We used dropout only in $s(\cdot)$ since incorporating dropout into $h(\cdot)$ causes instability in learning. While working on regression tasks we noticed that this may be a general issue that should be investigated, but it is beyond the scope of this paper.
Fifty learning procedures were performed for both the original and the extended neural network. They were stopped after fifty iterations without any progress on the validation set (and at least one hundred iterations in total). The model iteration which performed best on the validation set was chosen and evaluated on the test set. The loss function used was a classic squared error loss.
The minimal error on the test set achieved by the \emph{drawered} neural network is $4.481$, which is $7.5\%$ better than the best original neural network. The difference between the averages of the Top5 scores is also around $7.5\%$ in favor of \emph{drawering}. When analyzing the average of all fifty models per method, the difference seems to be blurred. This is caused by the fact that a few learning procedures overfitted too much and achieved unsatisfying results. But even in this case the average for the \emph{drawered} neural networks is about $3.8\%$ better. All these scores with standard deviations are shown in Table \ref{rossmannScores}.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Rossmann Store Sales Scores}
\label{rossmannScores}
\centering
\begin{tabular}{c||c|c|c|c|c}
Model & Min & Top5 mean & Top5 std & All mean & All std\\
\hline
Original & $4.847$ & $4.930$ & $0.113$ & $5.437$ & $0.259$\\
Extended & $4.481$ & $4.558$ & $0.095$ & $5.232$ & $0.331$\\
\end{tabular}
\end{table}
\textit{We have to note that extending $h(\cdot)$ by an additional $150k$ parameters may result in even better performance, but it would drastically slow down evaluation. However, we noticed that simple extensions of the original neural network $f(\cdot)$ tend to overfit and did not achieve better results.}
The training errors may also be investigated. In this case the original neural network performs better, which supports our thesis that \emph{drawering} works as a regularizer. Detailed results are presented in Table \ref{rossmannScoresTrain}.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Rossmann Store Sales Scores on Training Set}
\label{rossmannScoresTrain}
\centering
\begin{tabular}{c||c|c|c|c|c}
Model & Min & Top5 mean & Top5 std & All mean & All std\\
\hline
Original & $3.484$ & $3.571$ & $0.059$ & $3.494$ & $0.009$\\
Extended & $3.555$ & $3.655$ & $0.049$ & $3.561$ & $0.012$\\
\end{tabular}
\end{table}
\subsection{Conversion value task}
This dataset provides detailed user descriptions which consist of 6 categorical features and more than 400 continuous ones. After encoding, the original neural network $f(\cdot)$ takes an input vector of size 700. The core part $h(\cdot)$ is a neural network with 3 layers that outputs a vector of size 200. The function $g(\cdot)$ and the extension $s(\cdot)$ are simple, $Linear(200,1)$ and $Linear(200, 21)$, respectively.
In the case of the conversion value task we do not provide a detailed model description since the dataset is private and this experiment cannot be reproduced. However, we decided to incorporate this comparison into the paper because two versions of \emph{drawers} were tested on this dataset (\emph{disjoint} and \emph{nested}). We also want to point out that we invented the \emph{drawering} method while working on this dataset and only afterwards decided to test the method on public data. We were unable to achieve superior results without \emph{drawering}. Therefore, we believe that the work done on this dataset (despite its privacy) should be presented.
To obtain more reliable results ten learning procedures were performed for each setup:
\begin{itemize}
\item \textit{Original} -- the original neural network $f(\cdot)$,
\item \textit{Disjoint} -- \emph{drawered} neural network for \emph{disjoint drawers},
\item \textit{Nested} -- \emph{drawered} neural network for \emph{nested drawers}.
\end{itemize}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{cv}
\caption{Sample evolutions of scores on validation set during learning procedures.}
\label{cvcurves}
\end{figure}
In Figure \ref{cvcurves} six learning curves are shown. For each of the three setups the best and the worst ones were chosen; the remaining eight curves of each setup lie between the representatives shown. The first 50 iterations were skipped to make the figure clearer. Each learning procedure was finished after 30 iterations without any progress on the validation set.
It may be easily inferred that all twenty \emph{drawered} neural networks performed significantly better than the neural networks trained without the extension. The difference between the \textit{Disjoint} and \textit{Nested} versions is also noticeable, and \textit{Disjoint} \emph{drawers} tend to perform slightly better.
In the Rossmann Store Sales case we experienced the opposite, hence the version of \emph{drawers} may be understood as a hyperparameter. We suppose that it may be related to the size of a given dataset.
\section{Analysis of $s(h(x))$ values}
Values of $s(h(x))$ may be analyzed. For a pair \mbox{$(x,y) \in D$} the $i$-th value of the vector $s(h(x))$ is the predicted probability that the target $y$ belongs to the \emph{drawer} $f_i$. In this section we assume that the \emph{drawers} are nested, hence the values of $s(h(x))$ should be descending. Notice that we do not enforce this property through the architecture of the \emph{drawered} neural network, so it is a side effect of the nested structure of the \emph{drawers}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{rss}
\caption{Sample values of $s(h(x))$ for a randomly chosen model solving Rossmann Store Sales problem (nested \emph{drawers}).}
\label{rss}
\end{figure}
In Figure \ref{rss} a few sample distributions are shown. Each accompanying label is the ground truth ($i$ such that $e_i$ contains the target value). The values of $s(h(x))$ are clearly monotonic, as expected. It seems that $s(h(x))$ performs well -- the values are close to one in the beginning and to zero in the end, and the switch happens in the right place, close to the ground truth label, missing by at most one \emph{drawer}.
\section{Conclusion}
The method presented, \emph{drawering}, extends a given regression neural network in a way that makes training more effective. The modification affects the learning procedure only, hence once the \emph{drawered} model is trained, the extension may be omitted during evaluation without any change in the prediction. It means that the modified model may be evaluated as fast as the original one, but tends to perform better.
\newpage
We believe that this improvement is possible because the \emph{drawered} neural network has greater expressive power, is provided with better behaved gradients, can model arbitrary distributions and is regularized. It turns out that the knowledge gained by the modified neural network is contained in the parameters shared with the given neural network.
Since the only cost is an increase in learning time, we believe that in cases when better performance is more important than training time, \emph{drawering} should be incorporated into a given regression neural network.
\bibliographystyle{IEEEtran}
|
\section{Introduction}
At the meeting of the American Mathematical Society in Hayward, California, in April 1977, Olga Taussky-Todd \cite{TausskyTodd} asked whether one could characterize the values of the group determinant when the entries are all integers.
For a prime $p,$ a complete description was obtained for $\mathbb Z_{p}$ and $\mathbb Z_{2p}$, the cyclic groups of order $p$ and $2p$, in \cite{Newman1} and \cite{Laquer}, and for $D_{2p}$ and $D_{4p}$ the dihedral groups of order $2p$ and $4p$ in \cite{dihedral}. The values for $Q_{4n}$, the dicyclic group of order $4n$ were explored in \cite{dicyclic}
with a near complete description for $Q_{4p}$. In general though this quickly becomes a hard problem,
with only partial results known even for $\mathbb Z_{p^2}$ once $p\geq 7$ (see \cite{Newman2} and \cite{Mike}).
The remaining groups of order less than 15 were tackled in \cite{smallgps} and $\mathbb Z_{15}$ in \cite{bishnu1}.
The integer group determinants have been determined for all five abelian groups of order 16 ($\mathbb Z_2 \times \mathbb Z_8$, $\mathbb Z_{16}$, $\mathbb Z_2^4$, $\mathbb Z_4^2$, $\mathbb Z_2^2 \times\mathbb Z_4$ in \cite{Yamaguchi1,Yamaguchi2,Yamaguchi3,Yamaguchi4,Yamaguchi5}), and for three of the non-abelian groups
($D_{16}$, $\mathbb Z_2\times D_8$, $\mathbb Z_2 \times Q_8$ in \cite{dihedral,ZnxH}).
Here we determine the group determinants for $Q_{16}$, the dicyclic or generalized quaternion group of order 16.
$$ Q_{16}=\langle X,Y \; | \; X^8=1,\; Y^2=X^4,\; XY=YX^{-1}\rangle. $$
This leaves five unresolved non-abelian groups of order 16.
\begin{theorem} The even integer group determinants for $Q_{16}$ are exactly the multiples of $2^{10}$.
The odd integer group determinants are all the integers $n\equiv 1$ mod 8 plus those $n\equiv 5$ mod 8 of the form
$n=mp^2$ where $m\equiv 5$ mod 8 and $p\equiv 7$ mod $8$ is prime.
\end{theorem}
We shall think here of the group determinant as being defined on elements of the group ring $\mathbb Z [G]$
$$ \mathcal{D}_G\left( \sum_{g\in G} a_g g \right)=\det\left( a_{gh^{-1}}\right) .$$
\begin{comment}
We observe the multiplicative property
\begin{equation} \label{mult} \mathcal{D}_G(xy)= \mathcal{D}_G(x)\mathcal{D}_G(y), \end{equation}
using that
$$ x=\sum_{g \in G} a_g g,\;\;\; y=\sum_{g \in G} b_g g \; \Rightarrow \; xy=\sum_{g\in G} \left(\sum_{hk=g}a_hb_k\right) g. $$
\end{comment}
Frobenius \cite{Frob} observed that the group determinant can be factored using the groups representations (see for example \cite{Conrad} or \cite{book})
and an explicit expression for a dicyclic group determinant was given in \cite{smallgps}. For $Q_{16}$, arranging the
16 coefficients into two polynomials of degree 7
$$ f(x)=\sum_{j=0}^7 a_j x^j,\;\; g(x)=\sum_{j=0}^7 b_jx^j, $$
and writing the primitive 8th root of unity $\omega:=e^{2\pi i/8}=\frac{\sqrt{2}}{2}(1+i)$, this becomes
\begin{equation} \label{form}\mathcal{D}_G\left( \sum_{j=0}^7 a_j X^j + \sum_{j=0}^7 b_j YX^j\right) =ABC^2D^2 \end{equation}
with integers $A,B,C,D$ from
\begin{align*}
A=& f(1)^2- g(1)^2\\
B=& f(-1)^2-g(-1)^2\\
C=& |f(i)|^2-|g(i)|^2 \\
D=& \left(|f(\omega)|^2+|g(\omega)|^2\right)\left(|f(\omega^3)|^2+|g(\omega^3)|^2\right).
\end{align*}
From \cite[Lemma 5.2]{dicyclic} we know that the even values must be multiples of $2^{10}$. The odd values must be
1 mod 4 (plainly $f(1)$ and $g(1)$ must be of opposite parity and $A\equiv B\equiv \pm 1$ mod 4 with $(CD)^2\equiv 1$ mod 4).
\section{Achieving the values $n\not \equiv 5$ mod 8}
We can achieve all the multiples of $2^{10}$.
Writing $h(x):=(x+1)(x^2+1)(x^4+1),$ we achieve the $2^{10}(-3+4m)$ from
$$
f(x) = (1-m)h(x),\quad
g(x)=1+x^2+x^3+x^4-mh(x), $$
the $2^{10}(-1+4m)$ from
$$ f(x)= 1+x+x^4+x^5-mh(x),\;\;\;\;
g(x)= 1+x-x^3-x^7-mh(x), $$
the $2^{11}(-1+2m)$ from
$$ f(x)= 1+x+x^2+x^3+x^4+x^5-mh(x),\;\;\quad
g(x)=1+x^4-mh(x), $$
and the $2^{12}m$ from
$$ f(x)= 1+x+x^4+x^5-x^6-x^7-mh(x),\;\;
g(x)= 1+x-x^3+x^4+x^5-x^7+mh(x). $$
We can achieve all the $n\equiv 1$ mod 8; the $1+16m$ from
$$ f(x)=1+mh(x),\;\; g(x)=mh(x), $$
and the $-7+16m$ from
$$f(x)= 1-x+x^2+x^3+x^7- mh(x),\;\;
g(x)= 1+x^3+x^4+x^7-mh(x). $$
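The factorization \eqref{form} and the families above are easy to check numerically. The following Python sketch (purely illustrative, not part of the proof; the function name is ours) evaluates $A,B,C,D$ from the two coefficient lists and confirms, for instance, the value $1+16m$.
\begin{verbatim}
import cmath

def group_det(a, b):
    # A * B * C^2 * D^2 of (form) for f, g with coefficient lists a, b
    f = lambda x: sum(c * x**j for j, c in enumerate(a))
    g = lambda x: sum(c * x**j for j, c in enumerate(b))
    w = cmath.exp(2j * cmath.pi / 8)          # primitive 8th root of unity
    A = f(1)**2 - g(1)**2
    B = f(-1)**2 - g(-1)**2
    C = abs(f(1j))**2 - abs(g(1j))**2
    D = ((abs(f(w))**2 + abs(g(w))**2)
         * (abs(f(w**3))**2 + abs(g(w**3))**2))
    return A * B * C**2 * D**2

m = 3
f = [m] * 8; f[0] += 1     # f(x) = 1 + m h(x), with h(x) = 1 + x + ... + x^7
g = [m] * 8                # g(x) = m h(x)
assert abs(group_det(f, g) - (1 + 16 * m)) < 1e-6   # 1 + 16m = 49
\end{verbatim}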
\section{ The form of the $n\equiv 5$ mod 8}
This leaves the $n\equiv 5$ mod 8. Since $(CD)^2\equiv 1$ mod 8 we must have $AB\equiv 5$ mod 8. Switching $f$ and $g$ as necessary we assume that $f(1),f(-1)$ are odd and $g(1),g(-1)$ even. Replacing $x$ by $-x$ if needed we can assume that $g(1)^2\equiv 4$ mod 8 and $g(-1)^2\equiv 0$ mod 8.
We write
$$ F(x)=f(x)f(x^{-1})= \sum_{j=0}^7 c_j (x+x^{-1})^j, \quad G(x)=g(x)g(x^{-1})= \sum_{j=0}^7 d_j (x+x^{-1})^j, $$
with the $c_j,d_j$ in $\mathbb Z$.
From $F(1),F(-1)\equiv 1$ mod 8 we have
$$ c_0+2c_1+4c_2 \equiv 1 \text{ mod }8, \quad c_0-2c_1+4c_2 \equiv 1 \text{ mod }8, $$
and $c_0$ is odd and $c_1$ even.
From $G(1)\equiv 4$, $G(-1)\equiv 0$ mod 8 we have
$$ d_0+2d_1+4d_2 \equiv 4 \text{ mod 8}, \quad d_0-2d_1+4d_2 \equiv 0 \text{ mod } 8, $$
and $d_0$ is even and $d_1$ is odd.
Since $\omega+\omega^{-1}=\sqrt{2}$ we get
\begin{align*} F(\omega) & = (c_0+2c_2+4c_4+\ldots ) + \sqrt{2}(c_1+2c_3+4c_5+\cdots),\\
G(\omega) & = (d_0+2d_2+4d_4+\ldots ) + \sqrt{2}(d_1+2d_3+4d_5+\cdots),
\end{align*}
and
$$|f(\omega)|^2+|g(\omega)|^2= F(\omega)+G(\omega) = X+ \sqrt{2} Y>0, \;\; \quad X, Y \text{ odd}, $$
with $ |f(\omega^3)|^2+|g(\omega^3)|^2=F(\omega^3)+G(\omega^3) = X- \sqrt{2} Y>0$. Hence the positive integer $D=X^2-2Y^2\equiv -1$ mod 8.
Notice that primes 3 and 5 mod 8 do not split in $\mathbb Z[\sqrt{2}]$ so only their squares can occur in $D$. Hence
$D$ must contain at least one prime $p\equiv 7$ mod 8, giving the claimed form of the values 5 mod 8.
\section{Achieving the specified values 5 mod 8}
Suppose that $p\equiv 7$ mod 8 and $m\equiv 5$ mod 8. We need to achieve $mp^2$.
Since $p\equiv 7$ mod 8 we know that $\left(\frac{2}{p}\right)=1$ and $p$ splits in $\mathbb Z[\sqrt{2}].$ Since $\mathbb Z[\sqrt{2}]$ is
a UFD, a generator for the prime factor gives a solution to
$$ X^2-2Y^2=p, \;\; X,Y\in \mathbb N. $$
Plainly $X,Y$ must both be odd and $X+\sqrt{2}Y$ and $X-\sqrt{2}Y$ both positive.
Since $(X+\sqrt{2}Y)(3+2\sqrt{2})=(3X+4Y)+\sqrt{2}(2X+3Y)$ there will be $X,Y$ with $X\equiv 1$ mod 4 and with
$X\equiv -1$ mod 4.
Cohn \cite{Cohn} showed that $a+b\sqrt{2}$ in $\mathbb Z[\sqrt{2}]$ is a sum of four squares in $\mathbb Z[\sqrt{2}]$ if and only if $2\mid b$. Hence we can write
$$ 2(X+\sqrt{2}Y)= \sum_{j=1}^4 (\alpha_j + \beta_j\sqrt{2})^2, \;\;\alpha_j,\beta_j\in \mathbb Z. $$
That is,
$$ 2X=\sum_{j=1}^4 \alpha_j^2+ 2\sum_{j=1}^4 \beta_j^2,\;\;\quad Y=\sum_{j=1}^4\alpha_j\beta_j.$$
Since $Y$ is odd we must have at least one pair, $\alpha_1$, $\beta_1$ say, both odd. Since $2X$ is even we must have two
or four of the $\alpha_i$ odd. Suppose that $\alpha_1$, $\alpha_2$ are odd and $\alpha_3,\alpha_4$ have the same parity.
We get
\begin{align*} X+\sqrt{2}Y & = \left( \frac{\alpha_1+\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1+\beta_2)\right)^2+ \left( \frac{\alpha_1-\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1-\beta_2)\right)^2 \\
& \quad + \left( \frac{\alpha_3+\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3+\beta_4)\right)^2+ \left( \frac{\alpha_3-\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3-\beta_4)\right)^2.
\end{align*}
Writing
$$ f(\omega)=a_0+a_1\omega+a_2\omega^2+a_3\omega^3=a_0+ \frac{\sqrt{2}}{2}(1+i)a_1+a_2i+ \frac{\sqrt{2}}{2}(-1+i)a_3,$$
we have
$$ \abs{f(\omega)}^2 =\left(a_0+ \frac{\sqrt{2}}{2}(a_1-a_3)\right)^2 + \left(a_2+ \frac{\sqrt{2}}{2}(a_1+a_3)\right)^2 $$
and can make
$$ |f(\omega)|^2+|g(\omega)|^2 = X + \sqrt{2}Y $$
with the selection of integer coefficients for $f(x)=\sum_{j=0}^3a_jx^j$ and $g(x)=\sum_{j=0}^3 b_jx^j$
\begin{align*} a_0=&\frac{1}{2}(\alpha_1-\alpha_2),\quad a_1 =\beta_1,\quad a_2=\frac{1}{2}(\alpha_1+\alpha_2), \quad a_3=\beta_2, \\
b_0=& \frac{1}{2}(\alpha_3-\alpha_4),\quad b_1 =\beta_3,\quad b_2=\frac{1}{2}(\alpha_3+\alpha_4), \quad b_3=\beta_4.
\end{align*}
These $f(x)$, $g(x)$ will then give $D=p$ in \eqref{form}.
We can also determine the parity of the coefficients.
\vskip0.1in
\noindent
{\bf Case 1}: the $\alpha_i$ are all odd.
Notice that $a_0$ and $a_2$ have opposite parity, as do $b_0$ and $b_2$. Since $Y$ is odd we must have one or three of the
$\beta_i$ odd.
If $\beta_1$ is odd and $\beta_2,\beta_3,\beta_4$ all even, then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4.
Then $a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even and $f(x)=u(x)+2k(x)$
with $u(x)=1+x$ or $x(1+x)$. Likewise $b_0,b_1,b_2,b_3$ are odd, even, even, even or even, even, odd, even
and $g(x)=v(x)+2s(x)$ with $v(x)=1$ or $x^2$. Hence if we take
\begin{equation} \label{shift} f(x)=u(x)+(1-x^4)k(x)-mh(x),\quad g(x)=v(x)+(1-x^4)s(x)-mh(x), \end{equation}
we get $A=3-16m$, $B=-1$, $C=1$, $D=p$ and we achieve $(16m-3)p^2$ in \eqref{form}.
If three $\beta_i$ are odd then $2X\equiv 10$ mod 8 and $X\equiv 1$ mod 4. We assume $\beta_1,\beta_2,\beta_3$ are
odd and $\beta_4$ even. Hence $a_0,a_1,a_2,a_3$ are either odd, odd, even, odd or even, odd, odd, odd and
$f(x)=u(x)+2k(x)$ with $u(x)=1+x+x^3$ or $x(1+x+x^2)$ and $b_0,b_1,b_2,b_3$ are odd, odd, even, even or even, odd, odd, even
and $g(x)=v(x)+2s(x)$ with $v(x)=1+x$ or $x(1+x)$. In this case \eqref{shift} gives
$A=(5-16m)$, $B=1$, $C=-1$, $D=p$ achieving $(5-16m)p^2$.
\vskip0.1in
\noindent
{\bf Case 2}: $\alpha_1$, $\alpha_2$ are odd, $\alpha_3$, $\alpha_4$ are even.
In this case $a_0$, $a_2$ will have opposite parity and $b_0$, $b_2$ the same parity.
Since $Y$ is odd we must have $\beta_1$ odd, $\beta_2$ even. Since $2X\equiv 2$ mod 4 we must have one more odd $\beta_i$, say $\beta_3$ odd and $\beta_4$ even.
If $\alpha_3\equiv \alpha_4$ mod 4 then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4. Hence
$a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$ and $b_0,b_1,b_2,b_3$ are even, odd, even, even and $v(x)=x^2$ and again \eqref{shift} gives $(16m-3)p^2$.
If $\alpha_3\not\equiv \alpha_4$ mod 4 then $2X\equiv 10$ mod 8 and $X\equiv 1$ mod 4. In this case
$a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$ and $b_0,b_1,b_2,b_3$ are odd, odd, odd, even and $v(x)=1+x+x^2$ and again \eqref{shift} gives $(5-16m)p^2$.
Hence, in either case, starting with an $X\equiv 1$ mod 4 gives the $mp^2$ with $m\equiv 5$ mod 16 and
an $X\equiv -1$ mod 4 the $mp^2$ with $m\equiv -3$ mod 16.
\section*{Acknowledgement}
\noindent
We thank Craig Spencer for directing us to Cohn's four squares theorem in $\mathbb Z[\sqrt{2}]$.
|
\section{Introduction}
The Fock space representation of the
quantum affine algebra $U_q(\widehat{sl}_n)=U_q(A^{(1)}_{n-1})$
was constructed by Hayashi \cite{H}.
A combinatorial version of this construction was then used by
Misra and Miwa \cite{MM} to describe Kashiwara's crystal basis of
the basic representation $V(\Lambda_0)$.
This made it possible to compute the global crystal basis of
$V(\Lambda_0)$ \cite{LLT}. Then, it was conjectured
that the degree $m$ parts of the transition matrices
giving the coefficients of the global basis
on the natural basis of the Fock space
were $q$-analogues of the decomposition matrices of the type $A$
Hecke algebras $H_m$ at an $n$th root of unity \cite{LLT}.
According to a conjecture
of James \cite{J}, these should coincide, for $n$ prime and
large enough,
with the decomposition matrices of symmetric groups ${\rm S}_m$ over a field
of characteristic $n$.
The conjecture of \cite{LLT} has been proved by Ariki \cite{Ar},
and by Grojnowski \cite{Gr} using the results of \cite{G}.
There is another approach to the calculation of decomposition
matrices of type $A$ Hecke algebras, relying upon Soergel's
results on tilting modules for quantum groups at roots of
unity \cite{Soe1,Soe2}.
This approach also leads to $q$-analogues of decomposition
numbers expressed in terms of Kazhdan-Lusztig polynomials.
It seems that these $q$-analogues are the same as those
of \cite{LLT} but there is no proof of this coincidence.
In fact, the relationship between the two approaches is still unclear.
The results of \cite{LLT,Ar,Gr} have been applied recently
by Foda {\it et al.} \cite{FLOTW} to determine which simple
$H_m$-modules remain simple after restriction to $H_{m-1}$
and to show that this problem is equivalent to the decomposition
of a tensor product of level 1 $A_{n-1}^{(1)}$-modules.
This provided an explanation for an intriguing correspondence
previously observed in \cite{FOW} between a class of RSOS models
and modular representations of symmetric groups.
Another description of the $U_q(A^{(1)}_{n-1})$-Fock space,
as a deformation of the infinite wedge realization of
the fermionic Fock space, was obtained by Stern \cite{St}.
In \cite{KMS}, the $q$-bosons needed for the decomposition
of the Fock space into irreducible $U_q(A^{(1)}_{n-1})$-modules
were introduced. This construction was used in \cite{LLTrib}
to give a combinatorial formula for the highest weight
vectors, and in \cite{LT} to define a canonical basis
of the whole Fock space which was conjectured to
yield the decomposition matrices
of $q$-Schur algebras at roots of unity.
Moreover, strong support in favor of this conjecture was
obtained by establishing its compatibility with a version
of the Steinberg tensor product theorem proved by James
in this context \cite{J,LT}.
Recently, the theory of perfect crystals \cite{KMN1,KMN2} allowed
Kashiwara {\it et al.} \cite{KMPY} to define a general
notion of $q$-Fock space, extending the results of \cite{KMS}
to several series of affine algebras.
Their results apply in particular to the twisted affine algebra
of type $A^{(2)}_{2n}$, which is the case considered in this note.
It has been noticed by Nakajima and Yamada \cite{NY} that the combinatorics
of the basic representation
$V(\Lambda_n)$ of $A^{(2)}_{2n}$ was similar to the
one encountered in the $(2n+1)$-modular representation theory of the spin
symmetric groups ${\rm \widehat{S}}_m$ by Morris \cite{Mo1} as early as 1965.
This can be explained to a certain extent by observing that
the $(r,\bar{r})$-inducing operators of Morris and Yaseen \cite{MY}
coincide with the Chevalley lowering operators of the
Fock space representation of $A^{(2)}_{2n}$. This provides
a further example of the phenomenon observed in \cite{LLT}
in the case of symmetric groups and $A_{n-1}^{(1)}$-algebras.
In this note, we give the analogues for $U_q(A^{(2)}_{2n})$ of the
results of \cite{LLT}.
Using the level~1 $q$-Fock spaces of \cite{KMPY},
we describe an algorithm for computing
the canonical basis of the basic representation $V(\Lambda_n)$, which
allows us to prove that this basis is in the ${\bf Z}[q]$-lattice
spanned by the natural basis of the $q$-Fock space, and that
the transition matrices have an upper triangle of zeros
(Theorem 4.1).
We conjecture that the specialization $q=1$ gives, up to splitting of rows and
columns for pairs of associate characters, and for sufficiently
large primes $p=2n+1$, the decomposition matrices of spin symmetric groups.
However, the reduction $q=1$ is more tricky than in the $A_{n-1}^{(1)}$ case.
Indeed, the $q$-Fock space of $A^{(2)}_{2n}$ is strictly larger than
the classical one, and one has to factor out the null space
of a certain quadratic form \cite{KMPY} to recover the usual
description.
The missing ingredient in the spin case when we compare it to
\cite{LLT} is that, since the spin symmetric groups are
not Coxeter groups, there is no standard way of associating
to them a Hecke algebra, and this is an important obstruction
for proving our conjecture.
What we can actually prove is that all self-associate
projective characters of ${\rm \widehat{S}}_m$ are linear combinations
of characters obtained from smaller groups by a sequence
of $(r,\overline r)$-inductions (Theorem 6.1).
This proof is constructive in the sense that the intermediate
basis $\{A(\mu)\}$ of our algorithm for the canonical basis,
suitably specialized at $q=1$, is a basis for the space spanned
by such characters.
This should have implications on the way of labelling the
irreducible modular spin representations of ${\rm \widehat{S}}_m$.
Up to now, a coherent labelling scheme has been found
only for $p=3$ \cite{BMO} and $p=5$ \cite{ABO}.
The case $p\ge 7$ led to formidable difficulties.
To overcome this problem, we propose to use the labels
of the crystal graph of $V(\Lambda_n)$, which may contain
partitions with repeated parts not arising in the
representation theory of ${\rm \widehat{S}}_m$, and corresponding to ghost vectors
of the $q$-Fock space at $q=1$.
\section{The Fock space representation of $U_q(A^{(2)}_{2n})$}
The Fock space representation of the affine Lie algebra
$A^{(2)}_{2n}$ can be constructed by means of its
embedding in $b_\infty=\widehat{go}_\infty$, the completed infinite
rank affine Lie algebra of type $B$ \cite{DJKM1,DJKM2}.
The (bosonic) Fock space of type $B$ is
the polynomial algebra ${\cal F} = {\bf C}[p_{2j+1}, j\ge 0 ]$ in an infinite
number of generators $p_{2j+1}$ of odd degree $2j+1$. If one identifies
$p_k$ with the power sum symmetric function $p_k=\sum_i x_i^k$
in some infinite set of variables, the natural basis of weight
vectors for $b_\infty$ is given by Schur's $P$-functions $P_\lambda$
(where $\lambda$ runs over the set ${\rm DP}$ of partitions
into distinct parts) \cite{DJKM1,You,JY}.
The Chevalley generators $e^\infty_i$, $f^\infty_i$ ($i\ge 0$)
of $b_\infty$ act on $P_\lambda$ by
\begin{equation}\label{FP}
e^\infty_i P_\lambda = P_\mu \ , \qquad f^\infty_i P_\lambda = P_\nu
\end{equation}
where $\mu$ (resp. $\nu$) is obtained from $\lambda$ by replacing its part $i+1$
by $i$ (resp. its part $i$ by $i+1$), the result being $0$
if $i+1$ (resp. $i$) is not a part of $\lambda$.
Also, it is
understood that $P_\mu=0$ as soon as $\mu$ has a multiple part.
For example, $f^\infty_0 P_{32}=P_{321}$,
$f^\infty_3 P_{32}=P_{42}$,
$e^\infty_1 P_{32}= P_{31}$ and $e^\infty_2 P_{32}=P_{22}=0$.
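The rule (\ref{FP}) is easy to implement on the labels; the following Python sketch (our own illustration, with the convention that the returned value $0$ stands for the zero vector) reproduces the examples above.
\begin{verbatim}
def f_inf(i, la):
    """f^infty_i: replace a part i of la by i+1 (for i=0, append a part 1);
    returns 0 if i is not a part or if the new label has a repeated part."""
    la = list(la)
    if i == 0:
        mu = la + [1]
    elif i in la:
        mu = la[:]
        mu[mu.index(i)] = i + 1
    else:
        return 0
    mu = sorted(mu, reverse=True)
    return 0 if len(set(mu)) < len(mu) else tuple(mu)

def e_inf(i, la):
    """e^infty_i: replace a part i+1 of la by i (removing it when i=0)."""
    la = list(la)
    if i + 1 not in la:
        return 0
    mu = la[:]
    mu.remove(i + 1)
    if i > 0:
        mu.append(i)
    mu = sorted(mu, reverse=True)
    return 0 if len(set(mu)) < len(mu) else tuple(mu)

assert f_inf(0, (3, 2)) == (3, 2, 1)
assert f_inf(3, (3, 2)) == (4, 2)
assert e_inf(1, (3, 2)) == (3, 1)
assert e_inf(2, (3, 2)) == 0            # P_{22} = 0
\end{verbatim}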
Let $h=2n+1$.
The Chevalley generators $e_i$, $f_i$ of $A^{(2)}_{2n}$
will be realized as
\begin{equation}\label{FNP}
f_i=\sum_{j\equiv n\pm i} f^\infty_j \qquad
(i=0,\ldots ,n) \,,
\end{equation}
\begin{equation}
e_i=\sum_{j\equiv n\pm i } e^\infty_j \qquad
(i=0,\ldots, n-1)\,,\qquad
e_n=e^\infty_0 +
2 \sum_{\scriptstyle j>0 \atop \scriptstyle j \equiv 0,-1} e^\infty_j \,,
\end{equation}
where all congruences are taken modulo $h$.
Let $A_{2n}^{(2)}{}'$ be the derived algebra of $A_{2n}^{(2)}$
(obtained by omitting the degree operator $d$).
The action of $A_{2n}^{(2)}{}'$ on ${\cal F}$ is centralized
by the Heisenberg algebra generated by the operators
$\displaystyle{\partial\over\partial p_{hs}}$ and $p_{hs}$ for odd $s\ge 1$.
This implies that the Fock space decomposes under $A_{2n}^{(2)}$ as
\begin{equation}\label{DEC1}
{\cal F} = \bigoplus_{k\ge 0} V(\Lambda_n-k\delta)^{\oplus p^* (k)}
\end{equation}
where $p^*(k)$ is the number of partitions of $k$ into odd parts.
In particular, the subrepresentation generated by the vacuum vector
$|0\rangle=P_0 = 1$ is the basic
representation $V(\Lambda_n)$ of $A_{2n}^{(2)}$, and its principally
specialized character is \cite{KKLW}
\begin{equation}\label{CHAR}
{\rm ch}_t\,V(\Lambda_n) =
\sum_{m\ge 0}\dim V(\Lambda_n)_m\,t^m =
\prod_{\scriptstyle i \ {\rm odd} \atop \scriptstyle i\not\equiv 0{\ \rm mod\ } h}
{1\over 1-t^i}\,.
\end{equation}
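The graded dimensions can be read off by expanding this product; a small Python sketch (the function name is ours) is given below. For $A^{(2)}_2$ ($n=1$, $h=3$) the first coefficients are $1,1,1,1,1,2,2,3,3,3,4$.
\begin{verbatim}
def graded_dims(n, m_max):
    """Coefficients up to t^m_max of prod 1/(1 - t^i) over odd i
    not divisible by h = 2n + 1."""
    h = 2 * n + 1
    dims = [1] + [0] * m_max
    for i in range(1, m_max + 1, 2):        # odd parts i
        if i % h == 0:
            continue
        for m in range(i, m_max + 1):       # multiply by 1/(1 - t^i)
            dims[m] += dims[m - i]
    return dims

assert graded_dims(1, 10) == [1, 1, 1, 1, 1, 2, 2, 3, 3, 3, 4]
\end{verbatim}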
The $q$-deformation of this situation has been discovered
by Kashiwara {\it et al.} \cite{KMPY}.
Contrary to the case of
$A^{(1)}_{n-1}$, the $q$-Fock space is strictly larger than
the classical one.
We recall here briefly their construction, referring to
\cite{KMPY} for details and notation.
Let ${\rm DP}_h(m)$ be the set
of partitions $\lambda=(1^{m_1}2^{m_2}\ldots r^{m_r})$
of $m$ for which $m_i\le 1$ when $i\not\equiv 0 {\ \rm mod\ } h$.
For example, ${\rm DP}_3(7)=\{(7),(61),(52),(43),(421),(331)\}$.
Set ${\rm DP}_h=\bigcup_m {\rm DP}_h(m)$.
Then, the $q$-Fock space of type $A_{2n}^{(2)}$ is
\begin{equation}
{\cal F}_q = \bigoplus_{\lambda\in {\rm DP}_h} {\bf Q}(q)\, |\lambda\>
\end{equation}
where for $\lambda=(\lambda_1,\ldots,\lambda_r)$,
$|\lambda\>$ denotes the infinite $q$-wedge product
\[
|\lambda\> = u_\lambda = u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots\wedge_q
u_{\lambda_r}\wedge_q u_0 \wedge_q u_0 \wedge_q \cdots
\]
of basis vectors $u_i$ of the representation $V_{\rm aff}$.
The quantum affine algebra $U_q(A_{2n}^{(2)})$ acts on
$V_{\rm aff}=\bigoplus_{i\in{\bf Z}}{\bf Q}(q) u_i$
by
\begin{eqnarray}
f_i u_j = \cases{ u_{j+1} & if $j\equiv n\pm i{\ \rm mod\ } h$ \\
0 & otherwise \\}
\qquad (i=0,\ldots,n-1)
\\
\label{ACTF}
f_n u_j = \cases{ u_{j+1} & if $j\equiv -1 {\ \rm mod\ } h$ \\
(q+q^{-1}) u_{j+1} & if $j\equiv 0 {\ \rm mod\ } h$ \\
0 & otherwise \\}
\\
e_i u_j = \cases{ u_{j-1} & if $j\equiv n+1\pm i{\ \rm mod\ } h$ \\
0 & otherwise \\}
\qquad (i=0,\ldots,n-1)
\\
e_n u_j = \cases{ u_{j-1} & if $j\equiv 1 {\ \rm mod\ } h$ \\
(q+q^{-1}) u_{j-1} & if $j\equiv 0 {\ \rm mod\ } h$ \\
0 & otherwise \\}
\\
t_0 u_j = \cases{ q^4 u_j & if $j\equiv n{\ \rm mod\ } h$ \\
q^{-4} u_j & if $j\equiv n+1 {\ \rm mod\ } h$ \\
u_j & otherwise \\}
\\
t_i u_j = \cases{ q^2 u_j & if $j\equiv n\pm i{\ \rm mod\ } h$ \\
q^{-2} u_j & if $j\equiv n+1\pm i {\ \rm mod\ } h$ \\
u_j & otherwise \\ }
\qquad (i=1,\ldots,n-1)
\\
t_n u_j = \cases{ q^2 u_j & if $j\equiv -1{\ \rm mod\ } h$ \\
q^{-2} u_j & if $j\equiv 1 {\ \rm mod\ } h$ \\
u_j & otherwise \\}
\end{eqnarray}
The only commutation rules we will need to describe the
action of $e_i$ and $f_i$ on ${\cal F}_q$ are:
\begin{eqnarray}
u_j \wedge_q u_j &=& 0 \ {\rm if}\ j\not\equiv 0 {\ \rm mod\ } h \\
u_j \wedge_q u_{j+1} &=& -q^2 u_{j+1}\wedge_q u_j \ {\rm if} \label{STR2}
j\equiv 0,-1 {\ \rm mod\ } h \ .
\end{eqnarray}
The action on the vacuum vector
$|0\> = u_0\wedge_qu_0\wedge_q\cdots $
is given by
\begin{equation}
e_i|0\> = 0, \qquad
f_i|0\> = \delta_{i n}|1\>, \qquad
t_i|0\> = q^{\delta_{i n}}|0\>,
\end{equation}
and on a $q$-wedge
$|\lambda\>=u_{\lambda_1}\wedge_q\cdots\wedge_q u_{\lambda_r}
\wedge_q |0\>$,
\begin{eqnarray}
f_i |\lambda\>
=&
f_iu_{\lambda_1}\wedge_q t_iu_{\lambda_2}\wedge_q\cdots t_iu_{\lambda_r}
\wedge_q t_i|0\> \nonumber \\
& +
u_{\lambda_1}\wedge_q f_iu_{\lambda_2}\wedge_q\cdots t_iu_{\lambda_r}
\wedge_q t_i|0\> \nonumber \\
& + \cdots +
u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots u_{\lambda_r}
\wedge_q f_i|0\>
\end{eqnarray}
\begin{eqnarray}
e_i |\lambda\>
=&
t_i^{-1} u_{\lambda_1}\wedge_q t_i^{-1}u_{\lambda_2}\wedge_q\cdots t_i^{-1}u_{\lambda_r}
\wedge_q e_i|0\> \nonumber \\
& +
t_i^{-1}u_{\lambda_1}\wedge_q t_i^{-1}u_{\lambda_2}\wedge_q\cdots e_iu_{\lambda_r}
\wedge_q |0\> \nonumber \\
& + \cdots +
e_i u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots u_{\lambda_r}
\wedge_q |0\>
\end{eqnarray}
\begin{equation}
t_i |\lambda\> =
t_i u_{\lambda_1}\wedge_q t_i u_{\lambda_2}\wedge_q\cdots\wedge_q
t_i u_{\lambda_r}\wedge_q t_i|0\> \ .
\end{equation}
For example, with $n=2$, one has
\[
f_2 |542\>= (q^4+q^2)|642\>+q|552\>+|5421\>,
\]
and
\[
f_2 |552\> = (q^2+1)(|652\>+|562\>)+|5521\>
= (1-q^4)|652\>+|5521\>,
\]
the last equality resulting from (\ref{STR2}).
It is proved in \cite{KMPY} that ${\cal F}_q$ is an integrable
highest weight $U_q(A_{2n}^{(2)})$-module whose decomposition
into irreducible components, obtained by means of $q$-bosons, is
\begin{equation}\label{DEC2}
{\cal F}_q = \bigoplus_{k\ge 0} V(\Lambda_n-k\delta)^{\oplus p(k)}
\end{equation}
where $p(k)$ is now the number of all partitions of $k$
(compare (\ref{DEC1})).
Thus, the submodule $U_q(A_{2n}^{(2)}) \,|0\>$ is a realization
of the basic representation $V(\Lambda_n)$.
\section{The crystal graph of the $q$-Fock space}
The first step in computing the global basis
of $V(\Lambda_n) \subset {\cal F}_q$ is to determine
the crystal basis of ${\cal F}_q$ whose description
follows from \cite{KMPY,KMN1,KMN2}.
Let $A$ denote the subring of ${\bf Q}(q)$ consisting of
rational functions without pole at $q=0$.
The crystal lattice of ${\cal F}_q$ is
$L = \bigoplus_{\lambda\in {\rm DP}_h} A\,|\lambda\>$,
and the crystal basis of the ${\bf Q}$-vector space $L/qL$ is
$B=\{|\lambda\> {\ \rm mod\ } qL, \lambda \in {\rm DP}_h\}$.
We shall write $\lambda$ instead of $|\lambda\> {\ \rm mod\ } qL$.
The Kashiwara operators $\tilde{f}_i$ act on $B$ in
a simple way recorded on the crystal graph $\Gamma({\cal F}_q)$.
To describe this graph, one starts with the crystal graph
$\Gamma(V_{\rm aff})$ of $V_{\rm aff}$. This is the graph with vertices
$j\in {\bf Z}$, whose arrows labelled by $i\in \{0,1,\ldots ,n\}$
are given, for $i \not = n$, by
\[
j \stackrel{i}{\longrightarrow} j+1 \quad \Longleftrightarrow \quad
j \equiv n \pm i {\ \rm mod\ } h \,,
\]
and for $i=n$ by
\[
j \stackrel{n}{\longrightarrow} j+1 \quad \Longleftrightarrow \quad
j \equiv -1,0 {\ \rm mod\ } h \,.
\]
Thus for $n=2$ this graph is
\[
\cdots \stackrel{1}{\longrightarrow} -1
\stackrel{2}{\longrightarrow} 0
\stackrel{2}{\longrightarrow} 1
\stackrel{1}{\longrightarrow} 2
\stackrel{0}{\longrightarrow} 3
\stackrel{1}{\longrightarrow} 4
\stackrel{2}{\longrightarrow} 5
\stackrel{2}{\longrightarrow} 6
\stackrel{1}{\longrightarrow} 7
\stackrel{0}{\longrightarrow}
\cdots
\]
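Since each $j$ has a single outgoing arrow, the graph is determined by the map $j \mapsto i$; a small Python sketch (our own, checked against the $n=2$ picture above) is:
\begin{verbatim}
def arrow_label(j, n):
    """Label i of the arrow j -> j+1 in the crystal graph of V_aff."""
    h = 2 * n + 1
    r = j % h
    if r in (0, h - 1):                      # j = 0 or -1 mod h
        return n
    return min((r - n) % h, (n - r) % h)     # j = n + i or n - i mod h

# labels of the arrows starting at -1, 0, 1, ..., 7 for n = 2
assert [arrow_label(j, 2) for j in range(-1, 8)] == [2, 2, 1, 0, 1, 2, 2, 1, 0]
\end{verbatim}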
The graph $\Gamma({\cal F}_q)$ is obtained inductively
from $\Gamma(V_{\rm aff})$ using the following rules.
Let $\lambda = (\lambda_1,\ldots ,\lambda_r)\in B$,
and write $\lambda = (\lambda_1,\lambda^*)$
where $\lambda^* = (\lambda_2,\ldots ,\lambda_r)$.
Then one has $\tilde{f}_i (0) = \delta_{in} (1)$,
$\varphi_i(0)= \delta_{in}$, and
\[
\tilde{f}_i\lambda = \left\{
\matrix{(\tilde{f}_i \lambda_1, \lambda^*)
\ {\rm if} \ \varepsilon_i(\lambda_1) \ge \varphi_i(\lambda^*), \cr
(\lambda_1,\tilde{f}_i\lambda^*)
\ {\rm if} \ \varepsilon_i(\lambda_1) < \varphi_i(\lambda^*). } \right.
\]
Here, $\varepsilon_i(\lambda_1)$ means the distance in $\Gamma(V_{\rm aff})$
from $\lambda_1$ to the origin of its
$i$-string, and $\varphi_i(\lambda^*)$ means the distance in
$\Gamma({\cal F}_q)$ from $\lambda^*$ to the end of its $i$-string.
Thus for $n=1$ one computes successively the following $1$-strings
of $\Gamma({\cal F}_q)$
\[
(0) \stackrel{1}{\longrightarrow} (1)
\]
\[
(2)=(2,0)\stackrel{1}{\longrightarrow}(2,1)
\stackrel{1}{\longrightarrow} (3,1)
\stackrel{1}{\longrightarrow} (4,1)
\]
\[
(3,2)=(3,2,0)\stackrel{1}{\longrightarrow}(3,2,1)
\stackrel{1}{\longrightarrow} (3,3,1)
\stackrel{1}{\longrightarrow} (4,3,1)
\]
from which one deduces that $\tilde{f_1}(3,3,1) = (4,3,1)$
and $\varphi_1(3,3,1)=1$.
The first layers of the crystal $\Gamma({\cal F}_q)$ for $n=1$
are shown in Fig.~\ref{FIG1}.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize = 15cm
\epsffile{crystalFock.eps}
\end{center}
\caption{\label{FIG1} The graph $\Gamma({\cal F}_q)$ for $A_2^{(2)}$ up to degree $7$}
\end{figure}
One can observe that the decomposition of $\Gamma({\cal F}_q)$ into connected
components reflects the decomposition (\ref{DEC2})
of ${\cal F}_q$ into simple modules.
More precisely, the connected components of
$\Gamma({\cal F}_q)$ are all isomorphic as colored graphs
to the component $\Gamma(\Lambda_n)$ containing the
empty partition.
Their highest vertices are the partitions $\nu$ whose
parts are all divisible by $h$.
This follows from the fact, easily deduced from the
rules we have just explained, that if
$\nu = h\mu = (h\mu_1,\ldots ,h\mu_r)$ is such a partition,
then the map
\begin{equation}\label{MAP}
\lambda \mapsto \lambda + \nu = (\lambda_1+h\mu_1,\lambda_2+h\mu_2,\ldots \ )
\end{equation}
is a bijection from $\Gamma(\Lambda_n)$ onto the connected component
of $\Gamma({\cal F}_q)$ containing $\nu$, and this bijection commutes with
the operators $\tilde{e}_i$ and $\tilde{f}_i$.
This implies that the vertices of $\Gamma(\Lambda_n)$
are the partitions $\lambda=(\lambda_1,\ldots ,\lambda_r,0)\in {\rm DP}_h$ such that
for $i=1,2,\ldots ,r$, one has $\lambda_i- \lambda_{i+1} \le h$ and
$\lambda_i- \lambda_{i+1} < h$ if $\lambda_i \equiv 0 {\ \rm mod\ } h$.
We shall call a partition that satisfies these conditions
$h$-regular.
The set of $h$-regular partitions of $m$ will be denoted by ${\rm DPR}_h(m)$,
and we shall write ${\rm DPR}_h=\bigcup_m {\rm DPR}_h(m)$.
For example,
\[
{\rm DPR}_3(10) = \{ (3331), (4321), (532), (541) \} \,.
\]
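The condition is straightforward to test; the following Python sketch (the function name is ours, and parts are assumed to be listed in decreasing order) reproduces the example above and rejects, e.g., $(631)$ and $(721)$.
\begin{verbatim}
def is_regular(parts, n):
    """h-regularity for h = 2n + 1: parts repeat only when divisible by h,
    consecutive differences are at most h, and strictly less than h
    whenever the larger part is divisible by h (a trailing 0 is appended)."""
    h = 2 * n + 1
    la = list(parts) + [0]
    for a, b in zip(la, la[1:]):
        if a == b and a % h != 0:                      # not in DP_h
            return False
        if a - b > h or (a - b == h and a % h == 0):
            return False
    return True

candidates = [(3,3,3,1), (4,3,2,1), (5,3,2), (5,4,1), (6,3,1), (7,2,1)]
assert [p for p in candidates if is_regular(p, 1)] == \
       [(3,3,3,1), (4,3,2,1), (5,3,2), (5,4,1)]
\end{verbatim}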
\section{The canonical basis of $V(\Lambda_n)$}\label{SECT4}
In this section, we describe an algorithm for computing
the canonical basis (global lower crystal basis) of the
basic representation $V(\Lambda_n)=U_q(A_{2n}^{(2)}) |0\>$
in terms of the natural basis $|\lambda\>$ of the
$q$-Fock space. To characterize the canonical basis, we
need the following notations
\begin{equation}
q_i =
\cases{q & if $i=n$ \\
q^2& if $1\le i<n$ \\
q^4& if $i=0$ \\ }
\qquad
t_i =
\cases{q^{h_n}& if $i=n$\\
q^{2h_i}& if $1\le i<n$ \\
q^{4h_0}& if $i=0$ \\}
\end{equation}
and
\begin{equation}
[k]_i = {q_i^k-q_i^{-k}\over q_i-q_i^{-1}}\ ,
\qquad
[k]_i! = [k]_i [k-1]_i \cdots [1]_i \ .
\end{equation}
The $q$-divided powers of the Chevalley generators are defined by
\begin{equation}
e_i^{(k)} = {e_i^k\over [k]_i!}\ ,\qquad
f_i^{(k)} = {f_i^k\over [k]_i!}\ .
\end{equation}
The canonical basis is defined in terms of an involution
$v\mapsto\overline{v}$ of $V(\Lambda_n)$.
Let $x\mapsto \overline{x}$ be the ring automorphism of
$U_q(A_{2n}^{(2)})$ such that $\overline{q}=q^{-1}$,
$\overline{q^h}=q^{-h}$ for $h$ in the Cartan subalgebra
of $A_{2n}^{(2)}$, and $\overline{e_i}=e_i$,
$\overline{f_i}=f_i$. Then, for $v=x|0\>\in V(\Lambda_n)$,
define $\overline{v}=\overline{x}|0\>$.
We denote by $U_{\bf Q}^-$ the sub-${\bf Q}[q,q^{-1}]$-algebra
of $U_q(A_{2n}^{(2)})$ generated by the $f_i^{(k)}$
and set $V_{\bf Q}(\Lambda_n)=U_{\bf Q}^-|0\>$.
Then, as shown by Kashiwara \cite{K}, there exists a unique
${\bf Q}[q,q^{-1}]$-basis $\{G(\mu), \mu\in {\rm DPR}_h\}$
of $V_{\bf Q}(\Lambda_n)$, such that
\begin{quote}
(G1) $G(\mu) \equiv |\mu\> {\ \rm mod\ } qL$,

(G2) $\overline{G(\mu)}= G(\mu)$.
\end{quote}
To compute $G(\mu)$, we follow the same strategy as in
\cite{LLT}. We first introduce an auxiliary basis
$A(\mu)$ satisfying (G2), from which we manage to construct
combinations satisfying also (G1). More precisely,
let ${\cal F}_q^m$ be the subspace of ${\cal F}_q$ spanned
by $|\lambda\>$ for $\lambda \in {\rm DP}_h(m)$ and set
$V(\Lambda_n)_m={\cal F}_q^m\cap V(\Lambda_n)$. Denote
by $\unlhd$ the natural order on partitions.
Then, the auxiliary
basis will satisfy
\begin{quote}
(A0) $\{A(\mu),\mu\in {\rm DPR}_h(m)\}$ is a ${\bf Q}[q,q^{-1}]$-basis
of $V_{\bf Q}(\Lambda_n)_m$,

(A1) $A(\mu)=\sum_\lambda a_{\lambda\mu}(q)|\lambda\>$,
where $a_{\lambda\mu}(q)=0$ unless $\lambda\unrhd\mu$,
$a_{\mu\mu}(q)=1$ and $a_{\lambda\mu}(q)\in{\bf Z}[q,q^{-1}]$,

(A2) $\overline{A(\mu)}=A(\mu)$.
\end{quote}
The basis $A(\mu)$ is obtained
by applying monomials in the $f_i^{(k)}$ to the highest weight vector,
that is, $A(\mu)$ is of the form
\begin{equation}\label{defA}
A(\mu) = f_{r_s}^{(k_s)}f_{r_{s-1}}^{(k_{s-1})}\cdots f_{r_1}^{(k_1)}|0\>
\end{equation}
so that (A2) is satisfied.
The two sequences $(r_1,\ldots,r_s)$ and $(k_1,\ldots,k_s)$ are, as
in \cite{LLT}, obtained by peeling off the $A_{2n}^{(2)}$-ladders
of the partition $\mu$, which are defined as follows. We first fill
the cells of the Young diagram $Y$ of $\mu$ with integers
(called residues), constant in
each column of $Y$. If $j\equiv n\pm i {\ \rm mod\ } h$
($0\le i\le n$), the numbers filling
the $j$-th column of $Y$ will be equal to $i$. A ladder of $\mu$
is then a sequence of cells with the same residue, located
in consecutive rows at horizontal distance $h$, except when the residue
is $n$, in which case two consecutive $n$-cells in a row belong also
to the same ladder. For example, with $n=3$ and $\mu=(11,7,7,4)$,
one finds $22$ ladders (indicated by subscripts), the longest
one being the 7th, containing three 3-cells:
\[
\young{
3_{19} & 2_{20} & 1_{21} & 0_{22} \cr
3_{13} & 2_{14} & 1_{15} & 0_{16} & 1_{17} & 2_{18} & 3_{19} \cr
3_{7}&2_8&1_9&0_{10}&1_{11}&2_{12}&3_{13}\cr
3_1&2_2&1_3&0_4&1_5&2_6&3_7&3_7&2_8&1_9&0_{10}\cr}
\]
Note that this definition of ladders agrees with that of \cite{BMO}
for $n=1$, but differs from that of \cite{ABO} for $n=2$.
Then, in (\ref{defA}), $s$ is the number of ladders,
$r_i$ the residue of the $i$th ladder, and $k_i$ the number of its
cells. Thus, proceeding with our example,
\[
\fl
A(11,7,7,4)=
f_0f_1f_2f_3^{(2)}f_2f_1f_0f_1f_2f_3^{(2)}f_2f_1f_0^{(2)}
f_1^{(2)}f_2^{(2)}f_3^{(3)} f_2f_1f_0f_1f_2f_3 |0\> \ .
\]
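The residue filling rule can be encoded in a one-line helper; the sketch below
(Python, with columns numbered from $0$, an indexing convention chosen here
purely for illustration) reproduces the residues in the bottom row of the
example above.
\begin{verbatim}
def residue(j, n):
    """Residue written in (0-indexed) column j of the Young diagram,
    i.e. the i with 0 <= i <= n and j = n +/- i (mod h), h = 2n+1."""
    h = 2 * n + 1
    return min((j - n) % h, (n - j) % h)

# bottom row of the example mu = (11,7,7,4) with n = 3:
print([residue(j, 3) for j in range(11)])
# [3, 2, 1, 0, 1, 2, 3, 3, 2, 1, 0]
\end{verbatim}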
The proof of (A0) and (A1) can be readily adapted from
\cite{LLT}. In particular, (A1) follows from the fact that
a partition $\lambda$ belongs to ${\rm DPR}_h$ if and only if all cells of a given
ladder intersecting $\lambda$ occupy the highest
possible positions on this ladder.
Another choice of an intermediate basis,
more efficient for practical computations,
would be to use inductively the vectors $G(\nu)$ already computed
and to set $A(\mu)=f_{r_s}^{(k_s)} G(\nu)$, where
$\nu$ is the partition obtained from $\mu$ by removing
its outer ladder.
Define now the coefficients $b_{\nu\mu}(q)$ by
\begin{equation}
G(\mu) =\sum_\nu b_{\nu\mu}(q) A(\nu) \ .
\end{equation}
Still following \cite{LLT}, one can check that $b_{\nu\mu}(q)=0$
unless $\nu\ge \mu$, where $\ge$ denotes the lexicographic
ordering on partitions, and that $b_{\mu\mu}(q)=1$. Therefore,
one can apply the triangular process of \cite{LLT} as follows.
Let $\mu^{(1)} < \mu^{(2)} <\ldots < \mu^{(t)}$ be the set
${\rm DPR}_h(m)$ sorted in lexicographic order, so that
$A(\mu^{(t)})=G(\mu^{(t)})$. Suppose that the expansion
on the basis $|\lambda\>$ of $G(\mu^{(i+1)}),\ldots, G(\mu^{(t)})$
has already been calculated. Then,
\begin{equation}
G(\mu^{(i)})=
A(\mu^{(i)})-\gamma_{i+1} (q) G(\mu^{(i+1)}) - \cdots - \gamma_t(q)G(\mu^{(t)}) \ ,
\end{equation}
where the coefficients are determined by the conditions
\[
\gamma_s(q^{-1})=\gamma_s(q), \qquad G(\mu^{(i)}) \equiv |\mu^{(i)}\> {\ \rm mod\ } qL.
\]
Thus, for $n=1$, the first partition for which $A(\mu)\not = G(\mu)$
is $\mu=(3321)$ and
\begin{eqnarray}
\fl A(3321) =
|3321\> + q|333\> + (q^2-q^6)|432\> + (1+2q^2)|531\> + (q^2+q^4)|54\>
\nonumber \\
\lo+ (2q^2+q^4)|621\> +2q^3 |63\> + (q^4+q^6)|72\> +q^4|81\> +q^5|9\> \ .
\end{eqnarray}
Indeed, $A(3321)\equiv |3321\>+|531\> {\ \rm mod\ } qL$.
On the other hand, $ A(531)=|531\>+q^2|54\>+q^2|621\>+q^3|63\>+q^6|72\> $
is equal to $G(531)$,
and one finds by subtracting this from $A(3321)$ that
\begin{eqnarray}
\fl G(3321) = |3321\> + q|333\> + (q^2-q^6)|432\> + 2q^2|531\> + q^4|54\> \\
\lo+ (q^2+q^4)|621\> +q^3 |63\> + q^4|72\> +q^4|81\> +q^5|9\> \ .
\end{eqnarray}
Since $A(432)=|432\>+q^4|531\>+q^2|72\>+q^6|81\>$ satisfies
(G1) and (G2), it has to be equal to
$G(432)$, which completes the determination of the canonical
basis for $m=9$. For $m=10$, the results are displayed as the
columns of Table \ref{TAB1}.
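As an illustration of the triangular process, the following Python sketch
(our own illustrative code; the helper names and the dictionary encoding of
Laurent polynomials are not from the paper) reproduces the computation of
$G(3321)$ from the expansions of $A(3321)$ and $G(531)$ quoted above.
\begin{verbatim}
# Laurent polynomials in q are dicts {power: coefficient}; Fock-space
# vectors are dicts mapping a partition (tuple) to such a polynomial.

def padd(p, r, scale=1):
    """p + scale*r for Laurent polynomials stored as dicts."""
    out = dict(p)
    for k, c in r.items():
        out[k] = out.get(k, 0) + scale * c
        if out[k] == 0:
            del out[k]
    return out

def pmul(p, r):
    """Product of two Laurent polynomials stored as dicts."""
    out = {}
    for k1, c1 in p.items():
        for k2, c2 in r.items():
            out[k1 + k2] = out.get(k1 + k2, 0) + c1 * c2
    return {k: c for k, c in out.items() if c}

def bar_invariant_part(c):
    """The bar-invariant gamma (gamma(1/q) = gamma(q)) such that
    c - gamma lies in q.Z[q]: gamma = c_0 + sum_{j>0} c_{-j}(q^j + q^-j)."""
    g = {}
    if c.get(0, 0):
        g[0] = c[0]
    for k, v in c.items():
        if k < 0 and v:
            g[k] = g.get(k, 0) + v
            g[-k] = g.get(-k, 0) + v
    return g

# Expansions quoted in the text: A(3321) and G(531) = A(531).
A_3321 = {(3, 3, 2, 1): {0: 1}, (3, 3, 3): {1: 1}, (4, 3, 2): {2: 1, 6: -1},
          (5, 3, 1): {0: 1, 2: 2}, (5, 4): {2: 1, 4: 1},
          (6, 2, 1): {2: 2, 4: 1}, (6, 3): {3: 2}, (7, 2): {4: 1, 6: 1},
          (8, 1): {4: 1}, (9,): {5: 1}}
G_531 = {(5, 3, 1): {0: 1}, (5, 4): {2: 1}, (6, 2, 1): {2: 1},
         (6, 3): {3: 1}, (7, 2): {6: 1}}

# Subtract gamma*G(531) so that the coefficient of |531> vanishes mod qL;
# here gamma = 1, and the result reproduces G(3321) as displayed above.
gamma = bar_invariant_part(A_3321[(5, 3, 1)])
G_3321 = {}
for lam in set(A_3321) | set(G_531):
    c = padd(A_3321.get(lam, {}), pmul(gamma, G_531.get(lam, {})), scale=-1)
    if c:
        G_3321[lam] = c
print(G_3321[(5, 3, 1)])   # {2: 2}, i.e. 2q^2, as in the text
\end{verbatim}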
\begin{table}
\caption{\label{TAB1}
The canonical basis for $n=1$ and $m=10$.
}
\begin{indented}
\item[]\begin{tabular}{@{}llllllll}
\br
&$(3 3 3 1)$&$(4 3 2 1)$&$(5 3 2)$&$(5 4 1)$\\
\mr
$(3 3 3 1)$&1&0&0&0\\
$(4 3 2 1)$&$q-q^{5}$&1&0&0\\
$(4 3 3)$&$q^{2}$&$q$&0&0\\
$(5 3 2)$ &0&0&1&0\\
$(5 4 1)$&$q+q^{3}$&$q^{2}+q^{4}$&0&1\\
$(6 3 1)$&$2\ q^{2}$&$q^{3}$&0&$q$ \\
$(6 4)$&$q^{4}$&0&0&$q^{3}$\\
$(7 2 1)$&$q^{3}+q^{5}$&$q^{2}$&0&$q^{4}$\\
$(7 3)$&$q^{4 }$&$q^{3}$&0&$q^{5}$\\
$(8 2)$ & 0 & 0 & $q^2$ & 0 \\
$(9 1)$&$q^{4}$&$q^{5}$&0&0\\
$(10)$&$q^{6}$&0&0&0\\
\br
\end{tabular}
\end{indented}
\end{table}
In the Fock space representation of $A_{n-1}^{(1)}$,
the weight of a basis vector $|\lambda\>$ is determined by
the $n$-core of the partition $\lambda$ (and its degree) \cite{ANY,LLT}.
There is a similar result of Nakajima and Yamada \cite{NY}
for $A_{2n}^{(2)}$, in terms of the notion of $\overline{h}$-core
of a strict partition
introduced by Morris \cite{Mo1} in the context of the modular
representation theory of spin symmetric groups.
One way to see this is to use a theorem of \cite{MY1}
according to which $\lambda, \mu \in {\rm DP}(m)$
have the same $\overline{h}$-core if and only if
they have, for each $i$, the same number $n_i$ of nodes of residue $i$.
On the other hand, it follows from the
implementation of the Chevalley generators
that $|\lambda\>$ has $A_{2n}^{(2)}$-weight
$\Lambda_n - \sum_{0\le i \le n} n_i \alpha_i$,
and the statement follows.
The definition of $\overline{h}$-cores can be extended to ${\rm DP}_h$
by deciding that if $\lambda$ has repeated parts, its $\overline{h}$-core
is equal to that of the partition obtained by removing those repeated parts.
Then it is clear that if $|\lambda\>$ and $|\mu\>$ have the same
$U_q(A_{2n}^{(2)})$-weight, the two partitions $\lambda$ and $\mu$
have the same $\overline{h}$-core.
It follows, since $G(\mu)$ is obviously a weight vector, that its
expansion on the basis $|\lambda\>$ involves only partitions
$\lambda$ with the same $\overline{h}$-core as $\mu$.
Summarizing the discussion, we have:
\begin{theorem}\label{TH}
For $\mu \in {\rm DPR}_h(m)$, define $d_{\lambda\mu}(q)$ by
$\displaystyle G(\mu)=\sum_{\lambda \in {\rm DP}_h(m)} d_{\lambda\mu}(q)|\lambda\>$.
Then,
{\rm (i)} $d_{\lambda\mu}(q)\in {\bf Z}[q]$,
{\rm (ii)} $d_{\lambda\mu}(q)=0$ unless $\lambda\unrhd\mu$,
and $d_{\mu\mu}(q) = 1$,
{\rm (iii)} $d_{\lambda\mu}(q)=0$ unless $\lambda$ and $\mu$
have the same $\overline{h}$-core.
\end{theorem}
\section{The reduction $q=1$}
As observed by Kashiwara {\it et al.} \cite{KMPY}, to recover the
classical Fock space representation ${\cal F}$ of $A_{2n}^{(2)}$, one has
to introduce the inner product on ${\cal F}_q$ for
which the vectors $|\lambda\>$ are orthogonal and the adjoint
operators of the Chevalley generators are
\begin{equation} \label{ADJOINT}
f_i^{\dag} = q_i e_i t_i, \qquad
e_i^{\dag} = q_i f_i t_i^{-1}, \qquad
t_i^{\dag} = t_i.
\end{equation}
It can be checked that, for $\lambda \in {\rm DP}_h$,
\begin{equation} \label{NORM}
\<\lambda|\lambda\> = \prod_{k>0}\prod_{i=1}^{m_{kh}} (1-(-q^2)^i),
\end{equation}
where $m_{kh}$ is the multiplicity of the part $kh$ in $\lambda$.
Let ${\cal F}_1$ denote the $A_{2n}^{(2)}$-module obtained by specializing
$q$ to 1 as in \cite{KMPY}. This space is strictly larger than the classical Fock space
${\cal F}$, since the dimension of its $m$th homogeneous component
(in the principal gradation) is $|{\rm DP}_h(m)|$ whereas that of ${\cal F}$
is only $|{\rm DP}(m)|$.
Let ${\cal N} = {\cal F}_1^\perp$ denote the nullspace. It follows
from (\ref{ADJOINT}) that ${\cal N}$ is a $A_{2n}^{(2)}$-module, and
from (\ref{NORM}) that ${\cal N}$ is the subspace of ${\cal F}_1$
spanned by the wedge products $|\lambda\>$ labelled by $\lambda \in {\rm DP}_h - {\rm DP}$.
Therefore ${\cal F}_1/{\cal N}$ is a $A_{2n}^{(2)}$-module that can
be identified with ${\cal F}$.
In this identification one has, for
$\lambda=(\lambda_1,\ldots,\lambda_r) \in {\rm DP}$,
\begin{equation}
P_\lambda = 2^{\sum_{i=1}^r\lfloor (\lambda_i-1)/h \rfloor} |\lambda \>.
\end{equation}
The power of $2$ comes from the fact that if $\lambda_i = kh$ for $k>0$,
and $\nu$ denotes the partition obtained from $\lambda$ by replacing $\lambda_i$
by $\nu_i = \lambda_i+1$, then it follows from (\ref{FP}), (\ref{FNP})
that $f_n P_\lambda$ contains $P_\nu$ with
coefficient 1, while
$f_n |\lambda\>$ contains $|\nu\>$ with coefficient 2 by (\ref{ACTF}).
For later use we set
\begin{equation}\label{AHN}
a_h(\lambda) = \sum_{i=1}^r\left\lfloor {\lambda_i-1\over h} \right\rfloor \,.
\end{equation}
\section{Modular representations of ${\rm \widehat{S}}_m$}
We refer the reader to \cite{B} for an up-to-date review of the
representation theory of the spin symmetric groups
and their combinatorics.
Let ${\rm \widehat{S}}_m$ be the spin symmetric group as defined by Schur \cite{S},
that is,
the group of order $2\,m!$ with generators $z,s_1,\ldots,s_{m-1}$
and relations $z^2=1$, $zs_i=s_iz$, $s_i^2 = z$, $(1\le i\le m-1)$,
$s_is_j=zs_js_i$ ($|i-j|\ge 2$) and $(s_is_{i+1})^3=z$
($1\le i\le m-2)$.
On an irreducible representation of ${\rm \widehat{S}}_m$, the central element $z$ has to act
by $+1$ or by $-1$. The representations for which $z=1$ are
actually linear representations of the symmetric group ${\rm S}_m$,
and those with $z=-1$, called spin representations,
correspond to two-valued representations of ${\rm S}_m$. The irreducible spin
representations over a field of characteristic $0$
are labelled, up to association, by strict partitions
$\lambda\in {\rm DP}(m)$. More precisely, let ${\rm DP}_+(m)$ (resp. ${\rm DP}_-(m)$)
be the set of strict partitions of $m$ having an even (resp. odd)
number of even parts. Then, to each $\lambda\in{\rm DP}_+(m)$ corresponds
a self-associate irreducible spin character $\pr{\lambda}$, and to each
$\lambda\in{\rm DP}_-(m)$ a pair of associate irreducible spin characters denoted
by $\pr{\lambda}$ and $\pr{\lambda}'$.
According to Schur \cite{S}, the values $\pr{\lambda}(\rho)$
of the spin character $\pr{\lambda}$ on conjugacy classes of cycle-type
$\rho=(1^{m_1},3^{m_3},\ldots )$ are given by the expansion
of the symmetric function $P_\lambda$ on the basis of power sums,
namely
\begin{equation}
P_\lambda = \sum_\rho 2^{\lceil (\ell(\rho)-\ell(\lambda))/2\rceil}
\pr{\lambda}(\rho) {p_\rho\over z_\rho}
\end{equation}
where $z_\rho=\prod_j j^{m_j} m_j!$ and $\ell(\lambda)$ stands for the length
of $\lambda$, that is the number of parts of $\lambda$.
For $\lambda\in{\rm DP}(m)$, one introduces the self-associate spin character
\begin{equation}
\prh{\lambda} = \cases{\pr{\lambda} & if $\lambda\in{\rm DP}_+(m)$,\\
\pr{\lambda}+\pr{\lambda}'& if $\lambda\in{\rm DP}_-(m)$.\\}
\end{equation}
The branching theorem for spin characters of Morris \cite{Mo1} implies that
if $\prh{\lambda}$ gets identified with a weight vector of ${\cal F}$
by setting
\begin{equation}\label{IDENT}
P_\lambda = 2^{\lfloor (m - \ell(\lambda))/2\rfloor} \, \prh{\lambda} ,
\end{equation}
then the $b_\infty$-operator $f = \sum_{i\ge 0} f^{\infty}_i$ implements
the induction of self-associate spin characters
from ${\rm \widehat{S}}_m$ to ${\rm \widehat{S}}_{m+1}$.
Similarly, $e= e^{\infty}_0 + 2\sum_{i>0} e^{\infty}_i$ implements
the restriction from ${\rm \widehat{S}}_m$ to ${\rm \widehat{S}}_{m-1}$.
Thus, the Fock space representation of $b_\infty$ may be viewed
as the sum
${\cal F} = \bigoplus_m {\cal C}(m)$
of additive groups generated by self-associate spin characters
of ${\rm \widehat{S}}_m$ in characteristic 0.
In this setting, the Chevalley generators of $b_\infty$ act as
refined induction and restriction operators.
Now, similarly to the case $A_{n-1}^{(1)}$, the reduction from
$b_\infty$ to $A_{2n}^{(2)}$ parallels the reduction modulo $p=h=2n+1$
of representations of ${\rm \widehat{S}}_m$ (from now on we assume that $h$
is an odd prime).
More precisely, using (\ref{FP}) (\ref{FNP}) (\ref{IDENT}),
one sees immediately that the Chevalley generators $f_i$ of
$A_{2n}^{(2)}$ act on $\prh{\lambda}$ as
the $(r,\overline{r})$-induction operators of Morris and
Yaseen $(r=n+1-i)$ \cite{MY}.
Hence the vectors of degree $m$ of
$V(\Lambda_n) = U(A_{2n}^{(2)})^-\,|0\>$ can be identified
with linear combinations of self-associate spin characters
obtained by a sequence of $(r,\overline{r})$-inductions.
It is known from modular representation theory that the maximal
number of linearly independent self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$ is equal to the number of partitions
of $m$ into odd summands prime to $p$.
Therefore the following result follows at once from (\ref{CHAR}).
\begin{theorem}
The self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$ are linear combinations
of characters obtained by a sequence of $(r,\overline{r})$-inductions.
\end{theorem}
This was proved by Bessenrodt {\it et al.} for $p=3$ \cite{BMO}
and Andrews {\it et al.} for $p=5$ \cite{ABO}, but the question
remained open for $p\ge 7$ \cite{B}.
Moreover, the construction of
Section~\ref{SECT4} gives an explicit basis for the space spanned
by such characters.
Denote by $\underline{A}(\mu)$ the column vector obtained
from $A(\mu)$ by reduction $q=1$ and expansion on the basis
$\prh{\lambda}$.
Then, $\underline{A}(\mu)$ is a projective character by
(\ref{defA}) and
$\{\underline{A}(\mu)\ | \ \mu \in {\rm DPR}_p(m) \}$ is a
basis of the ${\bf Q}$-vector space of self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$.
These observations and the results of \cite{LLT,Ar,Gr,LT} lead us to
formulate a conjecture relating the global basis of $V(\Lambda_n)$
and the decomposition matrices for spin characters of the groups ${\rm \widehat{S}}_m$.
Let $\mu\in {\rm DPR}_p(m)$ and let $\underline{G}(\mu)$ stand for the image of the
global basis $G(\mu)$ in ${\cal F}={\cal F}_1/{\cal N}$, that is,
\begin{equation}
\underline{G}(\mu) = \sum_{\lambda \in {\rm DP}(m)}
2^{b(\lambda) - a_p(\lambda)}
d_{\lambda\mu}(1) \prh{\lambda} \,,
\end{equation}
where $a_p(\lambda)$ is given by (\ref{AHN}) and
\begin{equation}
b(\lambda)= \left\lfloor {m - \ell(\lambda)\over 2}\right\rfloor\,.
\end{equation}
Then denote by $\underline{\underline{G}}(\mu)$ the vector obtained
by factoring out the largest power of $2$ dividing the coefficients
of $\underline{G}(\mu)$ on the basis $\prh{\lambda}$.
For simplicity of notation, we shall identify $\underline{\underline{G}}(\mu)$
with the column vector of its coordinates on $\prh{\lambda}$.
Finally, let us call reduced decomposition matrix of ${\rm \widehat{S}}_m$ in characteristic
$p$ the matrix obtained from the usual decomposition matrix for spin characters
by adding up pairs of associate columns and expanding the column vectors
so obtained on the basis $\prh{\lambda}$.
This is a matrix with $|{\rm DP}(m)|$ rows and $|{\rm DPR}_p(m)|$ columns.
The definition is illustrated in Table~\ref{TAB2} and Table~\ref{TAB3}.
(Table~\ref{TAB2} is taken from \cite{MY}, except for the column labels
which are ours and will be explained in the next section.)
\begin{table}
\caption{\label{TAB2}
The decomposition matrix of ${\rm \widehat{S}}_{10}$ in characteristic 3.}
\begin{indented}
\item[]\begin{tabular}{@{}llllllll}
\br
&(3331)&(3331)'&(4321)&(4321)'&(532)&(541)&(541)'\\
\mr
$\pr{4321}$ &0&0&1&1&0&0&0\\
$\pr{532}$ &0&0&0&0&1&0&0\\
$\pr{532}'$ &0&0&0&0&1&0&0\\
$\pr{541}$ &1&1&1&1&0&0&1\\
$\pr{541}'$ &1&1&1&1&0&1&0\\
$\pr{631}$ &2&2&1&1&0&1&1\\
$\pr{631}'$ &2&2&1&1&0&1&1\\
$\pr{64}$ &1&1&0&0&0&1&1\\
$\pr{721}$ &1&1&0&1&0&0&1\\
$\pr{721}'$ &1&1&1&0&0&1&0\\
$\pr{73}$ &1&1&1&1&0&1&1\\
$\pr{82}$ &0&0&0&0&1&0&0\\
$\pr{91}$ &1&1&1&1&0&0&0\\
$\pr{10}$ &0&1&0&0&0&0&0\\
$\pr{10}'$ &1&0&0&0&0&0&0\\
\br
\end{tabular}
\end{indented}
\end{table}
\begin{table}
\caption{\label{TAB3}
The reduced decomposition matrix of ${\rm \widehat{S}}_{10}$ in characteristic 3.}
\begin{indented}
\item[]\begin{tabular}{@{}lllll}
\br
&(3331)&(4321)&(532)&(541)\\
\mr
$\prh{4 3 2 1}$& 0 &2&0&0\\
$\prh{5 3 2}$ &0&0&1&0\\
$\prh{5 4 1}$&2&2&0&1\\
$\prh{6 3 1}$&4&2&0&2 \\
$\prh{6 4}$&2 &0&0&2\\
$\prh{7 2 1}$&2 &1 &0&1\\
$\prh{7 3}$&2 & 2&0& 2 \\
$\prh{8 2}$& 0&0&1&0\\
$\prh{9 1}$&2&2&0&0\\
$\prh{10}$&1 &0&0&0 \\
\br
\end{tabular}
\end{indented}
\end{table}
\begin{conjecture}
(i) The set of column vectors of the reduced decomposition matrix of ${\rm \widehat{S}}_m$
in odd characteristic $p$ such that $p^2 > m$
coincides with $\{\underline{\underline{G}}(\mu) \ | \ \mu \in {\rm DPR}_p(m)\}$.
(ii) For $p^2\le m$,
the reduced decomposition matrix of ${\rm \widehat{S}}_m$
is obtained by postmultiplying the matrix whose columns are
$\underline{\underline{G}}(\mu)$ by a unitriangular matrix with
nonnegative entries.
\end{conjecture}
Our conjecture has been checked on the numerical tables
computed by Morris and Yaseen ($p=3$) \cite{MY} and
Yaseen ($p=5,7,11$) \cite{Ya}.
Thus, for $p=3$, $m=11$, the columns of the reduced decomposition matrix
are
\[
\underline{\underline{G}}(3332),\
\underline{\underline{G}}(4331)+\underline{\underline{G}}(641),\
\underline{\underline{G}}(5321),\
\underline{\underline{G}}(542),\
\underline{\underline{G}}(641).
\]
\section{Labels for irreducible modular spin characters
and partition identities}
The labels for irreducible modular representations of symmetric
groups form a subset of the ordinary labels
\cite{JK}. It is therefore natural to look for a labelling scheme
for irreducible modular spin representations of
${\rm \widehat{S}}_m$ using a subset of ${\rm DP}(m)$. This was accomplished for
$p=3$ by Bessenrodt {\it et al.} \cite{BMO}, who found
that the Schur regular partitions of $m$ form a convenient
system of labels. These are the partitions
$\lambda=(\lambda_1,\ldots,\lambda_r)$ such that
$\lambda_i-\lambda_{i+1}\ge 3$ for $i=1,\ldots,r-1$, and
$\lambda_i -\lambda_{i+1}>3$ whenever $\lambda_i\equiv 0{\ \rm mod\ } 3$.
In \cite{BMO}, it was also conjectured that for $p=5$, the labels
should be the partitions $\lambda=(\lambda_1,\ldots,\lambda_r)$
satisfying the following conditions: (1) $\lambda_i>\lambda_{i+1}$
for $i\le r-1$,
(2) $\lambda_i -\lambda_{i+2} \ge 5$ for $i\le r-2$,
(3) $\lambda_i -\lambda_{i+2} > 5$ if $\lambda_i\equiv 0{\ \rm mod\ } 5$
or if $\lambda_i+\lambda_{i+1}\equiv 0{\ \rm mod\ } 5$ for $i\le r-2$,
and (4) there are no subsequences of the following types
(for some $j\ge 0$): $(5j+3,5j+2)$, $(5j+6,5j+4,5j)$,
$(5j+5,5j+1,5j-1)$, $(5j+6,5j+5,5j,5j-1)$.
This conjecture turned out to be equivalent to a
$q$-series identity conjectured long ago by Andrews
in the context of extensions of the Rogers-Ramanujan identities,
and was eventually proved by Andrews {\it et al.} \cite{ABO}.
The authors of \cite{ABO} observed however that such a labelling
scheme could not be extended to $p=7,11,13$ (see also \cite{B}).
In terms of canonical bases, the obstruction can be understood as
follows.
Assuming our conjecture and using the results of \cite{BMO,ABO},
one can see that for $p=3,5$, the labels of \cite{BMO} and \cite{ABO}
are exactly the partitions indexing the lowest nonzero entries
in the columns of the matrices $D_m(q) = [d_{\lambda\mu}(q)]_{\lambda,\mu\vdash m}$.
For example, in Table \ref{TAB1}, these are
$(10),(91),(82)$ and $(73)$, which are indeed the Schur regular partitions
of $10$. The problem is that for $p\ge 7$, it can happen that
two columns have the same partition indexing the lowest nonzero
entry. For example, with $p=7$ ($n=3$) and $m=21$, the two canonical basis
vectors
$
G(75432)
=
\ket{7 5 4 3 2}+q^{2}\ket{7 6 4 3 1}+q\ket{7 7 5 2}+q^{3}\ket{7 7 6 1}
+q^{2}\ket{8 6 4 3}+\left (q^{2}+q^{4}\right )\ket{8 6 5 2}+
q^{3}\ket{8 7 6}+q^{4}\ket{9 5 4 3}+\left (q^{4}+q^{6}\right )\ket{9 6 5 1}+
q^{5}\ket{9 7 5}
$
\noindent and
$
G(654321)
=
\ket{6 5 4 3 2 1}+q\ket{7 5 4 3 2}+ q\ket{7 6 4 3 1}+ q\ket{7 6 5 2 1}+
q^{2}\ket{7 7 4 3}+q^{2}\ket{7 7 5 2}+q^{2}\ket{7 7 6 1}+q^{3}\ket{7 7 7}+
\left (q^{3}+q^{5} \right )\ket{8 6 4 3}+
\left (q^{3}+q^{5}\right )\ket{8 6 5 2}+\left (q^{4}-q^{8}\right )\ket{8 7 6}+
\left (q^{3}+q^{5}\right )\ket{9 6 5 1}+\left (q^{4}+q^{6}\right )\ket{9 7 5}
$
\noindent have the same bottom partition $(975)$
(compare \cite{B}, end of Section 3).
On the other hand the partitions
indexing the highest nonzero entries in the columns of $D_m(q)$ are
the labels of the crystal graph (by Theorem \ref{TH}(ii)), so that
they are necessarily distinct. Therefore, we propose to use the set
\begin{eqnarray*}
{\rm DPR}_p(m) = &\{ \lambda = (\lambda_1,\ldots ,\lambda_r)\vdash m \ | \
0<\lambda_i-\lambda_{i+1}\le p \ {\rm if} \ \lambda_i \not \equiv 0 {\ \rm mod\ } p, \\
& 0 \le\lambda_i-\lambda_{i+1} < p \ {\rm if} \ \lambda_i \equiv 0 {\ \rm mod\ } p,
(1\le i \le r)\}
\end{eqnarray*}
for labelling the irreducible spin representations of ${\rm \widehat{S}}_m$
in characteristic $p$.
Indeed, its definition is equally simple for all $p$.
Moreover, because of Theorem~\ref{TH}(iii), this labelling would be compatible with
the $p$-block structure, which can be read off from the
$\overline{p}$-cores.
Also, it is adapted to the calculation of the vectors
$\underline{A}(\mu)$ which give an approximation to the
reduced decomposition matrix.
Finally, we note that since ${\rm DPR}_p$ provides the right number
of labels, we have the following partition identity
\begin{equation}\label{PARTID}
\sum_{m\ge 0} | {\rm DPR}_p(m) | t^m
=
\prod_{\scriptstyle i \ {\rm odd} \atop \scriptstyle i\not\equiv 0{\ \rm mod\ } p}
{1\over 1-t^i}
\end{equation}
which for $p=3,5$ is a counterpart to the
Schur and Andrews-Bessenrodt-Olsson identities.
This happens to be a particular case of a theorem of Andrews
and Olsson \cite{AO}.
Namely, one gets (\ref{PARTID}) by taking
$A=\{1,2,3,\ldots ,p-1\}$ and $N=p$ in Theorem~2 of~\cite{AO}.
A combinatorial proof of a refinement of the Andrews-Olsson
partition identity has been given by Bessenrodt \cite{B1}.
One can also get a direct proof of (\ref{PARTID})
without using representation theory by
simply considering the bijections (\ref{MAP}).
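Although (\ref{PARTID}) is a theorem, it is easily checked numerically for
small $m$; the short Python sketch below (illustrative only) compares
$|{\rm DPR}_p(m)|$ with the number of partitions of $m$ into odd parts prime
to $p$, that is, the coefficient of $t^m$ in the right-hand product, for $p=3$.
\begin{verbatim}
def partitions(m, max_part=None):
    if max_part is None:
        max_part = m
    if m == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(m, max_part), 0, -1)
            for rest in partitions(m - first, first)]

def in_DPR(lam, p):
    """The explicit inequalities defining DPR_p above (lambda_{r+1} = 0)."""
    lam = list(lam) + [0]
    for a, b in zip(lam, lam[1:]):
        if a % p == 0:
            if not 0 <= a - b < p:
                return False
        elif not 0 < a - b <= p:
            return False
    return True

p = 3
for m in range(1, 16):
    lhs = sum(1 for lam in partitions(m) if in_DPR(lam, p))
    rhs = sum(1 for lam in partitions(m)
              if all(x % 2 == 1 and x % p for x in lam))
    assert lhs == rhs
print("identity (PARTID) checked for p = 3 up to m = 15")
\end{verbatim}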
\section{Discussion}
We have used the level 1 $q$-deformed Fock spaces of Kashiwara {\it et al.}
to compute the canonical basis of the basic representation
of $U_q(A_{2n}^{(2)})$, and we have formulated a conjectural
relation with the decomposition matrices of the spin symmetric groups
in odd characteristic $p=2n+1$.
As in the case of $A_{n-1}^{(1)}$,
it is reasonable to expect that in general,
that is when $2n+1$ is not required to be a prime, the canonical basis
is related to a certain family of Hecke
algebras at $(2n+1)$th roots of unity. A good candidate might be the
Hecke-Clifford superalgebra introduced by Olshanski \cite{Ol}.
The case of $2n$th roots of unity should then be related
to the Fock space representation of the affine Lie algebras
of type $D_{n+1}^{(2)}$.
In particular we believe that the fact used by Benson \cite{Be}
and Bessenrodt-Olsson \cite{BO} that the 2-modular irreducible
characters of ${\rm \widehat{S}}_m$ can be identified with the 2-modular irreducible
characters of ${\rm S}_m$ corresponds in the realm of affine Lie algebras
to the isomorphism $D_2^{(2)} \simeq A_1^{(1)}$.
\section*{Acknowledgements}
We thank T. Miwa and A.O. Morris for stimulating discussions,
and G.E. Andrews for bringing references \cite{AO,B1} to
our attention.
\section*{References}
\section{Introduction}
Multigrid algorithms are effective in the solution of elliptic
problems and have found many applications, especially in fluid
mechanics \cite[e.g.]{Mavriplis97}, chemical reactions in flows
\cite[e.g.]{Sheffer98} and flows in porous media \cite{Moulton98}.
Typically, errors in a solution may decrease by a factor of 0.1 each
iteration \cite[e.g.]{Mavriplis97b}. The simple algorithms I present
decrease errors by a factor of $0.05$ (see Tables~\ref{tbl:2d}
and~\ref{tbl:3dopt} on pages~\pageref{tbl:2d}
and~\pageref{tbl:3dopt}). Further gains in the rate of convergence
may come from future research.
Conventional multigrid algorithms use a hierarchy of grids whose grid
spacings are all proportional to $2^{-\ell}$ where $\ell$ is the level
of the grid \cite[e.g.]{Zhang98}. The promising possibility I report
on here is the use of a richer hierarchy of grids with levels of the
grids oriented diagonally to other levels. Specifically, in 2D I
introduce in Section~\ref{S2d} a hierarchy of grids with grid spacings
proportional to $2^{-\ell/2}$ and with grids aligned at $45^\circ$ to
adjacent levels, see Figure~\ref{Fgrid2d}
(p\pageref{Fgrid2d}).\footnote{This paper is best viewed and printed
in colour as the grid diagrams and the ensuing discussions are all
colour coded.} In 3D the geometry of the grids is much more
complicated. In Section~\ref{S3d} we introduce and analyse a
hierarchy of 3D grids with grid spacings roughly $2^{-\ell/3}$ on the
different levels, see Figure~\ref{Famal} (p\pageref{Famal}). Now
Laplace's operator is isotropic so that its discretisation is
straightforward on these diagonally oriented grids. Thus in this
initial work I explore only the solution of Poisson's equation
\begin{equation}
\nabla^2 u=f\,.
\label{Epois}
\end{equation}
Given an approximation $\tilde u$ to a solution, each complete
iteration of a multigrid scheme seeks a correction $v$ so that
$u=\tilde u+v$ is a better approximation to a solution of Poisson's
equation~(\ref{Epois}). Consequently the update $v$ has to
approximately satisfy a Poisson equation itself, namely
\begin{equation}
\nabla^2v=r\,,
\quad\mbox{where}\quad
r=f-\nabla^2\tilde u\,,
\label{Evpois}
\end{equation}
is the residual of the current approximation. The multigrid
algorithms aim to estimate the error $v$ as accurately as possible
from the residual $r$. Accuracy in the ultimate solution $u$ is
determined by the accuracy of the spatial discretisation in the
computation of the residual $r$: here we investigate second-order and
fourth-order accurate discretisations \cite[e.g.]{Zhang98} but so far
only find remarkably rapid convergence for second-order
discretisations.
The diagonal grids employed here are perhaps an alternative to the
semi-coarsening hierarchy of multigrids used by Dendy
\cite{Dendy97} in more difficult problems.
In this initial research we only examine the simplest reasonable
V-cycle on the special hierarchy of grids and use only one Jacobi
iteration on each grid. We find in Sections~\ref{SSsr2}
and~\ref{SSsr3} that the smoothing restriction step from one grid to
the coarser diagonally orientated grid is done quite simply. Yet the
effective smoothing operator from one level to that a factor of 2
coarser, being the convolution of two or three intermediate steps, is
relatively sophisticated. One saving in using these diagonally
orientated grids is that there is no need to do any interpolation.
Thus the transfer of information from a coarser to a finer grid only
involves the simple Jacobi iterations described in
Sections~\ref{SSjp2} and~\ref{SSjp3}. Performance is enhanced within
this class of simple multigrid algorithms by a little over relaxation
in the Jacobi iteration as found in Sections~\ref{SSopt2}
and~\ref{SSopt3}. The proposed multigrid algorithms are found to be
up to twice as fast as comparably simple conventional multigrid
algorithms.
\section{A diagonal multigrid for the 2D Poisson equation}
\label{S2d}
\begin{figure}[tbp]
\centering
\includegraphics{grid2d.eps} \caption{three levels of grids in the
2D multigrid hierarchy: the dotted green grid is the finest,
spacing $h$ say; the dashed red grid is the next finest diagonal
grid with spacing $\sqrt2h$; the solid blue grid is the coarsest
shown grid with spacing $2h$. Coarser levels of the multigrid
follow the same pattern.}
\label{Fgrid2d}
\end{figure}
To approximately solve Poisson's equation~(\ref{Evpois}) in
two-dimensions we use a novel hierarchy of grids in the multigrid
method. The length scales of the grid are $2^{-\ell/2}$. If the
finest grid is aligned with the coordinate axes with grid spacing $h$
say, the first coarser grid is at $45^\circ$ with spacing $\sqrt2h$, the
second coarser is once again aligned with the axes and of spacing $2h$,
as shown in Figure~\ref{Fgrid2d}, and so on for all other levels on
the multigrid. In going from one level to the next coarser level the
number of grid points halves.
\subsection{The smoothing restriction}
\label{SSsr2}
\begin{figure}[tbp]
\centering
{\tt \setlength{\unitlength}{0.25ex}
\begin{picture}(270,130)
\thicklines {\color{red}
\put(220,60){\line(1,-1){30}}
\put(170,110){\line(1,-1){30}}
\put(220,80){\line(1,1){30}}
\put(170,30){\line(1,1){30}}
\put(250,110){\framebox(20,20){1/8}}
\put(250,10){\framebox(20,20){1/8}}
\put(150,110){\framebox(20,20){1/8}}
\put(150,10){\framebox(20,20){1/8}}
}
\put(200,60){\color{blue}\framebox(20,20){1/2}}
{\color{green}
\put(70,30){\line(0,1){30}}
\put(30,70){\line(1,0){30}}
\put(80,70){\line(1,0){30}}
\put(70,80){\line(0,1){30}}
\put(60,10){\framebox(20,20){1/8}}
\put(110,60){\framebox(20,20){1/8}}
\put(60,110){\framebox(20,20){1/8}}
\put(10,60){\framebox(20,20){1/8}}
}
\put(60,60){\color{red}\framebox(20,20){1/2}}
\end{picture}}
\caption{restriction stencils are simple weighted averages of
neighbouring grid points on all levels of the grid.}
\label{Erest2}
\end{figure}
The restriction operator smoothing the residual from one grid to the next
coarser grid is the same at all levels. It is simply a weighted
average of the grid point and the four nearest neighbours on the finer
grid as shown in Figure~\ref{Erest2}. To restrict from a fine green
grid to the diagonal red grid
\begin{equation}
r_{i,j}^{\ell-1}=\frac{1}{8}\left( 4r_{i,j}^\ell +r_{i-1,j}^\ell
+r_{i,j-1}^\ell +r_{i+1,j}^\ell +r_{i,j+1}^\ell \right)\,,
\label{Erest2r}
\end{equation}
whereas to restrict from a diagonal red grid to the coarser blue grid
\begin{equation}
r_{i,j}^{\ell-1}=\frac{1}{8}\left( 4r_{i,j}^\ell +r_{i-1,j-1}^\ell
+r_{i+1,j-1}^\ell +r_{i+1,j+1}^\ell +r_{i-1,j+1}^\ell \right)\,.
\label{Erest2b}
\end{equation}
Each of these restrictions takes $6\,\mbox{flops}$ per grid element. Thus
assuming the finest grid is $n\times n$ with $N=n^2$ grid points, the
restriction to the next coarser diagonal grid (red) takes approximately
$3N\,\mbox{flops}$, the restriction to the following coarser (blue) grid takes
approximately $3N/2\,\mbox{flops}$, etc. Thus to restrict the residuals up $\ell=2L$ levels
to the coarsest grid spacing of $H=2^Lh$ takes
\begin{equation}
K_r\approx 6N\left(1-\frac{1}{4^L}\right)\,\mbox{flops} \approx 6N\,\mbox{flops}\,.
\label{Ekrest2}
\end{equation}
In contrast a conventional nine point restriction operator from one
level to another takes $11\,\mbox{flops}$ per grid point, which then totals to
approximately $3\frac{2}{3}N\,\mbox{flops}$ over the whole conventional
multigrid hierarchy. This is somewhat better than the proposed
scheme, but we make gains elsewhere. In restricting from the green
grid to the blue grid, via the diagonal red grid, the restriction
operation is equivalent to a 17-point stencil with a much richer and
more effective smoothing than the conventional 9-point stencil.
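To make the two stencils concrete, here is a small Python sketch
(illustrative only; the realisation of the red and blue grids as the
fine-grid points with $i+j$ even, respectively $i$ and $j$ both even, and the
treatment of boundary points, are our assumptions rather than prescriptions
of this paper).
\begin{verbatim}
import numpy as np

def restrict_green_to_red(r):
    """Smooth the residual from the fine green grid onto the diagonal red
    grid (green-to-red stencil); red points are taken as i+j even."""
    out = np.zeros_like(r)
    n, m = r.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            if (i + j) % 2 == 0:                      # red point
                out[i, j] = (4*r[i, j] + r[i-1, j] + r[i, j-1]
                             + r[i+1, j] + r[i, j+1]) / 8.0
    return out

def restrict_red_to_blue(r):
    """Smooth the residual from the red grid onto the coarser blue grid
    (red-to-blue stencil); blue points have i and j both even, and their
    four diagonal neighbours are red points."""
    out = np.zeros_like(r)
    n, m = r.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            if i % 2 == 0 and j % 2 == 0:             # blue point
                out[i, j] = (4*r[i, j] + r[i-1, j-1] + r[i+1, j-1]
                             + r[i+1, j+1] + r[i-1, j+1]) / 8.0
    return out

# usage on a 65x65 fine grid of residuals
r_green = np.random.rand(65, 65)
r_red = restrict_green_to_red(r_green)
r_blue = restrict_red_to_blue(r_red)
\end{verbatim}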
\subsection{The Jacobi prolongation}
\label{SSjp2}
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{prol2d.eps}
\caption{the interpolation in a prolongation step is replaced
by simply a ``red-black'' Jacobi iteration: (a) compute the new
values at the red grid points, then refine the values at the blue
points; (b) compute the new values at the green points, then refine
those at the red points.}
\label{Fprol2d}
\end{figure}
One immediate saving is that there is no need to interpolate in the
prolongation step from one level to the next finer level. For
example, to prolongate from the blue grid to the finer diagonal red grid,
shown in Figure~\ref{Fprol2d}(a), estimate the new value of $v$ at
the red grid points on level $\ell$ by the red-Jacobi iteration
\begin{equation}
v_{i,j}^\ell=\frac{1}{4}\left( -2h^2r_{i,j}^\ell +v_{i-1,j-1}^{\ell-1}
+v_{i+1,j-1}^{\ell-1} +v_{i+1,j+1}^{\ell-1} +v_{i-1,j+1}^{\ell-1}
\right)\,,
\label{Ejacr}
\end{equation}
since the grid spacing on the red grid is $\sqrt2h$. Then the values
at the blue grid points are refined by the blue-Jacobi iteration
\begin{equation}
v_{i,j}^\ell=\frac{1}{4}\left( -2h^2r_{i,j}^\ell +v_{i-1,j-1}^\ell
+v_{i+1,j-1}^\ell +v_{i+1,j+1}^\ell +v_{i-1,j+1}^\ell \right)\,.
\label{Ejacb}
\end{equation}
A similar green-red Jacobi iteration will implicitly prolongate from
the red grid to the finer green grid shown in Figure~\ref{Fprol2d}(b).
These prolongation-iteration steps take $6\,\mbox{flops}$ per grid point.
Thus to go from the red to the green grid takes $6N\,\mbox{flops}$. As each
level of the grid has half as many grid points as the next finer, the
total operation count for the prolongation over the hierarchy from
grid spacing $H=2^Lh$ is
\begin{equation}
K_p\approx 12N\left( 1-\frac{1}{4^L} \right)\,\mbox{flops} \approx 12N\,\mbox{flops}\,.
\label{Ekprol2}
\end{equation}
The simplest (bilinear) conventional interpolation direct from the
blue grid to the green grid would take approximately $2N\,\mbox{flops}$, to be
followed by $6N\,\mbox{flops}$ for a Jacobi iteration on the fine green grid
(using simply $\nu_1=0$ and $\nu_2=1$). Over the whole hierarchy this
takes approximately $10\frac{2}{3}N\,\mbox{flops}$. This is a little smaller
than that proposed here, but the proposed diagonal method achieves
virtually two Jacobi iterations instead of just one and so is more
effective.
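A Python sketch of the blue-to-red prolongation-iteration, under the same
assumed index realisation as in the restriction sketch above, may help make
the step concrete; the analogous green-red sweep for the next refinement
differs only in which points are visited.
\begin{verbatim}
import numpy as np

def prolongate_to_red(v, r, h):
    """Jacobi 'prolongation' from the blue grid (i, j both even) to the
    diagonal red grid (i+j even): first a red-Jacobi sweep over the new
    red points (i, j both odd), then a blue-Jacobi sweep refining the
    values inherited from the blue grid.  The factor 2h^2 is the square
    of the red-grid spacing sqrt(2)h."""
    n, m = v.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            if i % 2 == 1 and j % 2 == 1:             # new red point
                v[i, j] = (-2*h*h*r[i, j] + v[i-1, j-1] + v[i+1, j-1]
                           + v[i+1, j+1] + v[i-1, j+1]) / 4.0
    for i in range(2, n - 1, 2):
        for j in range(2, m - 1, 2):                  # blue point
            v[i, j] = (-2*h*h*r[i, j] + v[i-1, j-1] + v[i+1, j-1]
                       + v[i+1, j+1] + v[i-1, j+1]) / 4.0
    return v

# usage: correction v carried over from the blue grid, residual r on red grid
h = 1.0 / 64
v = np.zeros((65, 65))
r = np.random.rand(65, 65)
v = prolongate_to_red(v, r, h)
\end{verbatim}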
\subsection{The V-cycle converges rapidly}
Numerical experiments show that although the operation count of the
proposed algorithm is a little higher than the simplest usual
multigrid scheme, the speed of convergence is much better. The
algorithm performs remarkably well on test problems such as those in
Gupta et al \cite{Gupta97}. I report a quantitative comparison
between the algorithms that shows the diagonal scheme proposed here is
about twice as fast.
Both the diagonal and usual multigrid algorithms use $7N\,\mbox{flops}$ to
compute the residuals on the finest grid. Thus the proposed method
takes approximately $25N\,\mbox{flops}$ per V-cycle of the multigrid
iteration; although this is 17\% more than the $21\frac{1}{3}N\,\mbox{flops}$
of the simplest conventional algorithm, the convergence is much faster.
Table~\ref{tbl:2d} shows the rate of convergence $\bar\rho_0\approx
0.1$ for this diagonal multigrid based algorithm. The data is
determined using \matlab's sparse eigenvalue routine to find the
largest eigenvalue and hence the slowest decay on a $65\times 65$
grid. This should be more accurate than limited analytical methods
such as a bi-grid analysis \cite{Ibraheem96}. Compared with
correspondingly simple schemes based upon the usual hierarchy of
grids, the method proposed here takes far fewer iterations, even
though each iteration is a little more expensive, and so should be
about twice as fast.
\begin{table}[tbp]
\centering
\caption{comparison of cost, in flops, and performance for various
algorithms for solving Poisson's equation in two spatial
dimensions. The column headed ``per iter'' shows the number of
flops per iteration, whereas columns showing ``per dig'' are
$\,\mbox{flops}/\log_{10}\bar\rho$ and indicate the number of flops needed
to compute each decimal digit of accuracy. The right-hand columns
show the performance for the optimal over relaxation parameter
$p$.}
\begin{tabular}{|l|r|lr|llr|}
\hline
algorithm & per iter & $\bar\rho_0$ & per dig & $p$ &
$\bar\rho$ &
per dig \\
\hline
diagonal, $\Ord{h^2}$ & $25.0N$ & .099 & $25.0N$ & 1.052 &
.052 & $19.5N$ \\
usual, $\Ord{h^2}$ & $21.3N$ & .340 & $45.5N$ & 1.121 &
.260 & $36.4N$ \\
\hline
diagonal, $\Ord{h^4}$ & $30.0N$ & .333 & $62.8N$ & 1.200 &
.200 & $42.9N$ \\
usual, $\Ord{h^4}$ & $26.3N$ & .343 & $56.6N$ & 1.216 &
.216 & $39.4N$ \\
\hline
\end{tabular}
\label{tbl:2d}
\end{table}
Fourth-order accurate solvers in space may be obtained using the above
second-order accurate V-cycle as done by Iyengar \& Goyal
\cite{Iyengar90}. The only necessary change is to compute the
residual $r$ in~(\ref{Evpois}) on the finest grid with a fourth-order
accurate scheme, such as the compact ``Mehrstellen'' scheme
\begin{eqnarray}
r_{i,j}&=&\frac{1}{12}\left( 8f_{i,j}
+f_{i+1,j} +f_{i,j+1} +f_{i-1,j} +f_{i,j-1} \right)
\nonumber\\&&{}
-\frac{1}{6h^2}\left[ -20u_{i,j}
+4\left(u_{i,j-1} +u_{i,j+1} +u_{i-1,j} +u_{i+1,j}\right)
\right.\nonumber\\&&\quad\left.{}
+u_{i+1,j+1} +u_{i-1,j+1} +u_{i-1,j-1} +u_{i+1,j-1}
\right]\,.
\label{Efos2}
\end{eqnarray}
Use the V-cycles described above to determine an approximate
correction $v$ to the field $u$ based upon these more accurate
residuals. The operation count is increased solely by the extra
computation of the residual, from $7N\,\mbox{flops}$ per iteration to
$12N\,\mbox{flops}$ (the combination of $f$ appearing on the right-hand side
of~(\ref{Efos2}) need not be computed each iteration). Numerical
experiments summarised in Table~\ref{tbl:2d} show that the multigrid
methods still converge, but the diagonal method has lost its
advantages. Thus fourth order accurate solutions to Poisson's
equation are most quickly obtained by initially using the diagonal
multigrid method applied to the second order accurate computation of
residuals. Then use a few multigrid iterations based upon the fourth
order residuals to refine the numerical solution.
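For completeness, a vectorised Python sketch of the compact fourth-order
residual~(\ref{Efos2}) follows (illustrative only; boundary rows and columns
of the output are simply left at zero here).
\begin{verbatim}
import numpy as np

def residual_mehrstellen(u, f, h):
    """Fourth-order compact (Mehrstellen) residual at interior points of a
    square grid with spacing h."""
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = ((8*f[1:-1, 1:-1] + f[2:, 1:-1] + f[1:-1, 2:]
                      + f[:-2, 1:-1] + f[1:-1, :-2]) / 12.0
                     - (-20*u[1:-1, 1:-1]
                        + 4*(u[1:-1, :-2] + u[1:-1, 2:]
                             + u[:-2, 1:-1] + u[2:, 1:-1])
                        + u[2:, 2:] + u[:-2, 2:]
                        + u[:-2, :-2] + u[2:, :-2]) / (6*h*h))
    return r
\end{verbatim}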
\subsection{Optimise parameters of the V-cycle}
\label{SSopt2}
The multigrid iteration is improved by introducing a small amount of
over relaxation.
First we considered the multigrid method applied to the second-order
accurate residuals. Numerical optimisation over a range of
introduced parameter values suggested that the simplest, most robust
and effective change was simply to introduce a parameter $p$ into the
Jacobi iterations~(\ref{Ejacr}--\ref{Ejacb}) to become
\begin{eqnarray}
v_{i,j}^\ell&=&\frac{1}{4}\left( -2ph^2r_{i,j}^\ell
+v_{i-1,j-1}^{\ell-1}
+v_{i+1,j-1}^{\ell-1} +v_{i+1,j+1}^{\ell-1} +v_{i-1,j+1}^{\ell-1}
\right)\,,
\label{Eajacr}\\
v_{i,j}^\ell&=&\frac{1}{4}\left( -2ph^2r_{i,j}^\ell +v_{i-1,j-1}^\ell
+v_{i+1,j-1}^\ell +v_{i+1,j+1}^\ell +v_{i-1,j+1}^\ell \right)\,,
\label{Eajacb}
\end{eqnarray}
on a diagonal red grid and similarly for a green grid. An optimal
value of $p$ was determined to be $p=1.052$. The parameter $p$ just
increases the weight of the residuals at each level by about 5\%.
This simple change, which does not increase the operation count,
improves the factor of convergence to $\bar\rho\approx 0.052$, which
decreases the necessary number of iterations to achieve a given
accuracy. As Table~\ref{tbl:2d} shows, this diagonal multigrid is
still far better than the usual multigrid even with its optimal choice
for over relaxation.
Then we considered the multigrid method applied to the fourth-order
accurate residuals. Numerical optimisation of the parameter $p$
in~(\ref{Eajacr}--\ref{Eajacb}) suggests that significantly more
relaxation is preferable, namely $p\approx 1.20$. With this, one
V-cycle of the multigrid method generally reduces the residuals by a
factor $\bar\rho\approx 0.200$. This simple refinement reduces the
number of iterations required by about one-third in converging to the
fourth-order accurate solution.
\section{A diagonal multigrid for the 3D Poisson equation}
\label{S3d}
The hierarchy of grids we propose for solving Poisson's
equation~(\ref{Evpois}) in three-dimensions is significantly more
complicated than that in two-dimensions. Figure~\ref{Famal} shows the
three steps between levels that will be taken to go from a fine
standard grid (green) of spacing $h$, via two intermediate grids (red
and magenta), to a coarser regular grid (blue) of spacing $2h$. As we
shall discuss below, there is some unevenness in the hierarchy that
needs special treatment.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dgrmb.eps}
\caption{one cell of an amalgam of four levels of the hierarchy of
grids used to form the multigrid V-cycle in 3D: green is the finest
grid shown; red is the next level coarser grid; magenta shows the next
coarser grid; and the blue cube is the coarsest to be shown. This
stereoscopic view is to be viewed cross-eyed as this seems to be more
robust to changes of viewing scale.}
\label{Famal}
\end{figure}
\subsection{The smoothing restriction steps}
\label{SSsr3}
The restriction operation in averaging the residuals from one grid to
the next coarser grid is reasonably straightforward.
\begin{itemize}
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dgr.eps}
\caption{the green and red grids superimposed showing the nodes
of the red grid at the corners and faces of the cube, and their
relationship to their six neighbouring nodes on the finer green grid.}
\label{Fggr}
\end{figure}
The nodes of the red grid are at the corners of the cube and the
centre of each of the faces as seen in Figure~\ref{Fggr}. They
each have six neighbours on the green grid so the natural
restriction averaging of the residuals onto the red grid is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{12}\left( 6r_{i,j,k}^\ell
+r_{i+1,j,k}^\ell +r_{i-1,j,k}^\ell +r_{i,j+1,k}^\ell
+r_{i,j-1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i,j,k+1}^\ell +r_{i,j,k-1}^\ell \right)\,,
\label{Erred}
\end{eqnarray}
for $(i,j,k)$ corresponding to the (red) corners and faces of the
coarse (blue) grid. When the fine green grid is $n\times n\times
n$ so that there are $N=n^3$ unknowns on the fine green grid, this
average takes $8\,\mbox{flops}$ for each of the approximately $N/2$ red
nodes. This operation count totals $4N\,\mbox{flops}$.
Note that throughout this discussion of restriction from the green
to blue grids via the red and magenta, we index variables using
subscripts appropriate to the fine green grid. This also holds for
the subsequent discussion of the prolongation from blue to green grids.
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3drm.eps}
\caption{the red and magenta grids superimposed showing the
nodes of the magenta grid at the corners and the centre of the
(blue) cube.}
\label{Frmag}
\end{figure}
The nodes of the next coarser grid, magenta, are at the corners and
centres of the cube as seen in Figure~\ref{Frmag}. Observe that the
centre nodes of the magenta grid are not also nodes of the finer red
grid; this causes some complications in the treatment of the two
types of nodes. The magenta nodes at the corners are connected to
twelve
neighbours on the red grid so the natural average of the residuals
is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{24}\left( 12r_{i,j,k}^\ell
+r_{i+1,j+1,k}^\ell +r_{i+1,j-1,k}^\ell
+r_{i-1,j-1,k}^\ell +r_{i-1,j+1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i+1,j,k+1}^\ell +r_{i+1,j,k-1}^\ell
+r_{i-1,j,k-1}^\ell +r_{i-1,j,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i,j+1,k+1}^\ell +r_{i,j+1,k-1}^\ell
+r_{i,j-1,k-1}^\ell +r_{i,j-1,k+1}^\ell
\right)\,,
\label{Ermagc}
\end{eqnarray}
for $(i,j,k)$ corresponding to the magenta corner nodes. This
average takes $14\,\mbox{flops}$ for each of $N/8$ nodes. The magenta
node at the centre of the coarse (blue) cube is not connected to
red nodes by the red grid, see Figure~\ref{Frmag}. However, it
has six red nodes in close proximity, those at the centres
of the faces, so the natural average is
\begin{equation}
r_{i,j,k}^{\ell-1}=\frac{1}{6}\left( r_{i+1,j,k}^\ell
+r_{i-1,j,k}^\ell
+r_{i,j+1,k}^\ell +r_{i,j-1,k}^\ell +r_{i,j,k+1}^\ell
+r_{i,j,k-1}^\ell
\right)\,,
\label{Ermagm}
\end{equation}
for $(i,j,k)$ corresponding to the magenta centre nodes. This
averaging takes $6\,\mbox{flops}$ for each of $N/8$ nodes. The operation
count for all of this restriction step from red to magenta is
$2\frac{1}{2}N\,\mbox{flops}$.
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dmb.eps}
\caption{the magenta and blue grids superimposed showing the
common nodes at the corners of the blue grid and the
connections to
the magenta centre node.}
\label{Frblu}
\end{figure}
The nodes of the coarse blue grid are at the corners of the shown
cube, see Figure~\ref{Frblu}. On the magenta grid they are
connected to eight neighbours, one for each octant, so the natural
average of residuals from the magenta to the blue grid is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{16}\left( 8r_{i,j,k}^\ell
+r_{i+1,j+1,k+1}^\ell +r_{i+1,j+1,k-1}^\ell
+r_{i+1,j-1,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i+1,j-1,k-1}^\ell
+r_{i-1,j+1,k+1}^\ell +r_{i-1,j+1,k-1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i-1,j-1,k+1}^\ell +r_{i-1,j-1,k-1}^\ell
\right)\,,
\label{Erblu}
\end{eqnarray}
for $(i,j,k)$ corresponding to the blue corner nodes. This
averaging takes $10\,\mbox{flops}$ for each of $N/8$ blue nodes which thus
totals $1\frac{1}{4}N\,\mbox{flops}$.
\end{itemize}
These three restriction steps, to go up three levels of grids, thus
total approximately $7\frac{3}{4}N\,\mbox{flops}$. Hence, the entire
restriction process, averaging the residuals, from a finest grid of
spacing $h$ up $3L$ levels to the coarsest grid of spacing $H=2^Lh$
takes
\begin{equation}
K_r\approx\frac{62}{7}N\left( 1-\frac{1}{8^L} \right)\,\mbox{flops} \approx
{\textstyle 8\frac{6}{7}}N\,\mbox{flops}\,.
\label{Ekrest3}
\end{equation}
The simplest standard one-step restriction direct from the fine green
grid to the blue grid takes approximately $3\frac{3}{4}N\,\mbox{flops}$.
Over the whole hierarchy this totals $4\frac{2}{7}N\,\mbox{flops}$ which is
roughly half that of the proposed method. We anticipate that rapid
convergence of the V-cycle makes the increase worthwhile.
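The three restriction steps itemised above can be prototyped with a single
helper; in the Python sketch below the grids are realised on one common
fine-grid index set (green: all points, red: $i+j+k$ even, magenta: indices
all even or all odd, blue: indices all even), a concrete choice made here for
illustration and not prescribed by the paper.
\begin{verbatim}
import numpy as np

def restrict3d(r, centre_weight, offsets, is_target, denom):
    """Weighted average of r over the given index offsets, evaluated at
    the interior points where is_target(i, j, k) is true."""
    out = np.zeros_like(r)
    n = r.shape[0]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for k in range(1, n - 1):
                if is_target(i, j, k):
                    s = centre_weight * r[i, j, k]
                    for di, dj, dk in offsets:
                        s += r[i + di, j + dj, k + dk]
                    out[i, j, k] = s / denom
    return out

axis6 = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
plane12 = ([(a, b, 0) for a in (1, -1) for b in (1, -1)]
           + [(a, 0, b) for a in (1, -1) for b in (1, -1)]
           + [(0, a, b) for a in (1, -1) for b in (1, -1)])
cube8 = [(a, b, c) for a in (1, -1) for b in (1, -1) for c in (1, -1)]

red = lambda i, j, k: (i + j + k) % 2 == 0
mag_corner = lambda i, j, k: i % 2 == 0 and j % 2 == 0 and k % 2 == 0
mag_centre = lambda i, j, k: i % 2 == 1 and j % 2 == 1 and k % 2 == 1
blue = mag_corner

r_green = np.random.rand(17, 17, 17)
r_red = restrict3d(r_green, 6, axis6, red, 12)          # green -> red
r_mag = restrict3d(r_red, 12, plane12, mag_corner, 24)  # red -> magenta corners
r_mag += restrict3d(r_red, 0, axis6, mag_centre, 6)     # red -> magenta centres
r_blue = restrict3d(r_mag, 8, cube8, blue, 16)          # magenta -> blue
\end{verbatim}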
\subsection{The Jacobi prolongation steps}
\label{SSjp3}
As in 2D, with this rich structure of grids we have no need to
interpolate when prolongating from a coarse grid onto a finer grid; an
appropriate ``red-black'' Jacobi iteration of the residual
equation~(\ref{Evpois}) avoids interpolation. Given an estimate of
corrections $v_{i,j,k}^\ell$ at some blue level grid we proceed to the
finer green grid via the following three prolongation steps.
\begin{itemize}
\item Perform a magenta-blue Jacobi iteration on the nodes of the
magenta grid shown in Figure~\ref{Frblu}. See that each node on
the magenta grid is connected to eight neighbours distributed
symmetrically about it, each contributes to an estimate of the
Laplacian at the node. Thus, given initial approximations on the
blue nodes from the coarser blue grid,
\begin{eqnarray}
v_{i,j,k}^\ell&=&\frac{1}{8}\left( -4p_mh^2r_{i,j,k}^\ell
+v_{i+1,j+1,k+1}^{\ell-1} +v_{i+1,j+1,k-1}^{\ell-1}
+v_{i+1,j-1,k+1}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j-1,k-1}^{\ell-1}
+v_{i-1,j+1,k+1}^{\ell-1} +v_{i-1,j+1,k-1}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i-1,j-1,k+1}^{\ell-1} +v_{i-1,j-1,k-1}^{\ell-1}
\right)\,,
\label{Emprolm}
\end{eqnarray}
for $(i,j,k)$ on the centre magenta nodes. The following blue-Jacobi
iteration uses these updated values in the similar formula
\begin{eqnarray}
v_{i,j,k}^\ell&=&\frac{1}{8}\left( -4p_mh^2r_{i,j,k}^\ell
+v_{i+1,j+1,k+1}^{\ell} +v_{i+1,j+1,k-1}^{\ell}
+v_{i+1,j-1,k+1}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j-1,k-1}^{\ell}
+v_{i-1,j+1,k+1}^{\ell} +v_{i-1,j+1,k-1}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i-1,j-1,k+1}^{\ell} +v_{i-1,j-1,k-1}^{\ell}
\right)\,,
\label{Emprolb}
\end{eqnarray}
for $(i,j,k)$ on the corner blue nodes. In these formulae the
over relaxation parameter $p_m$ has been introduced for later fine
tuning; initially take $p_m=1$. The operation count for this
magenta-blue Jacobi iteration is $10\,\mbox{flops}$ on each of $N/4$ nodes
giving a total of $2\frac{1}{2}N\,\mbox{flops}$.
\item Perform a red-magenta Jacobi iteration on the nodes of the
red grid shown in Figure~\ref{Frmag}. However, because the centre
node (magenta) is not on the red grid, two features follow: it is
not updated in this prolongation step; and it introduces a little
asymmetry into the weights used for values at the nodes. The red
nodes in the middle of each face are surrounded by four magenta
nodes at the corners and two magenta nodes at the centres of the
cube. However, the nodes at the centres are closer and so have twice
the weight in the estimate of the Laplacian. Hence, given initial
approximations on the magenta nodes from the coarser grid,
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{8}\left( -2p_{r1}h^2r_{i,j,k}^\ell
+2\left[v_{i,j,k+1}^{\ell-1}+v_{i,j,k-1}^{\ell-1}\right]
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j+1,k}^{\ell-1} +v_{i+1,j-1,k}^{\ell-1}
+v_{i-1,j-1,k}^{\ell-1} +v_{i-1,j+1,k}^{\ell-1}
\right)\,,
\label{Erprolr}
\end{eqnarray}
for $(i,j,k)$ corresponding to the red nodes on the centre of
faces normal to the $z$-direction. Similar formulae apply for red
nodes on other faces, obtained by cyclically permuting the roles of the indices.
The over relaxation parameters $p_{r1}$ and $p_{r2}$ are
introduced for later fine tuning; initially take
$p_{r1}=p_{r2}=1$. The following magenta-Jacobi iteration uses
these updated values. Each magenta corner node in
Figure~\ref{Frmag} is connected to twelve red nodes and so is
updated according to
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{12}\left(
-4p_{r2}h^2r_{i,j,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j+1,k}^\ell +v_{i+1,j-1,k}^\ell
+v_{i-1,j-1,k}^\ell +v_{i-1,j+1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j,k+1}^\ell +v_{i+1,j,k-1}^\ell
+v_{i-1,j,k-1}^\ell +v_{i-1,j,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j+1,k+1}^\ell +v_{i,j+1,k-1}^\ell
+v_{i,j-1,k-1}^\ell +v_{i,j-1,k+1}^\ell
\right)\,,
\label{Erprolm}
\end{eqnarray}
for all $(i,j,k)$ corresponding to corner magenta nodes.
The operation count for this red-magenta Jacobi iteration
is $9\,\mbox{flops}$ on each of $3N/8$ nodes and $14\,\mbox{flops}$ on each
of $N/8$ nodes. These total $5\frac{1}{8}N\,\mbox{flops}$.
\item Perform a green-red Jacobi iteration on the nodes of
the fine green grid shown in Figure~\ref{Fggr}. The green
grid is a standard rectangular grid so the Jacobi
iteration is also standard. Given initial approximations on
the red nodes from the coarser red grid,
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{6}\left( -p_gh^2r_{i,j,k}^\ell
+v_{i+1,j,k}^{\ell-1} +v_{i-1,j,k}^{\ell-1}
+v_{i,j+1,k}^{\ell-1} +v_{i,j-1,k}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j,k+1}^{\ell-1} +v_{i,j,k-1}^{\ell-1} \right)\,,
\label{Egprolg}
\end{eqnarray}
for $(i,j,k)$ corresponding to the green nodes (edges and centre
of the cube). The over relaxation parameter $p_g$, initially
$p_g=1$, is introduced for later fine tuning. The red-Jacobi
iteration uses these updated values in the similar formula
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{6}\left( -p_gh^2r_{i,j,k}^\ell
+v_{i+1,j,k}^{\ell} +v_{i-1,j,k}^{\ell}
+v_{i,j+1,k}^{\ell} +v_{i,j-1,k}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j,k+1}^{\ell} +v_{i,j,k-1}^{\ell} \right)\,,
\label{Egprolr}
\end{eqnarray}
for the red nodes in Figure~\ref{Fggr}. This prolongation
step is a standard Jacobi iteration and takes $8\,\mbox{flops}$ on
each of $N$ nodes for a total of $8N\,\mbox{flops}$.
\end{itemize}
These three prolongation steps together thus total
$15\frac{5}{8}N\,\mbox{flops}$. To prolongate over $\ell=3L$ levels
from the coarsest grid of spacing $H=2^Lh$ to the finest grid thus takes
\begin{equation}
K_p\approx\frac{125}{7}N\left(1-\frac{1}{8^L}\right)\,\mbox{flops} \approx
{\textstyle 17\frac{6}{7}}N\,\mbox{flops}\,.
\label{Ekprol3}
\end{equation}
The simplest trilinear interpolation direct from the blue grid to the
green grid would take approximately $3\frac{1}{4}N\,\mbox{flops}$, to be
followed by $8N\,\mbox{flops}$ for a Jacobi iteration on the fine green grid.
Over the whole hierarchy this standard prolongation takes
approximately $12\frac{6}{7}N\,\mbox{flops}$. This total is smaller, but the
proposed diagonal grid achieves virtually three Jacobi
iterations instead of one and so is more effective.
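As an illustration, the last of the three prolongation steps, the green-red
Jacobi sweep (\ref{Egprolg})--(\ref{Egprolr}), is sketched below in Python
under the same assumed index realisation as before; the magenta-blue and
red-magenta sweeps follow the same pattern with the stencils of
(\ref{Emprolm})--(\ref{Erprolm}).
\begin{verbatim}
import numpy as np

def jacobi_green_red(v, r, h, p_g=1.0):
    """Green-red Jacobi prolongation step: a standard 7-point sweep, first
    over the green-only nodes (i+j+k odd in the assumed index realisation),
    then over the red nodes (i+j+k even) using the fresh green values."""
    n = v.shape[0]
    for parity in (1, 0):                  # green nodes first, then red
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                for k in range(1, n - 1):
                    if (i + j + k) % 2 == parity:
                        v[i, j, k] = (-p_g*h*h*r[i, j, k]
                                      + v[i+1, j, k] + v[i-1, j, k]
                                      + v[i, j+1, k] + v[i, j-1, k]
                                      + v[i, j, k+1] + v[i, j, k-1]) / 6.0
    return v

# usage with the arrays from the restriction sketch above
h = 1.0 / 16
v = np.zeros((17, 17, 17))
v = jacobi_green_red(v, np.random.rand(17, 17, 17), h, p_g=0.99)
\end{verbatim}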
\subsection{The V-cycle converges well}
Numerical experiments show that, as in 2D, although the operation
count of the proposed algorithm is a little higher, the speed of
convergence is much better. Both algorithms use $9N\,\mbox{flops}$ to compute
second-order accurate residuals on the finest grid. Thus the proposed
method takes approximately $35\frac{5}{7}N\,\mbox{flops}$ for one V-cycle,
some 37\% more than the $26\frac{1}{7}N\,\mbox{flops}$ of the simplest
standard algorithm. It achieves a mean factor of convergence
$\bar\rho\approx0.140$. This rapid rate of convergence easily
compensates for the small increase in computations, taking half the
number of flops per decimal digit of accuracy determined.
\begin{table}[tbp]
\centering
\caption{comparison of cost, in flops, and performance for
unoptimised
algorithms for solving Poisson's equation in three spatial
dimensions on a $17^3$ grid. The column headed ``per iter'' shows
the number of
flops per iteration, whereas column showing ``per dig'' is
$\,\mbox{flops}/\log_{10}\bar\rho_0$ and indicates the number of flops needed
to compute each decimal digit of accuracy.}
\begin{tabular}{|l|r|lr|}
\hline
algorithm & per iter & $\bar\rho_0$ & per dig \\
\hline
diagonal, $\Ord{h^2}$ & $35.7N$ & 0.140 & $42N$ \\
usual, $\Ord{h^2}$ & $26.1N$ & 0.477 & $81N$ \\
\hline
diagonal, $\Ord{h^4}$ & $48.7N$ & 0.659 & $269N$ \\
usual, $\Ord{h^4}$ & $39.1N$ & 0.651 & $210N$ \\
\hline
\end{tabular}
\label{tbl:3d}
\end{table}
As in 2D, fourth-order accurate solvers may be obtained simply by
using the above second-order accurate V-cycle on the fourth-order
accurate residuals evaluated on the finest grid. A compact
fourth-order accurate scheme for the residuals is the 19~point
formula
\begin{eqnarray}
r_{i,j,k}&=&\frac{1}{12}\left( 6f_{i,j,k}
+f_{i+1,j,k} +f_{i,j+1,k} +f_{i-1,j,k} +f_{i,j-1,k}
+f_{i,j,k+1}
+\right.\nonumber\\&&\quad\left.{}
+f_{i,j,k-1}
\right)
-\frac{1}{6h^2}\left[ -24 u_{i,j,k}
+2\left(u_{i,j-1,k} +u_{i,j+1,k} +u_{i-1,j,k}
+\right.\right.\nonumber\\&&\quad\left.\left.{}
+u_{i+1,j,k}
+u_{i,j,k+1} +u_{i,j,k-1} \right)
+u_{i+1,j+1,k} +u_{i-1,j+1,k}
+\right.\nonumber\\&&\quad\left.{}
+u_{i-1,j-1,k} +u_{i+1,j-1,k}
+u_{i,j+1,k+1} +u_{i,j+1,k-1} +u_{i,j-1,k-1}
+\right.\nonumber\\&&\quad\left.{}
+u_{i,j-1,k+1}
+u_{i+1,j,k+1} +u_{i-1,j,k+1} +u_{i-1,j,k-1} +u_{i+1,j,k-1}
\right]\,.
\label{Efos4}
\end{eqnarray}
Then using the V-cycle described above to determine corrections $v$ to
the field $u$ leads to an increase in the operation count of
$13N\,\mbox{flops}$ solely from the extra computation in finding the finest
residuals. Numerical experiments show that the multigrid iteration
still converges, albeit more slowly, with $\bar\rho\approx 0.659$.
Table~\ref{tbl:3d} shows that the rate of convergence on the diagonal
hierarchy of grids is little different from that for the simplest
usual multigrid algorithm. As in 2D, high-accuracy, fourth-order
solutions to Poisson's equation are best found by employing a first
stage that finds second-order accurate solutions, which are then refined
in a second stage.
\subsection{Optimise parameters of the V-cycle}
\label{SSopt3}
As in 2D, the multigrid algorithms are improved by introducing some
relaxation in the Jacobi iterations. The four parameters $p_m$,
$p_{r1}$, $p_{r2}$ and $p_g$ were introduced in the Jacobi iterations
(\ref{Emprolm}--\ref{Egprolr}) to do this; values bigger than 1
correspond to some over relaxation.
\begin{table}[tbp]
\centering
\caption{Comparison of cost, in flops, and performance of
optimised algorithms for solving Poisson's equation in three
spatial dimensions on a $17^3$ grid, varying the over-relaxation
parameters to find the best rate of convergence. The column
headed ``per iter'' shows the number of flops per iteration,
whereas the column headed ``per dig'' is $-\mbox{flops}/\log_{10}\bar\rho$
and indicates the number of flops needed to compute each decimal
digit of accuracy.}
\begin{tabular}{|l|r|lllllr|}
\hline
algorithm & per iter & $p_{m}$ & $p_{r1}$
& $p_{r2}$ & $p_{g}$ & $\bar\rho$ &
per dig \\
\hline
diag, $\Ord{h^2}$ & $35.7N$ & 1.11 & 1.42 & 1.08 &
0.99 & 0.043 & $26N$ \\
usual, $\Ord{h^2}$ & $26.1N$ & & & & 1.30 & 0.31 &
$51N$ \\
\hline
diag, $\Ord{h^4}$ & $48.7N$ & 0.91 & 0.80 & 0.70 &
1.77 & 0.39 & $119N$ \\
usual, $\Ord{h^4}$ & $39.1N$ & & & & 1.70 & 0.41 &
$101N$ \\
\hline
\end{tabular}
\label{tbl:3dopt}
\end{table}
The search for the optimum parameter set used the Nelder-Mead simplex
method encoded in the procedure \textsc{fmins} in \matlab{}. Searches
were started from optimum parameters found for coarser grids. As
tabulated in Table~\ref{tbl:3dopt} the optimum parameters on a $17^3$
grid\footnote{Systematic searches on a finer grid were infeasible
within one day's computer time due to the large number of unknowns:
approximately 30,000 components occur in the eigenvectors on a $33^3$
grid.} were $p_m=1.11$, $p_{r1}=1.42$, $p_{r2}=1.08$ and $p_g=0.99$
and achieve an astonishingly fast rate of convergence of
$\bar\rho\approx 0.043$. This ensures convergence to a specified
precision at half the cost of the similarly optimised, simple
conventional multigrid algorithm.
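As a rough modern analogue of the \textsc{fmins} search, the following Python
sketch uses the Nelder--Mead method from SciPy; the routine
\texttt{estimate\_rho}, which would run a few V-cycles on a $17^3$ grid and
measure the mean error-reduction factor, is a hypothetical placeholder and is
not shown.
\begin{verbatim}
from scipy.optimize import minimize

def objective(params):
    # mean convergence factor for one parameter set;
    # estimate_rho is a hypothetical routine running a few V-cycles
    p_m, p_r1, p_r2, p_g = params
    return estimate_rho(p_m, p_r1, p_r2, p_g)

# start from the optimum found on a coarser grid, as described in the text
result = minimize(objective, x0=[1.1, 1.4, 1.1, 1.0], method="Nelder-Mead")
print(result.x, result.fun)   # optimised (p_m, p_r1, p_r2, p_g) and rho
\end{verbatim}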
For the fourth-order accurate residuals an optimised diagonal
multigrid performs similarly to the optimised conventional multigrid
with a rate of convergence of $\bar\rho\approx 0.39$. Again fourth
order accuracy is best obtained after an initial stage in which second
order accuracy is used.
\section{Conclusion}
The use of a hierarchy of grids at angles to each other can halve the
cost of solving Poisson's equation to second order accuracy in grid
spacing. Each iteration of the optimised \emph{simplest} multigrid
algorithm decreases errors by a factor of at least 20. This is true
in both two and three dimensional problems. Further research is
needed to investigate the effectiveness of extra Jacobi iterations at each
level of the diagonal grid.
When compared with the amazingly rapid convergence obtained for the
second order scheme, the rate of convergence when using the fourth
order residuals is relatively pedestrian. This suggests that a
multigrid V-cycle specifically tailored to these diagonal grids for
the fourth order accurate problem may improve convergence markedly.
There is more scope for W-cycles to be effective using these diagonal
grids because there are many more levels in the multigrid hierarchy.
An exploration of this aspect of the algorithm is also left for
further research.
\paragraph{Acknowledgement:} This research has been
supported by a grant from the Australian Research Council.
\bibliographystyle{plain}
\section{Introduction}\medskip
A hallmark of the animal brain is its capability to form decisions from sensory inputs and thereby guide meaningful behavioral responses. Understanding the relationship between behavioral responses and how they are encoded in the brain is a major goal of neuroscience.
To this end, behavior training of nonhuman primates has been studied in a variety of decision tasks, such as perceptual discrimination \citep{shadlen2001neural}.
These electrophysiological experiments have shown that neural signals at the single-neuron level are correlated with specific aspects of decision computation. However, in the mammalian brain, a decision is made not by a single neuron but by the collective dynamics of neural circuits. Unfortunately, animal experiments do not allow us to access all of the relevant neural circuits in the brain. To address this problem, neural circuit modeling with recurrent neural networks has been used to uncover circuit mechanisms underlying complex behaviors \citep{mante2013context}.
The contributions of the prefrontal cortex-basal ganglia circuit to complex behaviors are still not completely understood. A wide array of evidence~\citep{o2004dissociable,sohal2009parvalbumin} shows that the prefrontal cortex-basal ganglia circuit appears to implement an RL algorithm driven by a reward prediction error (RPE). This RPE signal, conveyed by dopamine, is thought to gate Hebbian synaptic plasticity in the striatum \citep{montague1996framework}. Over the last decade, many explicit RL models have been developed to understand the functions of dopamine and prefrontal cortex-basal ganglia circuits~\citep{cohen2009neurocomputational,maia2009reinforcement}. Recent functional magnetic resonance imaging (fMRI) studies in humans revealed that activation in the hippocampus, a central structure for storing episodic memory \citep{paller2002observing}, is modulated by reward, demonstrating a link between episodic memory and RL \citep{wittmann2005reward,krebs2009novelty}. However, existing RL models do not take into account the effect of episodic memory, which is necessary for modeling the circuits underlying decision-making.
In this paper, we construct an Actor-Critic framework (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{right}) based on RL theories of prefrontal cortex-basal ganglia systems (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{left}) and on RL algorithms for artificial systems. The Actor-Critic framework is modeled with recurrent neural networks, a natural class of models for studying mechanisms in systems neuroscience because they are both dynamical and computational \citep{mante2013context}.
This framework was trained on two classical decision tasks, \textit{i.e.}, the random dots motion (RDM) direction discrimination task \citep{roitman2002response} and the value-based economic choice task \citep{padoa2006neurons}. In the RDM task, a monkey is asked to judge the net direction (left or right) of a field of moving dots (\textcolor{blue}{Fig.~\ref{rdm_task}}a). We show that the agent reproduces the qualitative results, that is, the behavioral data generated by our framework can be fitted with: (i) a psychometric function, a tool for analyzing the relationship between accuracy and stimulus strength (\textcolor{blue}{Fig.~\ref{rdm_task}}b, top), and (ii) a chronometric function, a tool for analyzing the relationship between response time and stimulus strength (\textcolor{blue}{Fig.~\ref{rdm_task}}b, bottom). In the value-based economic choice task, a monkey is asked to choose between two types of juice offered in different amounts (\textcolor{blue}{Fig.~\ref{fig3}}).
The activity of units in the critic network shows similar types of response to those observed in the orbitofrontal cortex of monkeys (\textcolor{blue}{Fig.~\ref{fig4}}). These results confirm that our framework can serve as a platform for studying diverse cognitive computations and mechanisms.
Moreover, anatomical and electrophysiological studies in animals, including humans, suggest that episodic memory in the hippocampus is critical for adaptive behavior. In particular, recent research suggests that the hippocampus supports deliberation during the value-based economic choice task \citep{bakkour2019hippocampus}. Our computational framework also supports this experimental conclusion (\textcolor{blue}{Fig.~\ref{fig5}}). Yet how the brain selects experiences, from many possible options, to govern decisions has remained an open question. To address this gap, we investigated which episodic memories should be accessed to govern future decisions by conducting experiments on this validated Actor-Critic framework in Section~\ref{investigate}. The results show that salient events sampled from episodic memory shorten deliberation time more effectively than common events in the decision-making process, suggesting that salient events stored in the hippocampus could be prioritized to propagate reward information and guide decisions.
\section{Background}\medskip
In the present work, we first trained our RNN-based Actor-Critic model on two classical decision tasks, and then conducted experiments on this optimized model to explore how episodic memory governs decision-making. The framework we designed is based on the four assumptions listed below:
1. \textbf{Actor-critic architecture for RL in biological systems.} This assumption states that the cortex-basal ganglia circuit (PFC-BG) can be modeled as an actor-critic architecture \citep{dayan2002reward,o2004dissociable,haber2014place}. In this process, the midbrain dopamine neurons play a central role, coding a reinforcement prediction error. The actor-critic view of action selection in the brain suggests that the dorsal striatum in the PFC-BG circuit is responsible for learning stimulus-response associations and can be thought of as the `actor' in the actor-critic architecture. The ventral striatum in the basal ganglia, together with the cortex, mainly learns state values, which is akin to the `critic'~\citep{maia2009reinforcement,maia2010two}.
2. \textbf{Recurrent neural networks reproduce neural population dynamics.} This assumption states that we can conceptualize the PFC-BG system using recurrent neural networks (RNNs), for both the actor and the critic. RNNs are a class of artificial neural networks (ANNs) with feedback connections, which have been successfully applied in both artificial intelligence~\citep{ijcai2018-98,liu2019GPN,10.1145/3390891} and computational neuroscience. There are many essential similarities between RNNs and biological neural circuits: First, RNN units are nonlinear and numerous. Second, the units have feedback connections, which allows them to generate temporal dynamic behavior within the circuit. Third, individual units are simple, so they need to work together in a parallel and distributed manner to implement complex computations. Both the dynamical and computational features of RNNs make them an ideal model for studying the mechanisms of systems neuroscience \citep{rajan2016recurrent,sussillo2014neural,mante2013context}. Since the basal ganglia can perform dynamic gating via reinforcement learning mechanisms (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{left}), here we consider more sophisticated units, i.e., gated recurrent units (GRUs), to implement this gating mechanism.
3. \textbf{Episodic memory contributes to decision-making process.} This assumption states that episodic memory, depending crucially on the
hippocampus and surrounding medial temporal lobe
(MTL) cortices, can be used as a complementary system for reinforcement learning to influence decisions. First, in addition to its role in remembering the past, the MTL also supports the ability to imagine specific episodes in the future \citep{hassabis2007patients}, with direct implications for decision making \citep{peters2010episodic}. Second, episodic memories are constructed in a way that allows relevant elements of a past event to guide future decisions \citep{shohamy2008integrating}.
4. \textbf{There are two different forms of learning in biological systems: slow learning and fast learning.} A large body of evidence suggests that cortex-basal ganglia circuits implement reinforcement learning \citep{frank2004carrot}. Hence, the synaptic weights of dopamine targets (the striatum in the BG) in the circuit, including the PFC network, can be modulated by a model-free RL procedure. This method of incremental parameter adjustment makes it a slow form of learning. On the other hand, as mentioned above, episodic memories stored in the hippocampus impact reward-based learning, suggesting that the hippocampus can serve as a supplementary system for reinforcement learning. From this, episodic memories in a replay buffer (serving a function similar to the hippocampus) can be used to estimate the value of actions and states to guide reward-based decision-making \citep{wimmer2014episodic}, which is a fast form of learning.
These assumptions are all based on existing research. For demonstration, we abstract the neural basis of RL in biological systems (\textcolor{blue}{Fig.~\ref{fig1}} \textit{left}) into a simple computational model (\textcolor{blue}{Fig.~\ref{fig1}} \textit{right}), an actor-critic architecture equipped with episodic memory, in which the actor network leverages noisy and incomplete perceptual information about the environment to make a choice, while the critic network outputs the value of the selected option. We exploit recent advances in deep RL, specifically the application of the policy gradient algorithm to RNNs \citep{bakker2002reinforcement}, to train our model to perform decision-making tasks.
\section{Methods}\medskip
\subsection{Computational Model}\smallskip
\textbf{RNN unit.} The Actor architecture used in our framework, which represents a particular RNN form, is depicted in \textcolor{blue}{Fig.~\ref{fig1}}c.
RNNs have been introduced into systems neuroscience to describe the average firing rate of neural populations within a biological context \citep{wilson1972excitatory}. A general definition of an RNN unit is given by \cite{sussillo2014neural}:
\begin{align}
\tau \frac{\mathrm d \bm{\mathrm x}}{\mathrm d t}=-\bm{\mathrm x}+{\bm{\mathrm W}}_{rec}\bm{\mathrm r}+{\bm{\mathrm W}}_{in}{\bm{\mathrm u}}+\bm {\mathrm b},
\label{eq:general-rnn}
\end{align}%
where $\bm{\mathrm x}$ is a vector whose $i$th component ${x}_i$ can be viewed as the sum of the filtered synaptic currents at the soma of a biological neuron. The variable ${r}_i$ denotes the instantaneous, positive `firing rate', obtained through the threshold-linear activation function $[x]^{+}=\max(0,x)$; the vector $\bm{\mathrm u}$ represents the external inputs provided to the network; $b_i$ is the bias each unit in the network receives; and the time constant ${\mathrm \tau}$ sets the timescale of the network. In our model, we use gated recurrent units (GRUs), a variant of the RNN architecture introduced by \cite{Chung2014Empirical}. GRUs use gating mechanisms to control and manage the flow of information between cells in the neural network. There are two main reasons for using GRUs: (1) since the basal ganglia in the brain can perform dynamic gating via RL mechanisms, this gating mechanism can be implemented using GRUs; (2) a parallel neural system allows biological agents to solve learning problems on different timescales, and learning with multiple timescales has been shown to improve performance and speed up learning in theoretical and modeling studies \citep{o2006making, neil2016phased}. This multiplicity of timescales is also an important feature of GRUs, as indicated by \cite{Chung2014Empirical}, in which each unit learns to adaptively capture dependencies over different time scales. In this work, we slightly modify the GRUs in line with Equation~(\ref{eq:general-rnn}). A continuous-time form of the modified GRUs is described as follows.
\begin{equation}
\begin{split}
\bm{\mathrm \alpha} &=\sigma(\bm{\mathrm W}^\alpha_{rec}{\bm{\mathrm r}}+\bm{\mathrm W}^\alpha_{in}{\bm{\mathrm u}}+\bm{\mathrm b}^\alpha),\\
\bm{\mathrm \beta} &=\sigma(\bm{\mathrm W}^\beta_{rec}{\bm{\mathrm r}}+\bm{\mathrm W}^\beta_{in}\bm{\mathrm u}+\bm{\mathrm b}^\beta),\\
{\mathrm \tau}{\frac{\mathrm d {\bm{\mathrm x}}}{\mathrm d {\bm{t}}}} &=-{\bm{ \alpha}} \circ {\bm{\mathrm x}}+{\bm{{ \alpha}}} \circ (\bm{\mathrm W}_{rec}(\bm{\beta} \circ {\bm{\mathrm r}})+\bm{\mathrm W}_{in}{\bm{\mathrm u}}+{\bm{\mathrm b}}+\sqrt{2\mathrm \tau { k_{rec}^2}}{\bm{\mathrm \xi}}),\\
{\bm{\mathrm r}}&=[{\bm{\mathrm x}}]^{+}
\end{split}
\label{equ:2}
\end{equation}
where $\circ$ denotes the Hadamard product and $\sigma(x)=\frac{1}{1+e^{-x}}$ is the sigmoid function. The vector $\bm{\mathrm \xi}$ consists of independent Gaussian white noise processes scaled by $k_{rec}$, representing noise intrinsic to the RNN. The matrices $\bm{\mathrm W}^\alpha_{rec}$, $\bm{\mathrm W}^\beta_{rec}$, and $\bm{\mathrm W}_{rec}$ are $N\times N$ weight matrices of recurrent connections, while $\bm{\mathrm W}^\alpha_{in}$, $\bm{\mathrm W}^\beta_{in}$, and $\bm{\mathrm W}_{in}$ are $N\times N_{in}$ weight matrices of connections from the input units to the recurrent units. The vectors ${\bm{\mathrm b}}^\alpha$, ${\bm{\mathrm b}}^\beta$, and ${\bm{\mathrm b}}$ are biases.
The threshold-linear activation function $[x]^{+}$ guarantees that Equation~(\ref{equ:2}) is a nonlinear dynamical system. These leaky threshold-linear units in the GRUs are modulated by the time constant $\mathrm \tau$, with an update gate $\bm{\alpha}$ and a reset gate $\bm{\beta}$. Based on the GRU dynamics defined above, the following section provides a detailed description of the Actor-Critic model.
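As a minimal illustration of Equation~(\ref{equ:2}), the following Python
sketch advances the modified GRU by one forward-Euler step; the function name,
the parameter dictionary, and the step size are our own assumptions rather
than part of the original implementation.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, u, p, tau=50.0, dt=10.0, k_rec2=0.01, rng=np.random):
    # one forward-Euler step of the continuous-time GRU in Eq. (2);
    # dt is an assumed integration step, not taken from the paper
    r = np.maximum(0.0, x)                                           # [x]^+
    alpha = sigmoid(p["W_rec_a"] @ r + p["W_in_a"] @ u + p["b_a"])   # update gate
    beta  = sigmoid(p["W_rec_b"] @ r + p["W_in_b"] @ u + p["b_b"])   # reset gate
    noise = np.sqrt(2.0 * tau * k_rec2) * rng.standard_normal(x.shape)
    dx = (-alpha * x
          + alpha * (p["W_rec"] @ (beta * r) + p["W_in"] @ u + p["b"] + noise)) / tau
    return x + dt * dx
\end{verbatim}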
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figs/fig1.pdf}
\caption{Actor-Critic framework equipped with episodic memory. \textbf{(a)} Anatomy of a model of reinforcement learning.
The model is focused on \textbf{PFC} (robust active maintenance of task-relevant information), \textbf{BG} (dynamic gating of PFC active maintenance), \textbf{DA} (encoding a reward prediction error), and \textbf{Hippocampus} (storing episodic memory). Sensory inputs are processed by the PFC-BG circuit, and corresponding motor signals are sent out by the Thalamus (not shown here). Working memory representations in PFC are updated via dynamic gating by the BG. These gating functions are learned by the BG based on modulatory input from dopaminergic neurons (purple dotted line), \textit{i.e.}, dopamine drives reinforcement learning (slow RL) in BG regions. Moreover, dopamine modulates episodic memories in the hippocampus, supporting adaptive behaviors (fast RL). The synaptic weights in the PFC-BG network are adjusted by an RL procedure, in which DA conveys an RPE signal. \textbf{(b)} The computational model of reinforcement learning. The PFC-BG circuits in the brain are mapped to the Actor-Critic framework (green box). At each time step, the actor receives an observation from the environment (corresponding to the sensory input) and selects an action (corresponding to the motor output) based on past experience (working memory stored in the RNN) and the current sensory input. A reward is given following the chosen action, and the environment moves to the next state. The critic evaluates the action by computing the state-value function. The TD RPE (purple) is then estimated through a temporal-difference algorithm driven by DA, which adjusts the weights of the actor and critic networks. Replay buffers (yellow) are used to store and replay episodic memories, similar to the function of the hippocampus. \textbf{(c)} A more detailed schematic of the actor network implementation used in our model: $\mathrm u$ represents sensory input, $\mathrm a$ represents action, and $t$ is the time step. Input units in the Actor model encode the current observation and connect all-to-all with the GRU units. The GRU layer is composed of a fully connected set of GRU units ($N$ units shown by orange circles), which connect all-to-all with a softmax layer encoding the probability of selecting each action. The critic network shown in \textbf{(d)} has the same GRU layer as the actor network and also receives observations as input from the environment. The output of the critic network is a linear unit encoding the estimated state value ${\mathrm V}$, which is combined with the reward ${\eta}$ to calculate the TD error.}
\label{fig1}
\end{figure*}
\textbf{Actor-Critic model}. Based on the model constructed by \cite{Amir2019Models}, our Actor model is composed of three layers: an input layer, an RNN (GRU) layer, and an output softmax layer. The RNN layer in our model consists of $N=256$ GRU units, and the output layer contains three nodes (since there are $N_a=3$ actions in both the RDM task and the value-based choice task) (\textcolor{blue}{Fig.~\ref{fig1}}c). At each time step $t$, the input to the Actor model is the current observation provided by the environment, and the outputs are the probabilities of choosing each action given by the agent's policy. Here, the policy $\mathrm \pi (a_t|u_t;\theta)$ (parameterized by $\theta$) is implemented as a linear readout followed by softmax normalization, determined by the activity $\mathrm r^{\pi}$ of the GRUs in the actor network:
\begin{align}
\bm {\mathrm z}_t &=\bm {\mathrm W}_{out}^{\pi}\bm {\mathrm r}_t^\pi+\bm {\mathrm b}_{out}^{\pi},\\
\mathrm \pi(a_t=j|u_t;\theta) &=\frac{e^{(z_t)_j}}{\sum_{l=1}^{N_a}e^{(z_t)_l}},\quad (j=1,...,N_a)
\label{equ:4}
\end{align}
where $\bm {\mathrm W}_{out}^{\pi} \in \mathbb{R}^{N_a\times N}$ is the matrix of connection weights from the GRU layer to the softmax layer, $\bm{\mathrm z}_{t}$ is the vector of $N_a$ linear readouts, and $\bm{\mathrm b}_{out}^{\pi}$ is the vector of $N_a$ biases. Action selection is carried out by random sampling from the probability distribution in Equation~(\ref{equ:4}). This sampling can be considered an abstract representation of action selection in the downstream circuitry through the basal ganglia, which is the process of selecting `what to do next' in dynamic and unpredictable environments in real time.
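The readout and sampling step of Equation~(\ref{equ:4}) can be sketched in a
few lines of Python (our own illustration; the variable names are assumptions):
\begin{verbatim}
import numpy as np

def select_action(r_pi, W_out, b_out, rng=np.random):
    # softmax policy readout of Eq. (4), followed by sampling of one action
    z = W_out @ r_pi + b_out            # N_a linear readouts
    z = z - z.max()                     # subtract max for numerical stability
    pi = np.exp(z) / np.exp(z).sum()    # softmax probabilities
    a = rng.choice(len(pi), p=pi)       # sample the action index
    return a, pi
\end{verbatim}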
The Critic model contains an input layer and a GRU layer (\textcolor{blue}{Fig.~\ref{fig1}}d). In particular, the inputs to the Critic model include not only the observation provided by the environment but also the activity of the GRUs in the actor network. The output is the state-value function $\mathrm V$ (parameterized by $\theta_\mathrm v$), which estimates the expected return from the sensory input $\bm {\mathrm u}$ and tells the actor how good its action is. The state value is predicted from the activity of the GRUs in the Critic network through a linear readout.
\begin{align}
\mathrm V(u_t;\theta_\mathrm v) &=\bm {\mathrm W}_{out}^{\mathrm v}\bm {\mathrm r}_t^\mathrm v+ {\mathrm b}_{out}^{\mathrm v},
\label{equ:5}
\end{align}
where $\bm {\mathrm W}_{\mathrm out}^{\mathrm v} \in \mathbb{R}^{1\times N}$ is the matrix of connection weights from the GRU layer to the single linear readout ${\mathrm v_t}$, and ${\mathrm b}_{out}^\mathrm v$ is the bias.
The Actor network and Critic network have the same GRU structure. The GRU layer consists of a set of interconnected GRU units (the memory part of the GRU), denoted by $x_t^i$ in \textcolor{blue}{Fig.~\ref{fig1}}c for the $i$th GRU unit at time $t$. The value of each unit is updated based on the current input and the previous values of all GRU units $(x_{t-1}^i,\ i=1,2,\ldots,N)$.
In this way, the GRU layer can keep track of information about the history of past rewards and actions. In the Actor model, each GRU unit takes its updated value as the current value $(x_{t}^i)$ and then transmits it to the softmax layer through a set of all-to-all connections.
These connections determine the impact of each unit's output on the prediction of the next action. In the Critic model, each GRU unit transmits its output to a single unit (the output layer of the Critic model), from which a scalar value evaluating the action is calculated. As a whole, the overall architecture learns to perform the decision-making task by learning the optimal policy with the Actor model and evaluating the actions with the Critic model.
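Putting the pieces together, a single time step of the model can be sketched
as follows; this reuses \texttt{gru\_step} and \texttt{select\_action} from the
sketches above, concatenates the observation with the actor's firing rates as
the critic input in line with the description in the text, and all parameter
names are our own.
\begin{verbatim}
import numpy as np

def actor_critic_step(x_pi, x_v, u, actor_p, critic_p,
                      W_out_pi, b_out_pi, W_out_v, b_out_v):
    # actor: GRU update -> softmax policy -> sampled action
    x_pi = gru_step(x_pi, u, actor_p)
    r_pi = np.maximum(0.0, x_pi)
    a, pi = select_action(r_pi, W_out_pi, b_out_pi)
    # critic: receives the observation and the actor's activity, outputs V (Eq. 5)
    x_v = gru_step(x_v, np.concatenate([u, r_pi]), critic_p)
    V = float(W_out_v @ np.maximum(0.0, x_v) + b_out_v)
    return a, pi, V, x_pi, x_v
\end{verbatim}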
\subsection{Behavioral tasks}\smallskip
\label{sec:task}
\textbf{RDM direction discrimination task.} In the RDM discrimination task (`reaction-time' version), a monkey chooses between two visual targets; a general description is shown in~\textcolor{blue}{Fig.~\ref{rdm_task}}a. First, the monkey was required to fixate a central point until the random dot motion appeared on the screen. Then, the monkey indicated its decision about the direction of the dots by making a saccadic eye movement to the target of choice. In the standard RL model, an RL agent learns by interacting with its surrounding environment and receiving rewards for performing actions. Accordingly, in the RDM task, the actual direction of the moving dots can be considered a state of the environment. This state is partially observable, since the monkey does not know the precise direction of the coherent motion. Therefore, the monkey needs to integrate the noisy sensory stimuli to figure out the direction. The monkey is given a positive reward, such as fruit juice, for choosing the correct target after the fixation cue turns off, while a negative reward is given, in the form of timeouts, when either the fixation is broken too early or no choice is made during the stimulus period. During the simulation, an incorrect response was given a zero reward. Given the reward schedule, the policy can be modeled and optimized using the policy gradient method.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figs/fig2.pdf}
\caption{\textbf{(a)}. RDM direction discrimination task (`reaction-time' version). Monkeys are trained to discriminate the direction of motion in a random-dot stimulus that contained coherent horizontal motion. After fixation (screen 1), the two choice targets appeared in the periphery (screen 2). After a variable delay period (randomly selected from an exponential distribution with mean $700$ ms), dynamic random dots appeared in a $5^{\circ}$ diameter aperture (screen 3). The monkey was allowed to make a saccadic eye movement to a choice target at any time after onset of the random-dot motion to indicate the direction of perceived motion (screen 4). Reaction time (RT) is defined as the elapsed time from motion onset to the initiation of the saccade, which was controlled by the monkeys and could be measured. \textit{(Bottom)} Examples of the random-dot motion stimulus at variable motion coherence. Stimulus strength is varied by changing the proportion of dots moving coherently in a single direction, which determines the difficulty of the task. The lower (higher) the coherence level, the more difficult (easier) the task is. Coherently moving dots are the `signal', and randomly moving dots are the `noise'. \textbf{(b)}. Behavioral comparison of the animal and the agent. During training for the RDM task, the behavior of the agent is reflected in psychometric functions \textit{(top)} and chronometric functions \textit{(bottom)}. \textit{Left}: animal behavioral data from one experiment (reproduced from \cite{roitman2002response}). \textit{Right}: our agent's behavioral data. \textit{Top}: Psychometric functions from the reaction-time version of the RDM task. The probability of a correct direction judgment is plotted as a function of motion strength and fitted by sigmoid functions. \textit{Bottom}: Effect of motion strength on reaction time (average reaction time of correct trials). The relationship between the log-scaled motion strength and the reaction time fits a linear function.}
\label{rdm_task}
\end{figure*}
\textbf{Value-based economic choice task.} In the economic choice task experiment, reported by \cite{padoa2006neurons}, the monkey chooses between two types of juice (labeled A and B, with A being preferred) offered in different amounts (\textcolor{blue}{Fig.~\ref{fig3}}). Each trial began with a fixation period of $1.5$ s, and then the offer, which indicated the juice type and amount for the left and right choices, was presented for $1$--$2$ s before it disappeared. The network was required to indicate its decision within a decision period of $0.75$ s.
Since one choice leads to a higher reward, there is in this sense a `correct' answer in each trial.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/pado_task.pdf}
\caption{Value-based economic choice task. At the beginning of each trial, the monkey fixated a center point on the monitor. Then two offers appeared on the two sides of the center fixation. The offers were represented by two sets of squares, with the color linked to the juice type and the number of squares indicating the juice amount, which remained on the monitor for a randomly variable delay. The monkey continued fixating the center point until it was extinguished (`go' signal), at which point the monkey indicated its choice by making a saccade towards one of two targets.}
\label{fig3}
\end{figure}
\section{Experiment}\medskip
\label{sec:5}
In this section, we will describe in detail how the Actor-Critic model learns a behavioral policy to maximize the cumulative reward.
The interaction between a monkey and an experimentalist is regarded as the interaction between agent $\mathcal A$ and environment $\mathcal E$. At each time step $t$, the agent observes the inputs $u_t$ from the environment and then selects an action $a_t$ to be performed. The probability of selecting action $a_t$ is given by the policy function $\pi$. After performing the action $a_t$, the environment provides the agent with a scalar reward $\eta_{t}$ (here we use $\eta$ to distinguish it from ${\mathrm r}$, the firing rates of the GRU). In summary, the actor network attempts to learn a policy $\pi$ by receiving feedback from the critic network, and the critic network learns a value function $\mathrm V$ (the expected return in rewards), used to determine how advantageous it is to be in a particular state.
\subsection{Experiment 1: Training our framework to perform the RDM task} \smallskip
For the RDM task, the actual direction of the moving dots can be considered a state of the environment. For the monkey, this state is partially observable. Learning this behavioral task with an RL algorithm therefore amounts to solving a partially observable Markov decision process (POMDP). At each time $t$, an observation is drawn from a set of environment states according to a probability distribution ${\mathrm P}(\mathrm u_t |\mathrm s_t)$. The sensory input, \textit{i.e.}, the observation received by the agent, is denoted as a tuple $\bm{\mathrm u}=(\mathrm {c_F}, \mathrm{c_L}, \mathrm{c_R})$, where $\mathrm{c_F}$ is the fixation cue, $\mathrm {c_L}$ is the percentage of dots moving in the left direction, and $\mathrm {c_R}$ is the percentage of dots moving in the right direction. These percentages represent the noisy evidence for the two choices (left and right). At each time, the agent selects one action from the set $\mathrm {A=\{{F, L, R}\}}$: fixate $(a_t=\mathrm F)$, select left $(a_t=\mathrm L)$, or select right $(a_t=\mathrm R)$. A trial ends as soon as the agent makes a decision (selects left or right): the agent is rewarded with $\eta=8$ for a correct decision and with $\eta=0$ for a wrong decision. Aborting the trial, \textit{i.e.}, breaking fixation before the `go' cue, results in a negative reward $\eta=-2$.
If the agent has not made a choice by the maximum time $t_{max}$, the reward is $\eta=0$. Here we use $e^{-t/\tau_{\eta}}$ to discount future rewards \citep{doya2000reinforcement}, where $\tau_{\eta}$ is a time constant; discounted rewards are still denoted by $\eta$. Given the reward function $\eta=\eta(\mathrm u_t,\mathrm a_t)$, learning is implemented with the single-threaded Advantage Actor-Critic (A2C) algorithm described by \cite{mnih2016asynchronous}.
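The reward schedule just described can be summarised by a small Python sketch;
the function name and arguments are hypothetical, and only the numerical values
quoted above are taken from the text.
\begin{verbatim}
import numpy as np

def rdm_reward(action, correct_action, t, go_cue_on,
               t_max=275, tau_eta=200.0):
    # returns the (discounted) reward and whether the trial ends
    if action != "F" and not go_cue_on:
        eta, done = -2.0, True                 # broke fixation before the go cue
    elif action == "F":
        eta, done = 0.0, (t >= t_max)          # keep fixating, or time out
    else:
        eta = 8.0 if action == correct_action else 0.0
        done = True                            # a choice ends the trial
    return eta * np.exp(-t / tau_eta), done
\end{verbatim}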
The goal of the agent is to learn a policy that maximizes the expected future reward accumulated from $t=0$ until the terminal time $T$ ($\leq t_{max}$).
\begin{align}
J(\theta) &=\mathbb{E}[\sum_{t=0}^{T-1} \eta_{t+1}],
\label{equ:6}
\end{align}
For the policy network, \textit{i.e.}, the actor network, the loss function $\mathcal{L}^\pi (\theta)$ is defined as follows.
\begin{align}
\mathcal{L}^\pi (\theta) &=-J(\theta)+\beta_e H^{\pi}(\theta),
\label{equ:7}
\end{align}
We introduce the entropy $H^{\pi}(\theta)$ into the policy loss, which encourages exploration by preventing the agent from being too decisive and converging to local optima; $\beta_e$ is a hyperparameter controlling the relative contribution of the entropy regularization term. The key gradient $\nabla_\theta J(\theta)$ is given for each trial by the A2C algorithm.
\begin{align}
\nabla_\theta J(\theta) &=\sum_{t=0}^{T}\nabla_\theta \log \pi(\mathrm a_t|\mathrm u_t;\theta)A(u_t, r^{\pi}_t),\label{equ:8}\\
A(\mathrm u_t, \mathrm r^{\pi}_t) &=\eta_t+\gamma \mathrm V(\mathrm u_{t+1},\mathrm r^{\pi}_{t+1};\theta_ {\mathrm v})-\mathrm V(\mathrm u_{t},\mathrm r^{\pi}_t;\theta_ {\mathrm v}),
\label{equ:9}
\end{align}
\noindent
where the parameters $\theta$ and $\theta_\mathrm v$ consist of the connection weights and biases of the actor network and critic network, respectively, \textit{i.e.}, $\theta=\{\bm {\mathrm W}_{in}^\pi,\bm {\mathrm W}_{rec}^\pi,\bm {\mathrm W}_{out}^\pi,$ $\bm {\mathrm b}_{in}^\pi,\bm {\mathrm b}_{rec}^\pi,$ $\bm {\mathrm b}_{out}^\pi\}$, $\theta_\mathrm v=\{\bm {\mathrm W}_{in}^{\mathrm v}, \bm {\mathrm W}_{rec}^{\mathrm v},\bm {\mathrm W}_{out}^{\mathrm v},\bm {\mathrm b}_{in}^{\mathrm v},\bm {\mathrm b}_{rec}^{\mathrm v},\bm {\mathrm b}_{out}^{\mathrm v}\}$. The actor learns a policy $\pi$ (the rule that the agent follows) by receiving feedback from the critic. The critic learns a state value function $\mathrm V(\mathrm u_t,\mathrm r^{\pi}_t;\theta_ {\mathrm v})$ (the expected return in rewards), which is used to determine how advantageous it is to be in a particular state by estimating the advantage function $A(\mathrm u_t,\mathrm r^{\pi}_t)$, \textit{i.e.}, the TD error. The parameter $\gamma$ is the discount factor.
For the value network, the loss function $\mathcal{L}^\mathrm v (\theta)$ is the mean squared error
\begin{align}
\mathcal{L}^\mathrm v (\theta) &= \sum_{t=0}^{T}[\eta_t+\gamma \mathrm V(\mathrm u_{t+1},\mathrm r^{\pi}_{t+1})-\mathrm V(\mathrm u_{t},\mathrm r^{\pi}_t)]^2,
\label{equ:10}
\end{align}
Combining the two loss functions gives the overall loss function for the model
\begin{align}
\mathcal{L} (\theta) &= \mathcal{L}^\pi (\theta)+\beta_\mathrm v \mathcal{L}^\mathrm v (\theta),
\label{equ:11}
\end{align}
Here, the hyperparameter $\beta_\mathrm v$ controls the relative contribution of the value estimate loss.
After every trial, the parameters of the policy and value networks are updated with the Adam stochastic gradient descent (SGD) optimizer to minimize the objective function $\mathcal{L} (\theta)$.
\begin{equation}
\begin{split}
&\nabla_\theta \mathcal{L}(\theta) = \nabla_\theta \mathcal{L}^\pi(\theta)+\beta_\mathrm v \nabla_\theta \mathcal{L}^\mathrm v(\theta)\\
&=-\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(\mathrm a_t|\mathrm u_t)A(\mathrm u_t, \mathrm r^\pi_t)+\beta_\mathrm e \nabla_\theta H^\pi (\theta)+\beta_\mathrm v \nabla_\theta \mathcal{L}^\mathrm v (\theta),
\label{equ:12}
\end{split}
\end{equation}
The gradients $\nabla_\theta \log \pi_\theta(\mathrm a_t|\mathrm u_t)$, $\nabla_\theta H^\pi (\theta)$, and $\nabla_\theta \mathcal{L}^\mathrm v (\theta)$ are computed using backpropagation through time (BPTT). Through this training, the actor network learns to compress past experience into its hidden state, in the form of working memory (WM). This working memory is thought to be supported by the PFC and can be used to instruct the actor system to select rewarding actions. Meanwhile, the critic system learns a value function to train the actor network, which in turn furnishes a dynamic gating mechanism to control updating of the working memory.
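For concreteness, the per-trial losses of Equations~(\ref{equ:7})--(\ref{equ:11})
can be assembled as in the following sketch. The array layout is our own
assumption, the gradients are left to an automatic-differentiation framework in
practice, and the sign of the entropy term follows Equation~(\ref{equ:7}) as
written (many A2C implementations subtract the entropy instead).
\begin{verbatim}
import numpy as np

def a2c_losses(log_pi_taken, entropies, values, rewards,
               gamma=0.99, beta_v=0.5, beta_e=0.5):
    # log_pi_taken[t] = log pi(a_t|u_t); values has length T+1 with the value
    # at the terminal time appended (assumed 0 for a finished trial)
    T = len(rewards)
    adv = np.array([rewards[t] + gamma * values[t + 1] - values[t]
                    for t in range(T)])                    # TD errors, Eq. (9)
    policy_loss = -(log_pi_taken * adv).sum() + beta_e * entropies.sum()  # Eq. (7)
    value_loss = (adv ** 2).sum()                           # Eq. (10)
    return policy_loss + beta_v * value_loss                # Eq. (11)
\end{verbatim}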
\subsection{Experiment 2: Training our framework to perform the value-based economic choice task}\smallskip
We also trained the Actor-Critic model to perform the value-based economic choice task, described in Section~\ref{sec:task}, with a training procedure similar to the one described above for the RDM task. In this task, there is no objectively correct or wrong choice for the monkey. However, there is a choice that allows the monkey to receive the highest reward, and this choice can thus be considered the `correct' choice. Unlike in the RDM task, the information regarding whether an answer is correct is not included in the inputs, but rather in the correlation between the inputs and the rewards.
\subsection{Test behavioral characteristics of our framework}\smallskip
\label{sec:test}
Next, we investigated whether the Actor-Critic framework captures the behavioral characteristics of animals in the cognitive experiments. In the previous section, we have trained the Actor-Critic framework to perform the RDM and value-based economic choice tasks. Here, we compare the behavioral characteristics exhibited by the trained model with those observed in the animal experiments.\smallskip
\textbf{RDM task}. The results are consistent with the behavioral findings from the animal experiments, which are mainly reflected in the psychometric and chronometric functions, as shown in \textcolor{blue}{Fig.~\ref{rdm_task}}b.
The performance accuracy in the RDM task depends on the strength of the sensory input, and the psychometric function is a good tool to analyze such a relationship. The percentage of correct direction judgments is plotted as a function of the motion strength (measured by the proportion of coherently moving dots). \textcolor{blue}{Fig.~\ref{rdm_task}}b \textit{(top)} shows high accuracy for strong motion, with accuracy falling towards chance level as the motion becomes weaker, which suggests that the agent in our Actor-Critic framework captures this important behavioral feature. Moreover, the theory of chronometric functions puts a constraint on the relationship between response time and accuracy. A difficult task (weaker stimulus strength) requires the agent to take more time to make a decision (\textcolor{blue}{Fig.~\ref{rdm_task}}b \textit{(bottom)}), which means that the additional viewing time on difficult trials was devoted to integrating the sensory information. As a result, an appropriate trade-off between speed and accuracy is learned by this Actor-Critic framework. It is worth emphasizing that, unlike the usual machine learning goals, our objective is not to achieve `perfect' performance, but rather to train the agents to match the smooth psychometric and chronometric characteristics observed in the behavior of the monkeys.\smallskip
\textbf{Value-based economic choice task}. The activity of the units in the critic network exhibits similar types of response to those observed in the orbitofrontal cortex of the monkeys \citep{padoa2006neurons}. First, roughly $20\%$, $60\%$, and $20\%$ of the active units are selective to the chosen value, the offer value, and the choice alone, respectively, as defined in the animal experiment. Second, there is a trade-off between the juice type and its quantity (upper panel of \textcolor{blue}{Fig.~\ref{fig4}}). Third, the patterns of neural activity are consistent with the behavioral findings from the animal experiment, with three main response patterns: (i) a similar U-shaped response pattern (\textcolor{blue}{Fig.~\ref{fig4}}a-c, {deep blue circles}); (ii) a response pattern associated with the `offer value' variable (\textcolor{blue}{Fig.~\ref{fig4}}d-e, purple circles); and (iii) a response pattern related to the juice `taste' variable. For this task, the network architecture was not changed; we only changed the initial values of the critic network's input weights.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figs/fig4.pdf}
\caption{The units in our model exhibit diverse selectivity for the task variables, as observed in the orbitofrontal cortex. The top panel shows the percentage of trials in which the agent chose `juice' B ($y$ axis) for various offer types ($x$ axis). The relative value is indicated on the top left. For example, in \textbf{a}, the relative value is $3.2$, which indicates that the reward contingencies are indifferent between $1$ `juice' of A and $3.2$ `juice' of B. Different relative values indicate different choice patterns. The bottom panel of the figure shows the mean activity ($y$ axis, averaged over the $800$ ms before the decision) of example value network units for various offer types ($x$ axis) under different choice patterns: $1$A = $3.2$B (\textbf{a}, deep blue), $1$A = $2.5$B (\textbf{b}, deep blue), $1$A = $3.3$B (\textbf{c}, deep blue), $1$A = $2.5$B (\textbf{d}, purple), $1$A = $2.2$B (\textbf{e}, purple), and $1$A = $4.1$B (\textbf{f}, green and blue). For each case, the grey circles show the mean activity of value network units during the fixation period. \textbf{a-c}, The units in the value network exhibit selectivity for the `chosen value'. \textbf{d-e}, The units in the value network exhibit selectivity for the `offer value'. \textbf{f}, The trials are separated into choice A (green diamonds) and choice B (blue circles).
}
\label{fig4}
\end{figure*}
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{Parameter for Actor-Critic model training.}\label{tbl1}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
Parameter & Value & Parameter & Value\\
\midrule
Learning rate & 0.004 & $t_{max}$ & 275 \\
$\tau$ & 50ms & $k_{rec}^2$ & 0.01 \\
$\tau_{\eta}$ & 200ms & $\beta_\mathrm v$ & 0.5 \\
$\gamma$ & 0.99 & $\beta_\mathrm e$ & 0.5 \\
\bottomrule
\label{table1}
\end{tabular*}
\end{table}
\section{Analysis}\medskip
In Section~\ref{sec:test} we showed that the trained framework reproduces the behavioral characteristics observed in the animal experiments, which suggests that it can serve as a computational platform to study the impact of memory on cognitive function. A number of experimental studies have shown that memory is essential for making decisions, enabling organisms to predict possible future outcomes by drawing on past events. For instance, working memory, a temporary storage in the brain \citep{repovvs2006multi}, has been shown to guide choices by maintaining and manipulating task-relevant information. Episodic memory has also been shown to be involved in the decision-making process. Moreover, a recent study suggests that the hippocampus supports deliberation about value during the value-based economic choice task; thus, the hippocampus contributes to the construction of internal samples of evidence that are related to decision-making \citep{bakkour2019hippocampus}. Based on this idea, in this section we combine our computational platform with the value-based economic choice task to explore the role of episodic memory in the process of decision-making.
\subsection{Episodic memory contributes to decision-making}\smallskip
First, we need to verify whether the Actor-Critic model equipped with episodic memory performs effectively. Psychologically, episodic memory refers to the capacity to consciously recollect an autobiographical memory of events that occurred at particular times and places. For example, a person can recall an episode from the past, such as his $20^{\rm th}$ birthday party, and remember who was there and where it happened. Computationally, we mainly emphasize the notion of one-time episodes (like one-trial learning in a task). A previous study suggested that episodic memory could be used to store a specific rewarding sequence of state-action pairs and later to mimic such a sequence, a process called episodic control \citep{lengyel2008hippocampal}. In this work, we propose a slightly different computational principle, in which episodic memory is used to optimize the policy rather than to extract it directly.
In our computational model, an episodic memory is generated as follows: on each trial $i$ of the value-based economic choice task, the agent's experiences $e_t=(\mathrm u_t,\mathrm a_t,\eta_t,\mathrm s_{t+1})$ at each time step $t$ are stored as an episodic memory $E_i=(\mathrm u_0,\mathrm a_0,\eta_0,\mathrm s_1,\ldots,\mathrm u_t,\mathrm a_t,\eta_t,\mathrm s_{t+1},\ldots,\mathrm u_{T_{i}-1},\mathrm a_{T_{i}-1},\eta_{T_{i}-1},\mathrm s_{T_{i}})$, where $T_i$ is the length of the $i$th trial. According to the reward received at the end of the $i$th trial, we can divide the memories into three types: trials with positive reward (denoted $E_i^{posi}$), trials with negative reward (denoted $E_i^{nega}$), and trials with zero reward (denoted $E_i^{zero}$). The agent then stores these episodic memories in a single replay buffer $D=\{\{E_1^{posi},\ldots,E_{N_1}^{posi}\},\{E_1^{nega},\ldots,E_{N_2}^{nega}\},\{E_1^{zero},\ldots,E_{N_3}^{zero}\}\}$, a pool of memories whose function is similar to that of the hippocampus in the brain.
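A minimal sketch of this outcome-partitioned replay buffer is given below; the
class and method names are our own, purely for illustration.
\begin{verbatim}
import random
from collections import defaultdict

class EpisodicBuffer:
    # replay buffer D partitioned by trial outcome (positive/negative/zero reward)
    def __init__(self):
        self.D = defaultdict(list)

    def store(self, episode, final_reward):
        key = "posi" if final_reward > 0 else ("nega" if final_reward < 0 else "zero")
        self.D[key].append(episode)

    def sample(self, kind=None):
        # kind="posi" samples salient events, "zero" common events,
        # None samples uniformly from all stored episodes
        pool = self.D[kind] if kind else [e for v in self.D.values() for e in v]
        return random.choice(pool) if pool else None
\end{verbatim}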
How does past experience stored in the replay buffer optimize the behavior policy? At the computational level, a method called importance sampling can be used to estimate the expected return $J(\theta)$ by sampling episodic memories from the replay buffer $D$. The behavior policy used to collect the samples is a known policy (predefined, just like a hyperparameter), labeled $\mu(\mathrm a|\mathrm u)$. Suppose we retrieve a single experience $(\mathrm u_0,\mathrm a_0,\eta_0,\mu(\cdot|\mathrm u_0),\ldots,\mathrm u_t,\mathrm a_t,\eta_t,\mu(\cdot|\mathrm u_t),\ldots,\mathrm u_{T_{i}-1},\mathrm a_{T_{i}-1},\eta_{T_{i}-1},\mu(\cdot|\mathrm u_{T_{i}-1}))$, where the actions have been sampled from episodic memory according to the behavior policy $\mu(\mathrm a|\mathrm u)$. Given these training observations, the policy gradient can be rewritten as:
\begin{align}
\nabla_\theta J(\theta) &=\sum_{t=0}^{T}\frac{\pi ( \mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}\nabla_\theta \log \pi(\mathrm a_t|\mathrm u_t;\theta)A(\mathrm u_t, \mathrm r^\pi_t),
\label{equ:13}
\end{align}
\noindent where $\frac{\pi(\mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}$ is the importance weight, and $\mu$ is non-zero wherever $\pi(\mathrm a_t|\mathrm u_t;\theta)$ is.
We note that when $\frac{\pi (\mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}=1$, Equation~(\ref{equ:13}) reduces to Equation~(\ref{equ:8}). To use episodic memory to optimize the policy, we define the learning process as follows: in trial $n=1$, the policy network is updated with Equation~(\ref{equ:12}), in which the gradient term $\nabla_\theta J(\theta)$ is given by Equation~(\ref{equ:8}). The agent then stores the full trajectory (an episodic memory) of this trial in the replay buffer. From trial $n=2$ onwards, the agent randomly samples a trajectory as past experience to optimize the policy, and the gradient term $\nabla_\theta J(\theta)$ is given by Equation~(\ref{equ:13}). These steps are repeated until training terminates, at which point the agent has learned a policy for performing the value-based economic choice task.
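The replay-based update of Equation~(\ref{equ:13}) amounts to weighting each
step of a stored trajectory by the ratio of the current policy to the recorded
behavior policy, as in the following sketch; the names, including the fields of
the sampled trajectory, are hypothetical, and the gradient of $\log\pi$ itself
is handled by backpropagation in practice.
\begin{verbatim}
import numpy as np

def importance_weighted_terms(pi_probs, mu_probs, advantages):
    # per-step coefficients of Eq. (13): (pi/mu) * A(u_t, r_t)
    w = np.asarray(pi_probs) / np.asarray(mu_probs)   # importance weights
    return w * np.asarray(advantages)

# usage sketch, reusing the EpisodicBuffer above (field names are assumptions)
# traj = buffer.sample(kind="posi")   # e.g. replay a salient event
# coeffs = importance_weighted_terms(traj["pi"], traj["mu"], traj["adv"])
\end{verbatim}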
\textcolor{blue}{Fig.~\ref{fig5}} \textit{(left)} shows the learning curves of agents with and without episodic memory (orange line and blue line, respectively) for the value-based economic choice task (the average return over $2000$ trial samples). It can be seen that the agent with episodic memory learns this task significantly faster than the one without episodic memory, although both policies eventually reach the same performance. These results are consistent with recent studies showing that animal decisions can indeed be guided by samples of individual past experience~\citep{murty2016episodic}.
The percentage of correct trials is shown in \textcolor{blue}{Fig.~\ref{fig5}} \textit{(right)}; it is calculated as $N_{right}/N_{choice}$, where $N_{choice}$ is the number of trials in which the monkey made a choice (correct or incorrect) out of $20000$ trials, and $N_{right}$ is the number of correct choices. It can be observed that, at the beginning of training, the percentage of correct choices of the agent that cannot extract episodic memories from the replay buffer stays at around $50\%$ (blue line), and only after substantial training (about $30000$ trials) does this agent reach the baseline accuracy. This suggests that the agent equipped with episodic memory learns more efficiently.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/fig5.pdf}
\caption{Learning curves of the agent with episodic memory (orange line) and without episodic memory (blue line) on the economic choice task. \textit{(Left)} Average reward per trial. \textit{(Right)} Percent correct, for trials on which the network made a decision.
}
\label{fig5}
\end{figure}
\subsection{Episodic memory for salient event}\smallskip
\label{investigate}
In the previous section, we have verified that episodic memory indeed allows the agent to learn a task faster. Nevertheless, the question of which types of episodic memory samples should be selected to govern the decisions remains unanswered in the field of cognitive neuroscience. In this section, we will examine this question.
The relationship between events is often clear only in retrospect. For example, when something positive happens, we want to know how to make it happen again. However, when an event occurs before a reward is given, how do we know what caused the reward? This is the well-known `temporal credit assignment problem', which can be addressed by saving into working memory all the potential determinants of behaviorally relevant events, such as rewards. We ask how episodic memory balances the need to represent these potential determinants of reward outcomes in order to deal with credit assignment. One solution may be to enhance episodic memory for notable events, referred to as `salient memory', which are potential reward determinants. In fact, both violations and conformance of expectancy can be considered salient events to be stored in the memory buffer. Since such long-term memories are potentially predictive of reward outcomes, they provide a computationally feasible way to obtain future rewards.
In the value-based economic choice task, salient events include trials in which the correct choice was made (rewarded; expectancy conformance) or the fixation was broken (punished; expectancy violation). For a fixation-breaking trial, the agent's policy cannot be optimized because of the insufficient interaction with the environment; as a result, we only take expectancy conformance as the salient event. In the third type of trial, the monkeys made a response before the trial was over, but their choice was wrong. The incorrect response was neither rewarded with juice nor punished. Such a trial can be considered a common event, because it is not a distinctive event for the monkeys. Accordingly, the episodic memories in the replay buffer $D$ are of three types: the set $D_{posi}=\{E_1^{posi},\ldots,E_{N_c}^{posi}\}$ of salient events, the set $D_{zero}=\{E_1^{zero},\ldots,E_{N_z}^{zero}\}$ of common events, and the remaining events, denoted as the set $D_{nega}=\{E_1^{nega},\ldots,E_{N_e}^{nega}\}$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/fig6.pdf}
\caption{Learning curves of an agent on the RDM task for different types of episodic memory: salient memory (green line), common episodic memory (blue line), and all types of episodic memory (orange line). \textit{(Left)} Average reward per trial. \textit{(Right)} Percent correct.
}
\label{fig6}
\end{figure}
In order to investigate whether salient events sampled from the memory buffer bias reward-guided choice more effectively than common events, we plot the learning curves of the agent for the different types of episodic memories. Comparing the green (salient events) and blue (common events) curves in \textcolor{blue}{Fig.~\ref{fig6}}, we can see that the agent using salient events achieves better performance than the agent using common events.
As shown in \textcolor{blue}{Fig.~\ref{fig6}} (\textit{left}), when the agent draws an event uniformly at random from the set $D_{posi}$ to optimize the policy, the return received by the agent reaches the baseline level more quickly (green line). However, when the agent extracts common events from the set $D_{zero}$ (blue line), it must go through a long period of learning to obtain higher returns. In this case, the percentage of correct choices is also maintained at around $50\%$ at the beginning of the experiment (\textcolor{blue}{Fig.~\ref{fig6}} (\textit{right})), which indicates that the monkey chooses the direction at random. As training progresses, the monkey makes more and more correct choices. It can be noted that this learning curve is similar to that of an agent that does not use memory to optimize its strategy (blue line in \textcolor{blue}{Fig.~\ref{fig5}}).
This suggests that episodic memory about common events did not help the monkeys to make choices. Moreover, when an experience is sampled from the set $D$, the reward value and final accuracy obtained by the agent are higher than those in the case where experience is sampled from the set $D_{zero}$, but lower than the case where experience is sampled from the set $D_{posi}$. Although the learning time significantly varies, the agent ends up with the same return value and accuracy in all the cases. Our results suggest that memory encoding may be stronger for trials that involved salient events. That is, the salient episodic memory in the hippocampus is more likely to be sampled during the ensuing choice.
\section{Discussion}\medskip
The goal of the present work was twofold. First, we trained an Actor-Critic RL model to solve tasks that are analogous to the monkeys' tasks. The model reproduces the main features of the behavioral data, so that further behavioral experiments can be conducted within this framework. Specifically, we used RNNs to construct an Actor-Critic RL framework based on RL theories of the PFC-BG circuit. The model was evaluated on two classical decision-making tasks --- a simple perceptual decision-making task and a value-based economic choice task --- and successfully reproduced the behavioral features reported by \cite{shadlen2001neural} and the neural activity recorded from the animal brain reported by \cite{padoa2006neurons}. We presented a computational platform in which the corresponding circuit mechanisms can be studied by systematically analyzing a model network. In addition, diverse cognitive functions can be explored by conducting the corresponding behavioral experiments. Second, based on our modeling work, we investigated which experiences in the hippocampus are ultimately considered or ignored during deliberation to govern future choices.
Since 1995, numerous actor-critic models for reinforcement learning have been proposed in the field of neuroscience, particularly for the rat's basal ganglia \citep{davis1995models,joel2002actor}. Some evidence shows that neurons in the PFC \citep{fujii2005time} and striatum \citep{barnes2005activity} code action sequences, suggesting that the BG-PFC circuit may participate in abstract action representations. Therefore, at the biological level of analysis, our model supports the actor-critic picture of reward-based learning in the PFC-BG circuit: one circuit learns an action selection policy and implements it, while the second structure computes the expected return and offers immediate feedback that indicates whether the current action is good or bad. Moreover, \cite{frank2006anatomy} have demonstrated that the BG can implement an adaptive gating mechanism, which allows task-relevant information to be maintained in working memory (a temporary storage in the brain, facilitated by the prefrontal cortex). Our model also supports this division of labor between PFC and BG as follows: the actor network learns task-relevant information and saves it into its hidden state in the form of working memory, while the critic system learns a value function to train the actor network, which in turn furnishes a dynamic gating mechanism to control updating of the working memory.
Moreover, recent experimental work in humans has shown that during memory-based decision-making tasks, medial frontal cortical neurons phase-lock their activity to theta-band oscillations in the hippocampus, which suggests an oscillation-mediated coordination of activity between distant brain regions \citep{Minxha2020Flexible}. This functional interaction between the frontal cortex and hippocampus supports our computational framework: the Actor-Critic model uses working memory stored in the hidden state of the GRU to make a choice, and the selected action affects the storage of memories in the hippocampus, which in turn is used to optimize the policy and control working memory updates. Since we have used GRUs to model the decision and value networks, both the dynamic gating mechanism and the ability to store states as working memory give our model strong computational learning performance. However, early work demonstrated that the capacity of working memory is limited, which means that decisions are often made with finite information. Due to the transient character caused by the capacity limitation and the fast decay rate of working memory, it is not an ideal memory system to independently support decision-making. Moreover, accumulating evidence indicates that dopamine can facilitate episodic memory encoding in the hippocampus to support adaptive behavior \citep{bethus2010dopamine}, which suggests that episodic sampling may be a powerful decision-making mechanism. Therefore, we investigated the link between episodic memory and reward-based choice by conducting the value-based economic choice task in our framework. The results suggest that retrieval of salient episodic memories can promote deliberation in the decision-making process, which is essential for future goal-directed behavior.
Our model has some limitations, which may be opportunities for future work. For instance, during the retrieval of samples from episodic memory, we have defined the priority of salient events only in an abstract way, and we have not provided a mechanism explaining how the mammalian brain would compute it. Therefore, a process-level model is needed to implement this term. Moreover, in the cerebral cortex of mammals, a neuron releases only a single type of transmitter, a property known as `Dale's Principle', and therefore generates the same effect (excitatory or inhibitory) at all of its synaptic connections to other cells. In our framework, due to the complex nature of the GRU, we omitted this biological constraint and instead used firing-rate units as a mixture of excitatory and inhibitory neurons. In future work, these constraints should be reintroduced, and other physiologically relevant phenomena, such as bursting, adaptation and oscillations, may also be incorporated to build a more biologically plausible model.
\noindent
\textbf{Acknowledgement} This work was supported by the National Natural Science Foundation of China (Grant No.11572127 and No.11172103).
\printcredits
\bibliographystyle{cas-model2-names}
\section{Introduction}\label{Sec1}
\thispagestyle{empty}
Person re-identification (RE-ID) is a challenging problem focusing on pedestrian
matching and ranking across non-overlapping camera views. Although it has received
considerable exploration recently owing to its potential significance in security applications,
especially video surveillance, it remains an open problem,
principally because of the dramatic intra-class variation
and the high inter-class similarity.
Existing attempts mainly focus on learning to extract robust and discriminative
representations
\cite{2014_ECCV_SCNCD,2014_IVC_KBICOV, 2015_CVPR_LOMO},
and learning matching functions or metrics
\cite{2011_CVPR_PRDC,2012_CVPR_KISSME,2013_CVPR_LADF,2014_ICDSC_KCCA,2015_CVPR_LOMO,2015_ICCV_MLAPG, 2015_ICCV_CSL}
in a supervised manner. Recently, deep learning has been adopted by the RE-ID community
\cite{2015_CVPR_Ahmed,2016_CVPR_JSTL,2016_CVPR_Wang, 2016_ECCV_Gated}
and has gained promising results.
However, supervised strategies are intrinsically limited due to the requirement
of manually labeled cross-view training data, which is very expensive \cite{2015_TCSVT_xiaojuan}.
In the context of RE-ID,
the limitation is even more pronounced because \emph{(1)} manual labeling may not be reliable
when a huge number of images have to be checked across multiple camera views, and more importantly \emph{(2)} the astronomical
cost in time and money makes it prohibitive to label the overwhelming amount of data across disjoint camera views.
Therefore, in reality supervised methods would be restricted
when applied to a new scenario with a huge amount of unlabeled data.
\begin{figure}
\includegraphics[width=1\linewidth]{temp2.pdf}
\caption{Illustration of view-specific interference/bias and our idea.
Images from different cameras suffer from
view-specific interference, such as occlusions in Camera-1,
dull illumination in Camera-2, and the change of viewpoints between them.
These factors introduce bias in the original feature space, and therefore
unsupervised re-identification is extremely challenging. Our model
structures data by clustering and learns view-specific projections
jointly, and thus finds a shared space where view-specific bias is
alleviated and better performance can be achieved. (Best viewed in color)
}\label{FigTitle}
\end{figure}
\ws{To directly make full use of the cheap and valuable unlabeled data,
some existing efforts on exploring unsupervised strategies
\cite{2010_CVPR_SDALF,2013_CVPR_SALIENCE,2014_BMVC_GTS, 2015_BMVC_DIC,2015_PAMI_ISR,2016_CVPR_tDIC, 2016_ICIP_Wang, 2016_ECCV_Kodirov} have been reported,}
but they are still not very satisfactory.
One of the main reasons is that without the help of labeled data,
it is rather difficult to model the dramatic variations
across camera views, such as changes in illumination and occlusion conditions.
Such variances lead to view-specific interference/bias which can be very disturbing in finding
what is more distinguishable in matching people across views (see Figure \ref{FigTitle}).
In particular, existing unsupervised models treat the samples from different views in the same manner,
and thus the effects of view-specific bias could be overlooked.
In order to better address the problems \ws{caused by camera view changes} in unsupervised RE-ID scenarios, we propose a novel
unsupervised RE-ID model named \emph{Clustering-based Asymmetric\footnote{\final{``Asymmetric'' means specific transformations for each camera view.}} MEtric Learning (CAMEL)}.
The ideas behind it are based on the two \ws{following} considerations. \ws{First, although}
conditions can vary among camera views, we assume that there should be some shared space
where the data representations are less affected by view-specific bias.
By projecting original data into the shared space, the distance between any pair of
samples $\mathbf{x}_i$ and $\mathbf{x}_j$ is computed as:
\begin{equation}\label{EqSym}
\small
d(\mathbf{x}_i,\mathbf{x}_j) = \lVert \bm{U}^{\mathrm{T}}\mathbf{x}_i - \bm{U}^{\mathrm{T}}\mathbf{x}_j \rVert_2
= \sqrt{(\mathbf{x}_i-\mathbf{x}_j)^{\mathrm{T}}\bm{M}(\mathbf{x}_i-\mathbf{x}_j)},
\end{equation}
where $\bm{U}$ is the transformation matrix and $\bm{M} = \bm{U}\bm{U}^{\mathrm{T}}$.
\Koven{However, it can be hard for a universal transformation to implicitly model the view-specific feature distortion from different camera views,
especially when we lack label information to guide it.
This motivates us to \emph{explicitly} model the view-specific bias.
Inspired by the supervised asymmetric distance model \cite{2015_TCSVT_ASM},
we propose to embed the asymmetric metric learning to our unsupervised RE-ID modelling,
and thus modify the symmetric form in Eq. (\ref{EqSym}) to an asymmetric one:}
\begin{equation}\label{EqAsym}
\small
d(\mathbf{x}_i^p,\mathbf{x}_j^q) = \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \bm{U}^{q\mathrm{T}}\mathbf{x}_j^q \rVert_2,
\end{equation}
where $p$ and $q$ are indices of camera views.
An asymmetric metric is more suitable for unsupervised RE-ID scenarios, as
it explicitly models the variations among views by treating each view differently.
By such an explicit means, we are able to better alleviate the disturbances of view-specific
bias.
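As a concrete illustration of Eqs. (\ref{EqSym}) and (\ref{EqAsym}), the short NumPy sketch below (ours, with toy dimensions and random matrices) contrasts the shared-transformation distance with the view-specific one.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T = 64, 64                      # original / projected dimensions (illustrative)
x_p = rng.normal(size=M)           # a sample from camera view p
x_q = rng.normal(size=M)           # a sample from camera view q

# symmetric metric: one shared transformation U for all views
U = rng.normal(size=(M, T))
d_sym = np.linalg.norm(U.T @ x_p - U.T @ x_q)

# asymmetric metric: a specific transformation per camera view
U_p = rng.normal(size=(M, T))
U_q = rng.normal(size=(M, T))
d_asym = np.linalg.norm(U_p.T @ x_p - U_q.T @ x_q)
\end{verbatim}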
The other consideration is that, since it is unclear how to separate similar persons
in the absence of labeled data, it is reasonable to pay more attention to
better separating dissimilar ones.
Such consideration \ws{motivates} us to structure our data by clustering.
Therefore, we develop \emph{asymmetric metric clustering} that clusters cross-view person images.
By clustering together with asymmetric modelling, the data can be better characterized in the shared space,
contributing to better matching performance (see Figure \ref{FigTitle}).
In summary, the proposed CAMEL aims to learn a view-specific projection for each camera view
by jointly learning the asymmetric metric and
seeking \ws{optimal} cluster separations.
In this way, the data from different views is projected into
a shared space where view-specific bias is alleviated to an extent, and thus better performance
of cross-view matching can be achieved.
\ws{So far in the literature, unsupervised RE-ID models have only been evaluated on small datasets which contain only hundreds or
a few thousand images. However, in more realistic scenarios we need evaluations
of unsupervised methods on much larger datasets, say, consisting of hundreds of thousands of samples,
to validate their scalability. In our experiments, we have conducted extensive comparisons on datasets
whose scales range widely.
In particular, we combined two existing RE-ID datasets \cite{2015_ICCV_MARKET,MARS}
to obtain a larger one which contains over 230,000 samples.
Experiments on this dataset (see Sec. \ref{SecFurtherEval}) show empirically that our model scales better to problems of larger scales,
which is more realistic and more meaningful for unsupervised RE-ID, while some existing unsupervised RE-ID models are not scalable due to their expensive cost in either storage or computation.}
\section{Related Work}\label{Sec2}
At present, most existing RE-ID models work in a supervised manner. They are mainly
based on learning distance metrics or subspace
\cite{2011_CVPR_PRDC,2012_CVPR_KISSME,2013_CVPR_LADF,2014_ICDSC_KCCA,2015_CVPR_LOMO,2015_ICCV_MLAPG, 2015_ICCV_CSL},
learning view-invariant and discriminative features
\cite{2014_ECCV_SCNCD,2014_IVC_KBICOV, 2015_CVPR_LOMO},
and deep learning frameworks
\cite{2015_CVPR_Ahmed,2016_CVPR_JSTL,2016_CVPR_Wang, 2016_ECCV_Gated}.
However, all these models rely on substantial labeled training data, which is typically required
to be pair-wise for each pair of camera views. Their performance depends highly on
the quality and quantity of labeled training data.
In contrast, our model does not require any labeled data and thus is free from
the prohibitively high cost of manual labeling and the risk of incorrect labeling.
\ws{To directly utilize unlabeled data for RE-ID, several unsupervised RE-ID models \cite{2013_CVPR_SALIENCE,2014_BMVC_GTS,2015_PAMI_ISR,2015_BMVC_DIC,2016_CVPR_tDIC}
have been proposed}.
All these models differ from ours in two aspects.
On the one hand, these models do not explicitly exploit the information on
view-specific bias, i.e., they treat feature transformation/quantization in every distinct camera view in the same manner
when modelling. In contrast, our model tries to learn specific transformation
for each camera view, aiming to find a shared space where view-specific interference
can be alleviated and thus better performance can be achieved.
On the other hand, as for the means to learn a metric or a transformation,
existing unsupervised methods for RE-ID rarely consider clustering while
we introduce an asymmetric metric clustering to characterize data in the learned space. \ws{While the methods proposed in \cite{2015_TCSVT_ASM, 2013_AVSS_RCCA,2015_TCSVT_RCCA} could
also learn view-specific mappings, they are supervised methods and more importantly cannot be generalized to handle unsupervised RE-ID.}
Apart from our model, there have been some clustering-based metric learning models
\cite{2007_CVPR_AML,2015_NC_uNCA}. However, to the best of our knowledge, there has been no such
attempt in the RE-ID community before.
This is potentially because clustering is more susceptible to view-specific interference,
and thus data points from the same view are more inclined to be clustered together
than images of the same person across views.
Fortunately, \ws{by formulating asymmetric learning and further limiting the discrepancy between view-specific transforms}, this problem can be
alleviated in our model. Therefore, our model is essentially different from these models
not only in formulation but also
in that our model is able to better deal with cross-view matching problem by treating
each view asymmetrically. We will discuss the differences between our model and the
existing ones in detail in Sec. \ref{SecFairCmp}.
\section{Methodology}
\subsection{Problem Formulation}
Under a conventional RE-ID setting, suppose we have a surveillance camera network that
consists of $V$ camera views, from each of which we have collected
$N_p\;(p = 1,\cdots,V)$ images and thus there are $N = N_1+\cdots+N_V$ images in total as training samples.
Let \modify{ $\bm{X} = [\mathbf{x}_1^1,\cdots,\mathbf{x}_{N_1}^1,\cdots,\mathbf{x}_{1}^V,\cdots,\mathbf{x}_{N_V}^V]\in \mathbb{R}^{M \times N}$}
denote the training set, with each column $\mathbf{x}_i^p$ $(i = 1,\cdots,N_p; p = 1,\cdots,V)$
corresponding to an $M$-dimensional representation of the $i$-th image from the $p$-th
camera view.
Our goal is to learn $V$ mappings i.e., $\bm{U}^1,\cdots,\bm{U}^V$,
where $\bm{U}^p \in \mathbb{R}^{M \times T} (p = 1,\cdots,V)$,
corresponding to each camera view,
and thus we can project the original representation $\mathbf{x}_i^p$
from the original space $\mathbb{R}^M$
into a shared space $\mathbb{R}^T$
in order to alleviate the view-specific interference.
\subsection{Modelling}\label{Sec3}
Now we are looking for some transformations to map our data
into a shared space where we can better separate the
images of one person from those of different persons.
Naturally, this goal can be achieved by narrowing intra-class discrepancy and meanwhile
pulling the centers of all classes away from each other.
In an unsupervised scenario, however, we have no labeled data to tell our model
how it can exactly distinguish one person from another who has a confusingly similar
appearance with him.
Therefore, it is acceptable to relax the original idea:
we focus on gathering similar person images together, and hence separating relatively dissimilar ones.
Such a goal can be modelled by minimizing an objective function like that of $k$-means
clustering \cite{KMEANS}:
\begin{equation}\label{Eq0}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}}\mathcal{F}_{intra}= \sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{\mathrm{T}}\mathbf{x}_i - \mathbf{c}_k \rVert^2,
\end{aligned}
\end{equation}
where $K$ is the number of clusters,
$\mathbf{c}_k$ denotes the centroid of the $k$-th cluster and
$\mathcal{C}_k = \{ i | \bm{U}^{\mathrm{T}}\mathbf{x}_i \in k$-th cluster$\}$.
However, clustering results may be severely affected
by view-specific bias in cross-view problems.
In the context of RE-ID, the feature distortion could be view-sensitive due to view-specific interference like
different lighting conditions and occlusions \cite{2015_TCSVT_ASM}.
Such interference
might be disturbing or even dominant when searching for similar person images across views during the
clustering procedure. To address this cross-view problem,
we learn specific projection for each view rather than a universal one
to explicitly model the effect of view-specific interference and to alleviate it.
Therefore, the idea can be further formulated
by minimizing an objective function below:
\begin{equation}\label{Eq1}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}^1,\cdots,\bm{U}^V}\mathcal{F}_{intra}= &\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \mathbf{c}_k \rVert^2\\
s.t.\qquad \bm{U}^{p\mathrm{T}}&\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V),
\end{aligned}
\end{equation}
where the notation is similar to Eq. (\ref{Eq0}), with
$p$ denoting the view index,
$\bm{\Sigma}^p = \bm{X}^p\bm{X}^{p\mathrm{T}}/ N_p + \alpha \bm{I}$, and $\bm{I}$ denoting the identity matrix;
the term $\alpha\bm{I}$ avoids singularity of the covariance matrix.
The transformation $\bm{U}^p$ that corresponds to each instance $\mathbf{x}_i^p$ is determined
by the camera view which $\mathbf{x}_i^p$ comes from.
The quasi-orthogonal constraints on $\bm{U}^p$ ensure that the model will
not simply give zero matrices. By combining clustering with asymmetric metric learning, we actually realize an asymmetric metric clustering on RE-ID data across camera views.
Intuitively, if we minimize this objective function directly,
$\bm{U}^p$ will largely depend on the data distribution
from the $p$-th view. Now that there is specific bias on each view,
any $\bm{U}^p$ and $\bm{U}^q$ could be arbitrarily different.
This result is very natural,
but large inconsistencies among the learned transformations are
not what we exactly expect,
because the transformations are with respect to person images from different views: they are inherently correlated and homogeneous.
More critically, largely different projection basis pairs would fail to
capture the discriminative nature of cross-view images, producing an even worse
matching result.
Hence, to strike a balance between the ability to capture discriminative nature and
the capability to alleviate view-specific bias, we embed a cross-view consistency regularization term
into our objective function. Then, for better tractability,
we divide the intra-class term by its scale $N$, so that the regularization parameter
is not sensitive to the number of training samples.
Thus, our optimization task becomes
\modify{
\begin{equation}\label{Eq2}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}^1,\cdots,\bm{U}^V} \mathcal{F}_{obj} = \frac{1}{N}&\mathcal{F}_{intra} + \lambda\mathcal{F}_{consistency} \\
= \frac{1}{N}\sum_{k=1}^K &\sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \mathbf{c}_k \rVert^2
+\lambda \sum_{p\neq q} \lVert \bm{U}^p-\bm{U}^q\rVert_F^2 \\
s.t.\qquad &\bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V),
\end{aligned}
\end{equation}}
where $\lambda$ is the cross-view regularizer and $\lVert\cdot\rVert_F$ denotes the Frobenius norm
of a matrix. We call the above model the \emph{Clustering-based Asymmetric MEtric Learning (CAMEL)}.
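To make the notation concrete, the following NumPy sketch (ours, with illustrative shapes and variable names of our own choosing) evaluates the objective in Eq. (\ref{Eq2}) for given view-specific projections and cluster assignments; it is only an illustration, not an optimized implementation.

\begin{verbatim}
import numpy as np

def camel_objective(X_views, U_views, labels, centers, lam):
    """CAMEL objective: mean intra-cluster scatter in the projected space
    plus the cross-view consistency regularizer.
    X_views: list of (M, N_p) arrays, one per camera view.
    U_views: list of (M, T) view-specific projections.
    labels:  list of (N_p,) cluster indices; centers: (K, T) centroids."""
    N = sum(X.shape[1] for X in X_views)
    intra = 0.0
    for X, U, lab in zip(X_views, U_views, labels):
        Y = U.T @ X                                  # project view p
        intra += np.sum((Y - centers[lab].T) ** 2)   # distance to assigned centroids
    V = len(U_views)
    consistency = sum(np.linalg.norm(U_views[p] - U_views[q]) ** 2
                      for p in range(V) for q in range(V) if p != q)
    return intra / N + lam * consistency
\end{verbatim}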
To illustrate the differences between symmetric and asymmetric metric clustering in structuring data
in the RE-ID problem,
we further show the data distributions in Figure \ref{FigP}.
We can observe from Figure \ref{FigP} that the view-specific
bias is obvious in the original space: triangles in the upper left and circles in the lower right.
In the common space
learned by symmetric metric clustering, the bias is still obvious.
In contrast, in the shared space learned by asymmetric metric clustering,
the bias is alleviated and thus the data is better characterized according to the identities
of the persons, i.e., samples of one person (one color) gather together into a cluster.
\begin{figure}[t]
\hspace{-1ex}
\subfigure[Original]{
\includegraphics[width=0.33\linewidth]{feature_1.pdf}
}
\hspace{-2.5ex}
\subfigure[Symmetric]{
\includegraphics[width=0.33\linewidth]{symDistribution_7color.pdf}
}
\hspace{-2.5ex}
\subfigure[Asymmetric]{
\includegraphics[width=0.33\linewidth]{metric_1.pdf}
}
\caption{\label{FigP}Illustration of how symmetric and asymmetric metric clustering structure data
using our method for the unsupervised RE-ID problem. The samples are from the SYSU dataset \cite{2015_TCSVT_ASM}.
We performed PCA for visualization. One shape (triangle or circle) stands for samples from one view, while one color indicates samples of one person.
(a) Original distribution (b) distribution in the common space learned by symmetric metric clustering
(c) distribution in the shared space learned by asymmetric metric clustering. (Best viewed in color)}
\end{figure}
\subsection{Optimization}
For convenience, we denote $\mathbf{y}_i=\bm{U}^{p\mathrm{T}}\mathbf{x}_i^p$. Then we have $\bm{Y} \in \mathbb{R}^{T \times N}$,
where each column $\mathbf{y}_i$
corresponds to the projected new representation of that from $\bm{X}$. For optimization, we rewrite our objective function in a more compact form.
The first term can be rewritten as follows \cite{NMF}:
\begin{equation}\label{Eq3}
\small
\begin{aligned}
\frac{1}{N}\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \mathbf{y}_i - \mathbf{c}_k \rVert^2
=\frac{1}{N}[\mathrm{Tr}(\bm{Y}^{\mathrm{T}}\bm{Y})-\mathrm{Tr}(\bm{H}^{\mathrm{T}}\bm{Y}^{\mathrm{T}}\bm{YH})], \\
\end{aligned}
\end{equation}
where
\begin{equation}\label{EqH}
\small
\bm{H} =
\begin{bmatrix}
\mathbf{h}_1,...,\mathbf{h}_K
\end{bmatrix}
,\quad \mathbf{h}_k^{\mathrm{T}}\mathbf{h}_l =
\begin{cases}
0 & k\neq l \\
1 & k= l
\end{cases}
\end{equation}
\begin{equation}\label{EqColH}
\small
\mathbf{h}_k =
\begin{bmatrix}
0,\cdots,0,1,\cdots,1,0,\cdots,0,1,\cdots
\end{bmatrix}
^{\mathrm{T}}/\sqrt{n_k}
\end{equation}
is an indicator vector with the $i$-th entry corresponding to the instance $\mathbf{y}_i$,
indicating that $\mathbf{y}_i$ is in the $k$-th cluster if the corresponding entry does not equal zero.
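As a small illustration (ours), the indicator matrix $\bm{H}$ can be assembled from cluster assignments as follows, and its columns can be checked to be orthonormal as required above.

\begin{verbatim}
import numpy as np

def build_H(labels, K):
    """Indicator matrix H (N x K): column k equals 1/sqrt(n_k) on the samples
    assigned to cluster k and 0 elsewhere, so that H^T H = I."""
    N = len(labels)
    H = np.zeros((N, K))
    for k in range(K):
        idx = np.flatnonzero(labels == k)
        if idx.size > 0:
            H[idx, k] = 1.0 / np.sqrt(idx.size)
    return H

labels = np.array([0, 1, 0, 2, 1, 0])
H = build_H(labels, K=3)
assert np.allclose(H.T @ H, np.eye(3))   # columns are orthonormal
\end{verbatim}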
Then we construct
\modify{
\begin{equation}
\small
\widetilde {\bm{X}} =
\begin{bmatrix}
\mathbf{x}^1_1&\cdots&\mathbf{x}^1_{N_1}& \mathbf{0}& \cdots& \mathbf{0}& \cdots& \mathbf{0} \\
\mathbf{0}&\cdots&\mathbf{0}& \mathbf{x}^2_1&\cdots& \mathbf{x}^2_{N_2}& \cdots& \mathbf{0} \\
\vdots&\vdots&\vdots& \vdots&\vdots& \vdots& \vdots& \vdots \\
\mathbf{0}&\cdots&\mathbf{0}& \mathbf{0}&\cdots& \mathbf{0}& \cdots& \mathbf{x}^V_{N_V}
\end{bmatrix}
\end{equation}}
\begin{equation}
\small
\widetilde {\bm{U}} =
\begin{bmatrix}
\bm{U}^{1\mathrm{T}}, \cdots, \bm{U}^{V\mathrm{T}}
\end{bmatrix}
^{\mathrm{T}}
,
\end{equation}
so that
\begin{equation}\label{EqY}
\small
\bm{Y} = \widetilde{\bm{U}}^{\mathrm{T}}\widetilde{\bm{X}},
\end{equation}
and thus Eq. (\ref{Eq3}) becomes
\begin{equation}
\small
\begin{aligned}
&\frac{1}{N}\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \mathbf{y}_i - \mathbf{c}_k \rVert^2 \\
=&\frac{1}{N}\mathrm{Tr}(\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}})
-\frac{1}{N}\mathrm{Tr}({\bm{H}}^{\mathrm{T}}\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}}\bm{H}).
\end{aligned}
\end{equation}
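The block structures of $\widetilde{\bm{X}}$ and $\widetilde{\bm{U}}$ can be assembled as in the sketch below (ours, with toy dimensions); it also checks that $\bm{Y}=\widetilde{\bm{U}}^{\mathrm{T}}\widetilde{\bm{X}}$ coincides with projecting each view by its own $\bm{U}^p$.

\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
M, T, V = 64, 64, 2
Ns = [5, 7]                                             # samples per view (toy sizes)
X_views = [rng.normal(size=(M, n)) for n in Ns]         # per-view data X^p
U_views = [rng.normal(size=(M, T)) for _ in range(V)]   # per-view projections U^p

X_tilde = block_diag(*X_views)      # (V*M, N) block-diagonal data matrix
U_tilde = np.vstack(U_views)        # (V*M, T) stacked projections
Y = U_tilde.T @ X_tilde             # projected representations

# equivalent to projecting each view with its own U^p
Y_check = np.hstack([U.T @ X for U, X in zip(U_views, X_views)])
assert np.allclose(Y, Y_check)
\end{verbatim}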
As for the second term, we can also rewrite it as follows:
\begin{equation}
\small
\lambda \sum_{p\neq q} \lVert \bm{U}^p-\bm{U}^q\rVert_F^2 = \lambda\mathrm{Tr}(\widetilde{\bm{U}}^{\mathrm{T}}\bm{D\widetilde U}),
\end{equation}
where
\begin{equation}
\small
\bm{D} =
\begin{bmatrix}
(V-1)\bm{I}& -\bm{I}& -\bm{I}&\cdots &-\bm{I} \\
-\bm{I}& (V-1)\bm{I}& -\bm{I}&\cdots &-\bm{I} \\
\vdots&\vdots&\vdots&\vdots&\vdots \\
-\bm{I}& -\bm{I}& -\bm{I}& \cdots&(V-1)\bm{I}
\end{bmatrix}.
\end{equation}
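In code, $\bm{D}$ is simply a Kronecker product, and the identity between the pairwise consistency term and $\mathrm{Tr}(\widetilde{\bm{U}}^{\mathrm{T}}\bm{D}\widetilde{\bm{U}})$ can be checked numerically. In the sketch below (ours), the sum runs over unordered view pairs, which is how we read $\sum_{p\neq q}$ here.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T, V = 64, 64, 3
U_views = [rng.normal(size=(M, T)) for _ in range(V)]
U_tilde = np.vstack(U_views)                              # (V*M, T)

# D: (V-1)*I on the diagonal blocks and -I elsewhere, i.e. a Kronecker product
D = np.kron(V * np.eye(V) - np.ones((V, V)), np.eye(M))

pairwise = sum(np.linalg.norm(U_views[p] - U_views[q]) ** 2
               for p in range(V) for q in range(p + 1, V))   # unordered pairs
assert np.allclose(pairwise, np.trace(U_tilde.T @ D @ U_tilde))
\end{verbatim}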
Then, it is reasonable to relax the constraints
\begin{equation}
\small
\bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V)
\end{equation}
to
\begin{equation}
\small
\sum_{p=1}^V \bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \widetilde {\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde {\bm{U}} = V\bm{I},
\end{equation}
where $\widetilde{\bm{\Sigma}} = \mathrm{diag}(\bm{\Sigma}^1, \cdots, \bm{\Sigma}^V)$,
because what we expect is simply to prevent each $\bm{U}^p$ from shrinking to a zero matrix.
The relaxed version of the constraints satisfies this need, and it
avoids tedious computations.
By now we can rewrite our optimization task as follows:
\begin{equation}\label{optFinal}
\small
\begin{aligned}
\mathop{\min}_{\widetilde{\bm{U}}}\mathcal{F}_{obj} &=
\frac{1}{N}\mathrm{Tr}(\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}})
+\lambda\mathrm{Tr}(\widetilde{\bm{U}}^{\mathrm{T}}\bm{D\widetilde U}) \\
&- \frac{1}{N}\mathrm{Tr}({\bm{H}}^{\mathrm{T}}\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}}\bm{H})
\\
&s.t.\qquad \widetilde {\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde {\bm{U}} = V\bm{I}.
\end{aligned}
\end{equation}
It is easy to realize from Eq. (\ref{Eq2}) that our objective function
is highly non-linear and non-convex. Fortunately, in the form of Eq. (\ref{optFinal})
we can find that once $\bm{H}$ is fixed, Lagrange's method can be applied to
our optimization task. And again from Eq. (\ref{Eq2}),
it is exactly the objective of $k$-means clustering once $\widetilde{\bm{U}}$ is fixed \cite{KMEANS}.
Thus, we can adopt an alternating algorithm to solve the optimization problem.
\noindent \textbf{Fix $\bm{H}$ and optimize $\widetilde{\bm{U}}$.} Now we see how we optimize $\widetilde{\bm{U}}$.
After fixing $\bm{H}$ and applying the method
of Lagrange multipliers, our optimization task (\ref{optFinal})
is transformed into an eigen-decomposition problem as follows:
\begin{equation}\label{EqEigenDe}
\small
\bm{G}\mathbf{u} = \gamma \mathbf{u},
\end{equation}
where $\gamma$ is the Lagrange multiplier (and also is the eigenvalue here) and
\begin{equation}\label{EqG}
\small
\bm{G} = \widetilde{\bm{\Sigma}}^{-1}(\lambda \bm{D}+\frac{1}{N}\widetilde{\bm{X}}\widetilde{\bm{X}}^{\mathrm{T}}-\frac{1}{N}\widetilde{\bm{X}}\bm{HH}^{\mathrm{T}}\widetilde{\bm{X}}^{\mathrm{T}}).
\end{equation}
Then, $\widetilde{\bm{U}}$ can be obtained by solving this eigen-decomposition problem.
\noindent \textbf{Fix $\widetilde{\bm{U}}$ and optimize $\bm{H}$.} As for the optimization of $\bm{H}$, we can simply fix $\widetilde{\bm{U}}$
and conduct $k$-means clustering in the learned space. Each column of $\bm{H}$,
$\mathbf{h}_k$, is thus constructed according to the clustering result.
Based on the analysis above, we can now propose the main algorithm
of CAMEL in Algorithm \ref{AlgCamel}. We set the maximum number of iterations to 100.
After obtaining $\widetilde{\bm{U}}$, we decompose it back into $\{\bm{U}^1,\cdots,\bm{U}^V\}$.
The algorithm is guaranteed to converge, as stated in the following proposition:
\final{
\begin{prop}
In Algorithm \ref{AlgCamel}, $\mathcal{F}_{obj}$ is guaranteed to converge.
\end{prop}
\begin{proof}
In each iteration, when $\widetilde{\bm{U}}$ is fixed,
if $\bm{H}$ is a local minimizer, $k$-means leaves $\bm{H}$ unchanged; otherwise it seeks a local minimizer.
When $\bm{H}$ is fixed, $\widetilde{\bm{U}}$ has a closed-form solution which is the global minimizer.
Therefore, $\mathcal{F}_{obj}$ decreases step by step.
Since $\mathcal{F}_{obj}\geq 0$ is bounded below by $0$, it is guaranteed to converge.
\end{proof}
}
\begin{algorithm}[t]\label{AlgCamel}
\scriptsize
\caption{CAMEL}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$\widetilde{\bm{X}},K,\epsilon=10^{-8}$}
\Output{$\widetilde{\bm{U}}$}
Conduct $k$-means clustering with respect to
each column of $\widetilde{\bm{X}}$ to initialize $\bm{H}$ according to Eq. (\ref{EqH}) and (\ref{EqColH}). \\
Fix $\bm{H}$ and solve the eigen-decomposition problem described by Eq. (\ref{EqEigenDe}) and (\ref{EqG})
to construct $\widetilde{\bm{U}}$. \\
\While{decrement of $\mathcal{F}_{obj} > \epsilon$ \& maximum iteration unreached}
{
\begin{itemize}[leftmargin=*]
\setlength{\topsep}{1ex}
\setlength{\itemsep}{-0.1ex}
\setlength{\parskip}{0.1\baselineskip}
\vspace{0.1cm}
\item Construct $\bm{Y}$ according to Eq. (\ref{EqY}). \\
\item Fix $\widetilde{\bm{U}}$ and conduct $k$-means clustering with respect to
each column \par of $\bm{Y}$ to update $\bm{H}$ according to Eq. (\ref{EqH}) and (\ref{EqColH}). \\
\item Fix $\bm{H}$ and solve the eigen-decomposition problem described by \par Eq. (\ref{EqEigenDe}) and (\ref{EqG})
to update $\widetilde{\bm{U}}$.
\end{itemize}
}
\end{algorithm}
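For completeness, the sketch below (ours) re-assembles the above pieces into a single routine mirroring Algorithm \ref{AlgCamel}, using scikit-learn's $k$-means and SciPy's generalized symmetric eigensolver. Keeping the $T$ eigenvectors with the smallest eigenvalues, rescaled by $\sqrt{V}$, reflects our reading of the minimization under the constraint $\widetilde{\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde{\bm{U}}=V\bm{I}$; it is a schematic illustration rather than the implementation released with this paper.

\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag, eigh
from sklearn.cluster import KMeans

def camel(X_views, K, T, lam=0.01, alpha=0.01, max_iter=100):
    """Sketch of Algorithm 1. X_views: list of (M, N_p) arrays, one per view.
    Returns the list of view-specific projections U^p, each of size (M, T)."""
    V, M = len(X_views), X_views[0].shape[0]
    N = sum(X.shape[1] for X in X_views)
    X_tilde = block_diag(*X_views)                              # (V*M, N)
    Sigma = block_diag(*[X @ X.T / X.shape[1] + alpha * np.eye(M)
                         for X in X_views])                     # Sigma_tilde
    D = np.kron(V * np.eye(V) - np.ones((V, V)), np.eye(M))     # consistency term

    def build_H(labels):
        H = np.zeros((N, K))
        for k in range(K):
            idx = np.flatnonzero(labels == k)
            if idx.size:
                H[idx, k] = 1.0 / np.sqrt(idx.size)
        return H

    def update_U(H):
        XH = X_tilde @ H
        A = lam * D + (X_tilde @ X_tilde.T) / N - (XH @ XH.T) / N
        _, vecs = eigh(A, Sigma)             # generalized problem A u = gamma Sigma u
        return np.sqrt(V) * vecs[:, :T]      # smallest T; U^T Sigma U = V * I

    # initialization: k-means on the columns of X_tilde, then alternate
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(X_tilde.T)
    U_tilde = update_U(build_H(labels))
    for _ in range(max_iter):
        Y = (U_tilde.T @ X_tilde).T                             # projected samples
        labels = KMeans(n_clusters=K, n_init=10).fit_predict(Y)
        U_tilde = update_U(build_H(labels))
        # (the convergence test on the decrement of F_obj is omitted here)
    return [U_tilde[p * M:(p + 1) * M, :] for p in range(V)]
\end{verbatim}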
\section{Experiments}
\subsection{Datasets}
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_VIPER.pdf}
}
\subfigure[\label{FigDatasetsCUHK01}]{
\includegraphics[width=0.137\linewidth]{collage_CUHK01.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_CUHK03.pdf}
}
\subfigure[\label{FigDatasetsSYSU}]{
\includegraphics[width=0.137\linewidth]{collage_SYSU.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_Market.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_ExMarket.pdf}
}
\caption{\label{FigDatasets}Samples of the datasets. Every two images in
a column are from one identity across two disjoint camera views.
(a) VIPeR (b) CUHK01 (c) CUHK03 (d) SYSU (e) Market (f) ExMarket. (Best viewed in color)}
\end{center}
\end{figure}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.5cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
\# Samples & 1,264 & 3,884 & 13,164 & 24,448 & 32,668 & 236,696 \\
\# Views & 2 & 2 & 6 & 2 & 6 & 6 \\
\bottomrule
\end{tabular}%
\caption{\label{TableDatasets}Overview of dataset scales. ``\#'' means ``the number of''.}
\end{center}
\end{table}
Since unsupervised models are more meaningful when the scale of the problem
is larger, our experiments were conducted on relatively big datasets
except VIPeR \cite{VIPER} which is small but widely used.
Various degrees of view-specific bias can be observed in all these datasets (see Figure \ref{FigDatasets}).
\noindent \textbf{The VIPeR dataset} contains 632 identities,
with two images captured from two camera views of each identity.
\noindent \textbf{The CUHK01 dataset} \cite{CUHK01} contains 3,884 images of
971 identities captured from
two disjoint views. There are two images of every identity from each view.
\noindent \textbf{The CUHK03 dataset} \cite{2014_CVPR_CUHK03} contains 13,164 images
of 1,360 pedestrians captured from six surveillance camera views.
Besides hand-cropped images, samples detected
by a state-of-the-art pedestrian detector are provided.
\noindent \textbf{The SYSU dataset} \cite{2015_TCSVT_ASM} includes 24,448 RGB images of 502 persons under two surveillance cameras.
One camera view mainly
captured the frontal or back views of persons, while the other observed mostly
the side views.
\noindent \textbf{The Market-1501 dataset} \cite{2015_ICCV_MARKET} (Market) contains 32,668 images of 1,501 pedestrians, each of which was
captured by at most six cameras. All of the images were cropped by a pedestrian
detector. There are also some poorly detected samples in this dataset serving as
distractors.
\noindent \textbf{The ExMarket dataset}\final{\footnote{Demo code for the model and the ExMarket dataset can be found on \url{https://github.com/KovenYu/CAMEL}.}}. In order to evaluate unsupervised RE-ID methods at an even larger and more realistic scale,
we further combined \textbf{the MARS dataset} \cite{MARS} with
Market. MARS is a video-based RE-ID dataset which contains
20,715 tracklets of 1,261 pedestrians. All the identities from MARS are a
subset of those from Market.
We then took 20\% of the frames (one in every five successive frames) from the tracklets
and combined them with Market to obtain an extended version of Market (\textbf{ExMarket}).
The imbalance between the numbers of samples from these 1,261 persons and the other
240 persons makes this dataset more challenging and realistic. There are 236,696 images
in ExMarket in total, of which 112,351 images form the training set.
A brief overview of the dataset scales can be found in Table \ref{TableDatasets}.
\subsection{Settings}
\noindent \textbf{Experimental protocols}:
A widely adopted protocol was followed on VIPeR in our
experiments \cite{2015_CVPR_LOMO}, i.e., randomly dividing the 632 pairs of images into
two halves, one of which was used as training set and the other as testing set. This
procedure was repeated 10 times to report average performance.
Only
single-shot experiments were conducted.
The experimental protocol for CUHK01 was the same as that in \cite{2015_CVPR_LOMO}.
We randomly selected 485 persons as training set and the other 486 ones as testing set.
The evaluation procedure was repeated 10 times. Experiments under both multi-shot and single-shot
settings were conducted.
The CUHK03 dataset was provided together with its recommended evaluating protocol \cite{2014_CVPR_CUHK03}.
We followed the provided protocol, where images of 1,160 persons were chosen as training set,
images of another 100 persons as
validation set and the remainders as testing set.
This procedure was repeated 20 times.
In our experiments, detected samples were adopted since they
are closer to real-world settings.
Both multi-shot and single-shot experiments were conducted.
As for the SYSU dataset, we randomly picked 251 pedestrians' images as training set
and the others as testing set.
In the testing stage, we basically followed the protocol as in \cite{2015_TCSVT_ASM}. That is,
we randomly chose one and three images of each pedestrian as gallery for single-shot and multi-shot experiments, respectively.
We repeated the testing procedure 10 times.
Market is somewhat different from the others. The evaluation protocol was also
provided along with the data \cite{2015_ICCV_MARKET}. Since the images of one person
came from at most six views, single-shot experiments were not suitable. Instead,
multi-shot experiments were conducted and both cumulative matching characteristic (CMC) and
mean average precision (MAP) were adopted for evaluation \cite{2015_ICCV_MARKET}.
The protocol of ExMarket was identical to that of Market since the identities were
completely the same as we mentioned above.
\noindent \textbf{Data representation}:
In our experiments we used the deep-learning-based JSTL feature proposed in \cite{2016_CVPR_JSTL}.
We implemented it using the 56-layer ResNet \cite{2016_CVPR_resnet}, which
produced $64$-D features.
The original JSTL was adopted in our implementation to extract features on SYSU, Market and ExMarket.
Note that the training set of the original JSTL contained VIPeR, CUHK01 and CUHK03,
which would violate the unsupervised setting.
So we trained a new JSTL model without VIPeR in its training set to extract
features on VIPeR. Similar procedures were followed for CUHK01 and CUHK03.
\noindent \textbf{Parameters}:
We set $\lambda$, the cross-view consistency regularizer, to $0.01$.
We also evaluated the situation when $\lambda$ goes to infinite, i.e.,
the symmetric version of our model in Sec. \ref{SecFurtherEval},
to show how important the asymmetric modelling is.
\Koven{Regarding the parameter $T$, which is the feature dimension after the transformation learned by CAMEL, we set $T$ equal to the original feature dimension, i.e., $64$, for simplicity. In our experiments, we found that CAMEL can align data distributions across camera views even without performing any further dimension reduction.
This may be due to the fact that, unlike conventional subspace learning models, the transformations learned by CAMEL are view-specific for different camera views and always non-orthogonal. Hence, the learned view-specific transformations can already reduce the discrepancy between the data distributions of different camera views.}
As for $K$, we found that
our model was not sensitive to $K$ when $N\gg K$ and $K$ was not too small
(see Sec. \ref{SecFurtherEval}),
so we set $K = 500$.
These parameters were fixed for all datasets.
\subsection{Comparison}\label{SecFairCmp}
Unsupervised models are more significant when applied to larger datasets.
In order to make comprehensive and fair comparisons, in this section
we compare CAMEL with the most comparable unsupervised models
on six datasets whose scales vary from hundreds to hundreds of thousands of samples.
We show the comparative results measured by
the rank-1 accuracies of CMC and MAP (\%)
in Table \ref{TableJSTL}.
\noindent \textbf{Comparison to Related Unsupervised RE-ID Models}.
In this subsection we compare CAMEL with the sparse dictionary learning
model (denoted as Dic) \cite{2015_BMVC_DIC},
sparse representation learning model ISR \cite{2015_PAMI_ISR},
kernel subspace learning model RKSL \cite{2016_ICIP_Wang} and
sparse auto-encoder (SAE) \cite{SAE1,SAE2}.
We tried several sets of parameters for them, and report the best ones.
We also use the Euclidean distance adopted in the original JSTL paper \cite{2016_CVPR_JSTL} as a baseline (denoted as JSTL).
From Table \ref{TableJSTL}
we can observe that
CAMEL outperforms other models on all the datasets on both settings.
In addition, we can further see from Figure \ref{FigCMC} that CAMEL outperforms other models
at any rank.
One of the main reasons is that the view-specific
interference is noticeable in these datasets. For example, we can see in Figure \ref{FigDatasetsCUHK01} that
on CUHK01, the
changes of illumination are extremely severe and even human beings may have difficulties in
recognizing the identities in those images across views.
This impedes other symmetric models from achieving higher accuracies,
because they potentially hold an assumption that
the invariant and discriminative information can be retained and exploited through a universal
transformation for all views.
But CAMEL relaxes this assumption by
learning an asymmetric metric and can therefore outperform other models significantly.
In Sec. \ref{SecFurtherEval} we will see that the performance of CAMEL drops considerably
when it degrades to a symmetric model.
\begin{table}[t]
\scriptsize
\begin{center}
\setlength{\tabcolsep}{0.16cm}
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
Dic \begin{tiny}\cite{2015_BMVC_DIC}\end{tiny} &29.9&49.3/52.9&27.4/36.5&21.3/28.6&50.2(22.7)& 52.2(21.2) \\
ISR \begin{tiny}\cite{2015_PAMI_ISR}\end{tiny} &27.5 &53.2/55.7 &31.1/38.5& 23.2/33.8& 40.3(14.3)&- \\
RKSL \begin{tiny}\cite{2016_ICIP_Wang}\end{tiny} &25.8 & 45.4/50.1 &25.8/34.8 &17.6/23.0 &34.0(11.0) &- \\
SAE \begin{tiny}\cite{SAE1}\end{tiny} &20.7 &45.3/49.9 &21.2/30.5 &18.0/24.2 &42.4(16.2) &44.0(15.1) \\
JSTL \begin{tiny}\cite{2016_CVPR_JSTL}\end{tiny} &25.7 &46.3/50.6 &24.7/33.2 &19.9/25.6 &44.7(18.4) &46.4(16.7)\\
\midrule
AML \begin{tiny}\cite{2007_CVPR_AML}\end{tiny} &23.1 &46.8/51.1 &22.2/31.4 &20.9/26.4 &44.7(18.4) &46.2(16.2) \\
UsNCA \begin{tiny}\cite{2015_NC_uNCA}\end{tiny} &24.3 &47.0/51.7 &19.8/29.6 &21.1/27.2 &45.2(18.9) &- \\
\midrule
CAMEL & \textbf{30.9} & \textbf{57.3/61.9} & \textbf{31.9/39.4} & \textbf{30.8/36.8} & \textbf{54.5}(\textbf{26.3}) & \textbf{55.9}(\textbf{23.9}) \\
\bottomrule
\end{tabular}%
\caption{\label{TableJSTL}Comparative results of unsupervised models on the six datasets, measured by
rank-1 accuracies and MAP (\%).
``-'' means prohibitive time consumption due to time complexities of the models.
``SS'' represents single-shot setting and ``MS'' represents multi-shot setting.
For Market and ExMarket, MAP is also provided in the parentheses due to the
requirement in the protocol \cite{2015_ICCV_MARKET}.
Such a format is also applied in the following tables.}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\subfigure[VIPeR]{
\includegraphics[width=0.47\linewidth]{VIPER_.pdf}
}
\subfigure[CUHK01]{
\includegraphics[width=0.47\linewidth]{CUHK01_.pdf}
}
\subfigure[CUHK03]{
\includegraphics[width=0.47\linewidth]{CUHK03_.pdf}
}
\subfigure[SYSU]{
\includegraphics[width=0.47\linewidth]{SYSU_.pdf}
}
\subfigure[Market]{
\includegraphics[width=0.47\linewidth]{Market_.pdf}
}
\subfigure[ExMarket]{
\includegraphics[width=0.47\linewidth]{ExMarket_.pdf}
}
\caption{\label{FigCMC}CMC curves.
\modify{For CUHK01, CUHK03 and SYSU, we take the results under the single-shot setting as examples.
Similar patterns can be observed under the multi-shot setting.}}
\end{center}
\end{figure}
\noindent \textbf{Comparison to Clustering-based Metric Learning Models}.
In this subsection we compare CAMEL with
a typical model AML \cite{2007_CVPR_AML} and a recently proposed model UsNCA \cite{2015_NC_uNCA}.
We can see from Fig. \ref{FigCMC} and Table \ref{TableJSTL} that compared to them, CAMEL achieves
noticeable improvements on all the six datasets.
One of the major reasons is that
they do not consider the view-specific bias which can
be very disturbing in clustering, making them unsuitable for the RE-ID problem.
\ws{In comparison}, CAMEL alleviates such disturbances by asymmetric modelling.
This factor contributes to the much better performance of CAMEL.
\noindent \textbf{Comparison to the State-of-the-Art.}
In the previous subsections, we compared with existing unsupervised RE-ID methods using the same features.
In this part, we also compare with the results reported in the literature.
Note that most existing unsupervised RE-ID methods have not been evaluated on large datasets like CUHK03, SYSU, or Market,
so Table \ref{TableSotA} only reports the comparative results
on VIPeR and CUHK01.
We additionally compared against existing unsupervised RE-ID models, including the
hand-crafted-feature-based SDALF \cite{2010_CVPR_SDALF} and CPS \cite{CAVIAR},
the transfer-learning-based UDML \cite{2016_CVPR_tDIC},
the graph-learning-based model (denoted as GL) \cite{2016_ECCV_Kodirov},
and the local-salience-learning-based GTS \cite{2014_BMVC_GTS} and SDC \cite{2013_CVPR_SALIENCE}.
We can observe from Table \ref{TableSotA} that
our model CAMEL can outperform the state-of-the-art by large margins on CUHK01.
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccccc}
\toprule
Model & SDALF & CPS & UDML
& GL & GTS
& SDC & CAMEL \\
&\cite{2010_CVPR_SDALF} &\cite{CAVIAR} &\cite{2016_CVPR_tDIC} &\cite{2016_ECCV_Kodirov} &\cite{2014_BMVC_GTS} & \cite{2013_CVPR_SALIENCE} & \\
\midrule
VIPeR & 19.9 & 22.0 & 31.5 & \textbf{33.5} & 25.2 & 25.8 & 30.9 \\
CUHK01 & 9.9 & - & 27.1 & 41.0 & - & 26.6 & \textbf{57.3} \\
\bottomrule
\end{tabular}%
\caption{\label{TableSotA}Results compared to the state-of-the-art reported in literatures, measured by rank-1 accuracies (\%). ``-'' means no reported result.}
\end{center}
\end{table}
\noindent \textbf{Comparison to Supervised Models.}
Finally, in order to see how well CAMEL can approximate the performance of supervised RE-ID,
\Koven{we additionally compare CAMEL with its supervised version (denoted as CAMEL$_s$), which is easily derived by substituting the clustering results with the true labels, and with three standard supervised models,
including the widely used KISSME \cite{2012_CVPR_KISSME}, XQDA \cite{2015_CVPR_LOMO}, and the asymmetric distance model CVDCA \cite{2015_TCSVT_ASM}.
The results are shown in Table \ref{TableSupervised}.
We can see that CAMEL$_s$ outperforms CAMEL by various degrees,
indicating that label information can further improve CAMEL's performance.
Also from Table \ref{TableSupervised}, we notice that CAMEL can be comparable to other standard supervised models on some datasets like CUHK01,
and even outperform some of them.}
This is probably because the JSTL model used had not been fine-tuned on the target datasets: this choice was made for a fair comparison with unsupervised models, which work on completely unlabelled training data.
Nevertheless, this suggests that the performance of CAMEL may not be far below that of standard supervised RE-ID models.
\begin{table}[t]
\scriptsize
\setlength{\tabcolsep}{0.11cm}
\begin{tabular}{ccccccc}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
KISSME \begin{tiny}\cite{2012_CVPR_KISSME}\end{tiny} &28.4&53.0/57.1&37.8/45.4&24.7/31.8&51.1(24.5)& 48.0(18.3) \\
XQDA \begin{tiny}\cite{2015_CVPR_LOMO}\end{tiny} &28.9&54.3/58.2&36.7/43.7&25.2/31.7&50.8(24.4)& 47.4(18.1) \\
CVDCA \begin{tiny}\cite{2015_TCSVT_ASM}\end{tiny} &\textbf{37.6}&57.1/60.9&37.0/44.6&31.1/\textbf{38.9}&52.6(25.3)&51.5(22.6) \\
CAMEL$_s$ &33.7&\textbf{58.5/62.7}&\textbf{45.1/53.5}&\textbf{31.6}/37.6&\textbf{55.0}(\textbf{27.1})& \textbf{56.1}(\textbf{24.1}) \\
\midrule
CAMEL & 30.9 & 57.3/61.9 & 31.9/39.4 & 30.8/36.8 &54.5(26.3) & 55.9(23.9) \\
\bottomrule
\end{tabular}%
\caption{\label{TableSupervised}Results compared to supervised models using the same JSTL features.}
\end{table}
\subsection{Further Evaluations}\label{SecFurtherEval}
\noindent \textbf{The Role of Asymmetric Modeling}.
We show what happens if CAMEL degrades to a common symmetric model
in Table \ref{TableSym}. Apparently, without asymmetrically modelling each camera view,
our model worsens considerably, indicating that the asymmetric modeling for clustering
is rather important for addressing the cross-view matching problem in RE-ID as well as in our model.
\noindent \textbf{Sensitivity to the Number of Clustering Centroids}. We take
CUHK01, Market and ExMarket datasets as examples of different scales (see Table \ref{TableDatasets}) for this evaluation.
Table \ref{TableK} shows how the performance varies with different numbers of clustering centroids, $K$.
It is obvious that the performance
only fluctuates mildly when $N \gg K$ and $K$ is not too small.
Therefore CAMEL is not very sensitive to $K$ especially when applied to large-scale problems.
\final{To further explore the reason behind this,
we show in Table \ref{table:rate} the rate of clusters which contain more than one person,
in the initial stage and the convergence stage of Algorithm \ref{AlgCamel}.
We can see that \emph{(1)} although $K$ varies,
there are always a number of clusters containing more than one person in both the initial stage and the convergence stage.
This indicates that our model works \emph{without} requiring perfect clustering results.
And \emph{(2)}, although the number varies,
at the convergence stage it is consistently lower than at the initialization stage.
This shows that the clustering results are improved consistently.
These two observations suggest that
the clustering should be a means to learn the asymmetric metric, rather than an ultimate objective.}
\modify{
\noindent \textbf{Adaptation Ability to Different Features}.
Finally, we show that CAMEL can be effective not only when adopting deep-learning-based JSTL features.
We additionally adopted the hand-crafted LOMO feature proposed in \cite{2015_CVPR_LOMO}.
We performed PCA to produce $512$-D LOMO features, and the results are shown in Table \ref{TableLOMO}.
Among all the compared models, the results of Dic and ISR are the most comparable (Dic and ISR take all the second places). So for clarity, we only compare CAMEL with them and with the $L_2$ distance as a baseline.
From the table we can see that CAMEL outperforms them.
}
\begin{table}[t]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{0.11cm}
\begin{tabular}{ccccccc}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
CMEL & 27.5 & 52.5/54.9 & 29.8/37.5 & 25.4/30.9 & 47.6(21.5) & 48.7(20.0) \\
CAMEL & \textbf{30.9} & \textbf{57.3/61.9} & \textbf{31.9/39.4} & \textbf{30.8/36.8} & \textbf{54.5}(\textbf{26.3}) & \textbf{55.9}(\textbf{23.9}) \\
\bottomrule
\end{tabular}%
\caption{\label{TableSym}Performances of CAMEL compared to its symmetric version, denoted as CMEL.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccc}
\toprule
K & 250 & 500 & 750 & 1000 & 1250 \\
\midrule
CUHK01 & 56.59 & 57.35 & 56.26 & 55.12 & 52.75 \\
Market & 54.48 & 54.45 & 54.54 & 54.48 & 54.48 \\
ExMarket & 55.49 & 55.87 & 56.17 & 55.93 & 55.67 \\
\bottomrule
\end{tabular}%
\caption{\label{TableK}Performances of CAMEL when the number of clusters, K, varies.
Measured by single-shot rank-1 accuracies (\%) for CUHK01 and multi-shot for Market and ExMarket.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccc}
\toprule
K & 250 & 500 & 750 & 1000 & 1250 \\
\midrule
Initial Stage & 77.6\% & 57.0\% & 26.3\% & 11.6\% & 6.0\% \\
Convergence Stage & 55.8\% & 34.3\% & 18.2\% & 7.2\% & 4.8\% \\
\bottomrule
\end{tabular}%
\caption{\label{table:rate}
Rate of clusters containing more than one person on CUHK01.
A similar trend can be observed on the other datasets.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{0.16cm}
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
Dic \begin{tiny}\cite{2015_BMVC_DIC}\end{tiny} & 15.8 & 19.6/23.6 & 8.6/13.4 & 14.2/24.4 & 32.8(12.2) & 33.8(12.2) \\
ISR \begin{tiny}\cite{2015_PAMI_ISR}\end{tiny} & 20.8 & 22.2/27.1 & 16.7/20.7 & 11.7/21.6 & 29.7(11.0) & - \\
$L_2$ & 11.6 & 14.0/18.6 & 7.6/11.6 & 10.8/18.9 & 27.4(8.3) & 27.7(8.0) \\
\midrule
CAMEL & \textbf{26.4} & \textbf{30.0/36.2} & \textbf{17.3/23.4} & \textbf{23.6/35.6} & \textbf{41.4(14.1)} & \textbf{42.2(13.7)} \\
\bottomrule
\end{tabular}%
\caption{\label{TableLOMO}Results using $512$-D LOMO features.}
\end{center}
\end{table}
\section{Conclusion}
In this work, we have shown that metric learning can be effective for unsupervised RE-ID by proposing
clustering-based asymmetric metric learning called CAMEL. \ws{CAMEL learns view-specific projections
to deal with view-specific interference, and this is based on existing clustering (e.g., the $k$-means model demonstrated in this work)
on unlabelled RE-ID data, resulting in an asymmetric metric clustering.
Extensive experiments show that our model can outperform
existing ones in general, especially on large-scale unlabelled RE-ID datasets.}
\section*{Acknowledgement}
This work was supported partially by the National Key Research and Development Program of China (2016YFB1001002), NSFC(61522115, 61472456, 61573387, 61661130157, U1611461), the Royal Society Newton Advanced Fellowship (NA150459), Guangdong Province Science and Technology Innovation Leading Talents (2016TX03X157).
{\small
\bibliographystyle{ieee}
\section{Introduction}
The physics of planar structures describes interesting properties \cite{Bais}, e. g., charge fractionalization \cite{Feldman,Cherman} and fractional statistics \cite{Arovas}. Furthermore, in analyzing planar systems, several interesting features arise due to the correspondence between particles and their duals. One of these correspondences is the particle-vortex duality \cite{Karch,Murugan,Metlitski}. In the planar world, vortices constitute an important class of structures. Their importance is due to their relevant applications, as can be seen in Refs. \cite{Lima1,Lima2,Lima3,Lima4}. A notably interesting application appears in condensed matter physics, where these structures appear in the description of superconductivity phenomena \cite{Abrikosov,Davis1,Davis2}.
In general, one can understand vortices as structures that arise in three-dimensional spacetime, i. e., $(2+1)$D \cite{Casana2,Casana3,Casana4,Edery1,Edery2}. In field theory, the pioneers in the study of vortex structures were Nielsen and Olesen \cite{Nielsen}. In the seminal paper {\it Vortex-line models in dual strings}, the authors show the vortex solutions of an action constructed with a complex scalar field minimally coupled to a gauge field with $U(1)$ symmetry \cite{Nielsen}. After Nielsen and Olesen's proposal, several papers emerged discussing topological \cite{Weinberg,Hong} and non-topological \cite{LeeP,Arthur,Kimm} structures.
Only in 1991 did Stern \cite{Stern} propose for the first time the study of a theory non-minimally coupled to the gauge field. Using a three-dimensional spacetime, Stern sought to describe point particles with no spin degree of freedom that carry an appropriate magnetic moment. Stern's work motivated several researchers who later proposed papers on non-minimal models, e. g., vortices non-minimally coupled to the gauge field \cite{Lima3,Torres,PKGhosh,SGhosh}. To be specific, in Ref. \cite{Cavalcante}, the authors investigate BPS vortex solutions for a specific interaction using an $O(3)$-sigma model non-minimally coupled to a Maxwell-Chern-Simons field. Besides, the BPS properties of sigma model vortices were also studied using a non-minimal coupling and a multi-field approach \cite{Lima3}. Motivated by these applications, a natural question arises: How are vortex structures modified in a non-minimal theory constituted by non-canonical multi-fields? Throughout this work, we will present the answer to this question.
In this research article, we use the non-linear O(3)-sigma model. Briefly, the non-linear O(3)-sigma model consists of three real scalar fields \cite{Rajaraman}, i. e., $\Phi(\textbf{r},t)\equiv\{\phi_i(\textbf{r},t),\, i=1,2,3\}$ with the constraint
\begin{align}\label{vin}
\Phi\cdot\Phi=\sum_{i=1}^{3}\phi_i\phi^i=1.
\end{align}
Respecting this constraint, the dynamics of the O(3)-sigma field, i. e., of the field $\Phi$ is governed by the following Lagrangian
\begin{align}
\mathcal{L}=\frac{1}{2}\partial_\mu\Phi\cdot\partial^\mu\Phi.
\end{align}
Thus, one describes the sigma model as a vector of fields in its internal space, i. e., a three-dimensional field space \cite{Rajaraman,Ghosh1,Ghosh2,Schroers1,Schroers2}. In 1960, Gell-Mann and L\'{e}vy were the first to propose this model \cite{Gellmann}. At the time, the purpose was to describe the Goldberger and Treiman formula for the rate of decay of the charged pion using a strong interaction proposed by Schwinger \cite{Schwinger} and a weak current formulated by Polkinghorne \cite{Polkinghorne}. After the work of Gell-Mann and L\'{e}vy, several papers considered the non-linear sigma model in their analysis. For example, using the O(3)-sigma model, the emergence of photons was investigated in Ref. \cite{Motrunich}. Furthermore, soliton stability and Lorentz violation were studied, respectively, in Refs. \cite{Leese} and \cite{Messias}.
Not far from the non-linear sigma model, some authors have proposed so-called multi-field models \cite{Bazeia1,Bazeia2,Bazeia3}. These models play an important role in inflationary theories \cite{Langlois1}. This is because the theoretical results of multi-field theories agree with phenomenological measurements \cite{Langlois1,Langlois2,Bean, Trotta, Keskitalo}. This motivates us to study the topological structures derived from this kind of theory. Indeed, one can find some research articles in the literature discussing aspects of structures in multi-field theories, e. g., see Refs. \cite{Oles,Liu}. However, as far as we know, no study has been performed discussing vortex structures in a theory with an O(3)-sigma field coupled to other non-canonical fields.
In particular, in this work, in addition to the dynamic term of the sigma model, we will use a cuscuton-like non-canonical real scalar field. Afshordi, Chung, and Geshnizjani announced the cuscuton model in the paper {\it A causal field theory with an infinite speed of sound} \cite{Afshordi}. In this theory, the cuscuton dynamics arise from the degenerate Hamiltonian symplectic structure description in the cosmologically homogeneous limit \cite{Afshordi}. In this case, the cuscuton theory becomes homogeneous when the metric is locally Minkowski \cite{Afshordi,Afshordi2,Afshordi3}. An interesting feature of the cuscuton field is that it does not contribute to the equation of motion in the stationary limit. Thus, one can interpret it as a non-dynamical auxiliary field that follows the dynamics of the fields to which it couples.
Naturally, these applications and motivations raise some questions. For example, is it possible to obtain a vortex line in an O(3)-sigma model coupled to a non-canonical field? How do the non-canonical term and the multi-field structure influence the O(3)-sigma vortices? These are relevant questions that motivate our study. Thus, considering a sigma-cuscuton model, we hope to answer these questions throughout this research article.
We organized our work as follows: In Sec. II, the BPS vortices are analyzed. In Sec. III, we implement spherical symmetry in the target space of the O(3)-sigma model. Subsequently, in Sec. IV, topological vortex solutions are displayed. Finally, in Sec. V, our findings are presented.
\section{Non-minimal BPS vortex}
As discussed in Ref. \cite{Lima3}, the vortex configurations generated by multi-field theories are interesting because it is possible that they can have changes in their physical properties. Motivated by that, allow us to start our study by considering a three-dimensional model, i. e., a spacetime with $(2+1)$D. In this scenario, the Lagrangian density of our theory is
\begin{align}\label{Lag}
\mathcal{L}=\frac{1}{2}\nabla_{\mu}\Phi\cdot\nabla^{\mu}\Phi+\eta\sqrt{\vert \partial_\mu\psi\partial^\mu\psi\vert}+\frac{1}{2}\partial_\mu\psi\partial^\mu\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\mathcal{V}(\phi_3,\psi).
\end{align}
Here, $\Phi$ is a triplet of scalar fields subject to the constraint $\Phi\cdot\Phi=1$. Meanwhile, $\psi$ is a real scalar field, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the electromagnetic tensor, and $\mathcal{V}(\phi_3,\psi)$ is the interaction potential of the theory. Furthermore, the term $\eta\sqrt{\vert\partial_\mu\psi\,\partial^\mu\psi\vert}$ is known as the cuscuton term. This term describes non-canonical theories \cite{Afshordi,Afshordi2,Afshordi3}. Indeed, the cuscuton term first appeared as an alternative to describe dark matter, and its contribution to the action lacks a dynamical degree of freedom \cite{Lima2, Afshordi2}. Etymologically, the word \textit{cuscuton} originates in Latin and describes a parasitic plant, namely, the Cuscuta. Based on this, we call our theory the sigma-cuscuton-like model.
As discussed in Ref. \cite{Lima3}, one defines the usual covariant derivative as
\begin{align}
\label{CovariantD0}
D_\mu\Phi=\partial_\mu\Phi+eA_\mu\,(\hat{n}_3\times\Phi).
\end{align}
Meanwhile, the non-minimal covariant derivative is
\begin{align}\label{CovariantD}
\nabla_\mu\Phi=\partial_\mu\Phi+\bigg(eA_\mu+\frac{g}{2}\varepsilon_{\mu\nu\lambda}F^{\nu\lambda}\bigg)\,\hat{n}_3\times \Phi.
\end{align}
Let us study the non-minimal theory, i.e., vortex configurations with an anomalous contribution of the magnetic dipole moment. We introduce the anomalous magnetic moment contribution through the coupling $\frac{g}{2}\varepsilon_{\mu\nu\lambda}F^{\nu\lambda}$ in the covariant derivative, i.e., a coupling between the gauge field and the matter field. One can find the non-minimal coupling applied in investigations of the properties of BPS solitons, e.g., see Refs. \cite{Torres,PKGhosh,CA,SGhosh}.
To carry out our study, let us consider a flat spacetime with metric signature $\eta_{\mu\nu}=\text{diag}(-,+,+)$. Moreover, the equation of motion of the gauge field is
\begin{align}\label{GaugeEquation}
j^\nu=\partial_\lambda[g\varepsilon_{\mu\lambda\nu}(\Phi\times\nabla^\mu\Phi)\cdot\hat{n}_3-F^{\lambda\nu}],
\end{align}
where $j^\nu=e(\Phi\times\nabla^\nu\Phi)\cdot\hat{n}_3$ and $\textbf{J}^\nu=-j^\nu\cdot\hat{n}_3$.
By inspection of Gauss' law, i. e., the component $\nu=0$ of Eq. (\ref{GaugeEquation}), we can assume $A_0=0$. In this case, the structures that arise in this theory will be purely magnetic.
Investigating the equation of motion, one obtains the matter field equation, namely,
\begin{align}
\nabla^\mu\nabla_\mu\Phi=-\mathcal{V}_\Phi,
\end{align}
with $\mathcal{V}_\Phi=\frac{\partial\mathcal{V}}{\partial \Phi}$.
Meanwhile, the real scalar field equation is
\begin{align}
\partial_\mu\bigg[\partial^\mu\psi+\eta\frac{\partial^\mu\psi}{\sqrt{\vert\partial_\nu\psi\,\partial^\nu\psi\vert}}\bigg]=-\mathcal{V}_\psi,
\end{align}
with $\mathcal{V}_\psi=\frac{\partial\mathcal{V}}{\partial \psi}$.
We are interested in soliton-like solutions that describe topological vortices. Thus, it is necessary to investigate the energy of the system. To perform this analysis, we construct the energy-momentum tensor and examine its $T_{00}$ component. The integration of the $T_{00}$ component over all space gives us the energy of the structures. Performing this analysis, the energy is
\begin{align}\label{energy0}
\mathrm{E}=\frac{1}{2}\int\, d^2x\, \bigg[\nabla_i\Phi\cdot\nabla^i\Phi+\partial_i\psi\partial^i\psi+2
\eta\sqrt{\vert\partial_i\psi\,\partial^i\psi\vert}+F_{ij}F^{ij}+2\mathcal{V}\bigg].
\end{align}
The energy can be organized as follows:
\begin{align}\label{energy1} \nonumber
\mathrm{E}=&\int\, d^2x\,\bigg[\frac{1}{2}(\nabla_i\Phi\mp\varepsilon_{ij}\Phi\times\nabla_j\Phi)^2+\frac{1}{2}\bigg(\partial_i \psi\mp\frac{W_\psi}{r}\bigg)^2+\frac{1}{2}(F_{ij}\pm\sqrt{2\mathcal{U}})^2+\eta\sqrt{\vert\partial_i\psi\,\partial^i\psi\vert}+\\
\mp&\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)\mp\frac{W_\psi\partial_i\psi}{r}-\frac{W_{\psi}^{2}}{2r^2}+\mathcal{V}\mp F_{ij}\sqrt{2\mathcal{U}}-\mathcal{U}\bigg].
\end{align}
Here, we introduce into the energy two functions, i.e., $\mathcal{W}=\mathcal{W}[\psi(x_i);\, x_i]$ and $\mathcal{U}=\mathcal{U}[\psi(x_i);\, x_i ]$, with $\mathcal{W}_\psi=\frac{\partial \mathcal{W}}{\partial\psi}$. In general, one introduces the superpotential functions $\mathcal{W}$ and $\mathcal{U}$ to obtain a first-order formalism of the theory. Indeed, these superpotentials play a relevant role, i.e., they are related to the potential $\mathcal{V}$ at the saturation limit of the energy \cite{Vachaspati}. This allows one, at the energy saturation limit, to obtain first-order equations of motion \cite{Vachaspati}, which is quite suitable for our purpose.
Analyzing the energy (\ref{energy1}), one notes that the static field configurations have energy bounded from below. Therefore, at the energy saturation limit, one obtains
\begin{align}\label{BPS1}
\nabla_i\Phi=\pm\varepsilon_{ij}\Phi\times\nabla_j\Phi, \, \, \, \, \, \, F_{ij}=\mp\sqrt{2\mathcal{U}} \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \partial_i\psi=\pm\frac{W_\psi}{r}.
\end{align}
Note that the first two first-order equations of expression (\ref{BPS1}) are the well-known Bogomol'nyi (or BPS) equations that describe the vortices of the O(3)-sigma model. On the other hand, the expression $\partial_i\psi=\pm\frac{W_\psi}{r}$ is the BPS equation for the scalar field without the contribution of the non-canonical term (the cuscuton contribution). As a matter of fact, in the stationary case, the dynamics derived from the cuscuton term do not contribute to the equation of motion. That occurs because, when we consider a static cuscuton-like scalar field $\psi=\psi(r_1)\equiv\psi(r)$, the contribution of the cuscuton-like term to the equation of motion is
\begin{align}
\partial_\mu\bigg[\frac{\partial \mathcal{L}_{cusc}}{\partial(\partial_\mu\psi)}\bigg]=\bigg(\frac{\partial\mathcal{L}_{cusc}}{\partial \psi'}\bigg)'=\eta\bigg(\frac{\partial\vert\psi'\vert}{\partial\psi'}\bigg)',
\end{align}
which vanishes, since $\partial\vert\psi'\vert/\partial\psi'=\mathrm{sgn}(\psi')$ is piecewise constant, except at the singular points where $\psi'=0$. However, this singularity is removable. Therefore, one can assign the value zero to the contribution of the cuscuton-like term to the equation of motion. Thus, pure contributions from the cuscuton term yield only a trivial contribution to the equations of motion, regardless of the shape of the potential. Hence, we expect the first-order equation for the $\psi$ field to be simply the BPS equation for $\psi$ without the cuscuton contribution.
Substituting Eqs. (\ref{BPS1}) into (\ref{energy1}), one obtains
\begin{align}\label{energy3}
\mathrm{E}_{BPS}=\mp\int\, d^2x\, \bigg[\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)-F_{ij}\sqrt{2\mathcal{U}}+ \frac{W_\psi\partial_i\psi}{r}\bigg].
\end{align}
The integrand of the above equation is the BPS energy density.
To obtain the BPS properties, we assume that the interaction is
\begin{align}
\mathcal{V}=\mathcal{U}+\frac{W_{\psi}^{2}}{2r^2}\mp\eta\frac{W_\psi}{r}.
\end{align}
Note that the last term in the potential is the contribution of the non-canonical term. Thus, in the BPS limit, the cuscuton-like term plays the role of what we call an impurity. This word is used to characterize terms in the action that do not change the equations of motion but can change the soliton profile \cite{Adam}. Indeed, one can find theories with impurities in several works. For example, impurities appear in studies of the solubility of self-dual configurations \cite{Adam}, CP$(2)$ vortex solutions \cite{Casana}, and vortices in the presence of a neutral field \cite{Dionisio}.
Therefore, the total BPS energy (\ref{energy3}) is
\begin{align}
\mathrm{E}_{BPS}=\mathrm{E}_{BPS}^{(\sigma)}+\mathrm{E}_{BPS}^{(\psi)},
\end{align}
where
\begin{align}\label{Energy4}
\mathrm{E}_{BPS}^{(\sigma)}=\mp\int\, d^2x\, [\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)-F_{ij}\sqrt{2\mathcal{U}}] \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \mathrm{E}_{BPS}^{(\psi)}=\mp\int\,d^2x\, \frac{W_\psi\partial_i\psi}{r}.
\end{align}
\section{The spherically symmetric and vacuumless structures}
To investigate the spherically symmetric vortex solutions, let us assume the ansatz proposed by Schroers in Ref. \cite{Schroers1}, i. e.,
\begin{align}\label{ansatz1}
\Phi(r, \theta)=\begin{pmatrix}
\sin f(r)\cos N\theta\\
\sin f(r)\sin N\theta\\
\cos f(r)
\end{pmatrix}.
\end{align}
This ansatz is necessary for the $\Phi$ field to respect the O(3)-sigma model constraint, i.e., $\Phi\cdot\Phi=1$. It is interesting to mention that this ansatz has been widely used in other works, e.g., see Refs. \cite{Lima5,Lima6}.
On the other hand, as suggested in Refs. \cite{Lima3, Casana5}, the real scalar field is
\begin{align}
\psi=\psi(r).
\end{align}
To study the vortex configurations, we use the ansatz proposed in Refs. \cite{Schroers1,PKGhosh}, i. e.,
\begin{align}\label{ansatz3}
\textbf{A}(r)=-\frac{Na(r)}{er}\hat{\textbf{e}}_{\theta},
\end{align}
where $N$ is the winding number. This behavior of $\textbf{A}(r)$ produces a magnetic field $\textbf{B}=\nabla\times\textbf{A}$. Thus, calculating $\nabla\times\textbf{A}$, one obtains
\begin{align}\label{MagneticF}
\textbf{B}=-\frac{Na'(r)}{er}\hat{\textbf{e}}_z,
\end{align}
and therefore, being $F_{12}=-B$ with $B=\vert\vert\textbf{B}\vert\vert$, it follows that
\begin{align}
F_{12}=-\frac{Na'(r)}{er}.
\end{align}
The magnetic field (\ref{MagneticF}) gives rise to a magnetic flux that emerges from the vortex. In this case, the magnetic flux is
\begin{align}\label{flux}
\phi_{flux}=\oiint \textbf{B}\cdot d\textbf{S}.
\end{align}
Considering the planar nature of the vortex, we conclude that the magnetic flux (\ref{flux}) is
\begin{align}
\phi_{flux}=-\int_{0}^{2\pi}\int_{0}^\infty\frac{Na'(r)}{er}rdrd\theta,
\end{align}
which leads us to
\begin{align}\label{Mflux}
\phi_{flux}=\frac{2\pi N}{e}[a(0)-a(\infty)].
\end{align}
Furthermore, the vortex has the energy profile shown in Eq. (\ref{Energy4}). This energy reformulated in terms of the field variables $f(r)$ and $a(r)$ is
\begin{align}\label{EBPS}
\mathrm{E}_{BPS}=\mp\int\, d^2x\, \bigg[\frac{N[a(r)-1]}{r}f'(r)\sin f(r)+\frac{Na'(r)}{er}\sqrt{2\mathcal{U}}+\frac{W_\psi\partial_i\psi}{r}\bigg].
\end{align}
\section{Vortex solution in the vacuumless theory}
\subsection{The scalar field solutions}
The boundary conditions of the topological field configurations are
\begin{align}\label{top1}
\psi(r\to 0)=\mp1, \hspace{1cm} \psi(r\to\infty)=\pm1,
\end{align}
\begin{align}\label{top2}
&f(r\to 0)=0, \hspace{1cm} f(r\to \infty)=\pi,
\end{align}
and
\begin{align}
\label{top3}
&a(r\to 0)=0, \hspace{1cm} a(r\to \infty)=-\beta.
\end{align}
Here $\beta\in\mathds{R}_{+}$.
Furthermore, allow us to start our investigation of topological structures by assuming the superpotential
\begin{align}\label{SPW}
W[\psi(r)]=\alpha\psi\bigg(1-\frac{1}{3}\psi^2\bigg).
\end{align}
To avoid carrying too many constants in our theory, let us assume $\eta=\alpha$.
The superpotential (\ref{SPW}) describes a $\phi^4$-like interaction. Therefore, when considering this superpotential, we are ensuring that spontaneous symmetry breaking occurs. This spontaneous symmetry breaking will be responsible for the arising of structures in the topological sector of $\psi$ \cite{Vachaspati}.
Now, using the superpotential (\ref{SPW}) the first-order equation of $\psi(r)$ is
\begin{align}\label{PsiE}
\psi'(r)=\pm\frac{\alpha}{r}[1-\psi(r)^2].
\end{align}
Considering the topological conditions (\ref{top1}), one can solve Eq. (\ref{PsiE}). Its solutions are
\begin{align}\label{solpsi}
\psi(r)=\pm\frac{r^{2\alpha}-r_{0}^{2\alpha}}{r^{2\alpha}+r_{0}^{2\alpha}}.
\end{align}
As previously discussed in reference \cite{Lima3}, $r_0$ is an integration constant that describes the initial setting of the $\psi$ field. Thus, one can assume $r_{0}=1$. In this case, the solutions (\ref{solpsi}) are
\begin{align}
\psi(r)=\pm\tanh[\text{ln}(r^\alpha)].
\end{align}
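Indeed, since $\tanh y=(\mathrm{e}^{2y}-1)/(\mathrm{e}^{2y}+1)$, setting $y=\ln(r^\alpha)$ shows that $(r^{2\alpha}-1)/(r^{2\alpha}+1)=\tanh[\ln(r^\alpha)]$; one also verifies directly that $\psi'=\pm(\alpha/r)\,\mathrm{sech}^2[\ln(r^\alpha)]=\pm(\alpha/r)[1-\psi^2]$, in agreement with Eq. (\ref{PsiE}).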
The solutions of the $\psi$ field are called kink-like (positive sign) and antikink-like (negative sign) solutions. In Fig. \ref{fig1}, we display the kink-like and antikink-like solutions that describe the field $\psi$.
\begin{figure}[!ht]
\centering
\includegraphics[height=6.5cm,width=8cm]{kink.pdf}
\includegraphics[height=6.5cm,width=8cm]{AKink.pdf}\\
\vspace{-1cm}
\begin{center}
(a) \hspace{7cm} (b)
\end{center}
\vspace{-1cm}
\caption{Solutions of $\psi(r)$. (a) kink-like configuration. (b) Antikink-like configuration.}
\label{fig1}
\end{figure}
\subsection{The vacuumless theory}
To study the vortex configurations of the non-minimal O(3)-sigma model, we particularize our analysis to the case of vacuumless theories. For example, some authors used vacuumless theories to study vortex-like solutions with Maxwell and Chern-Simons electrodynamics \cite{Matheus}. Furthermore, structures in curved spacetime \cite{Moreira} and topological solitons \cite{DBazeiaF} were studied. Therefore, let us now consider a vacuumless theory to investigate the vortex solutions of the non-minimal sigma-cuscuton model. Thus, to have a vacuumless theory, let us assume
\begin{align}\label{UP}
\mathcal{U}=-\frac{W_\psi^2}{2r^2}\pm\alpha\frac{W_\psi}{r}.
\end{align}
The only way for equality (\ref{UP}) to hold is if $\mathcal{U}[\phi_i(x_i);x_i]=\mathcal{U}(x_i)$. In this case, the interaction of the theory [see the Lagrangian (\ref{Lag})] is null, i.e., $\mathcal{V}=0$. So, we would have a theory (\ref{Lag}) without a vacuum. Let us, for the moment, focus on this case. Thus, substituting the superpotential (\ref{SPW}) and the solution (\ref{solpsi}) into Eq. (\ref{UP}), the function $\mathcal{U}$ becomes
\begin{align}\label{SPSigma}
\mathcal{U}=-\frac{\alpha^2}{2r^2}[1-\tanh^2(\text{ln}(r^\alpha))]^2+\frac{\alpha^2}{r}[1-\tanh^2(\text{ln}(r^\alpha))].
\end{align}
\subsection{The vacuumless vortex solutions}
Considering the BPS equations (\ref{BPS1}), the ansätze (\ref{ansatz1}) and (\ref{ansatz3}), and the superpotential (\ref{SPSigma}), one obtains the well-known vortex equations of the O(3)-sigma model, i. e.,
\begin{align}\label{B1}
f'(r)=\pm\frac{N}{r}[a(r)-1]\sin f(r),
\end{align}
and
\begin{align}\label{B2}
a'(r)=\pm\frac{\alpha}{N}\sqrt{2r[1-\tanh^2(\text{ln}(r^\alpha))]-[1-\tanh^2(\text{ln}(r^\alpha))]^2}.
\end{align}
To write Eqs. (\ref{B1}) and (\ref{B2}), we use the natural units, i. e., $e=1$.
Considering the topological boundary conditions (\ref{top2}) and (\ref{top3}), let us investigate the vortex solutions produced by Eqs. (\ref{B1}) and (\ref{B2}). To study these solutions, we will use the numerical interpolation method. Thus, in Fig. \ref{fig2}, the numerical solutions are displayed. Fig. \ref{fig2}(a) corresponds to the matter field solutions of the topological sector for the $\Phi$ field. On the other hand, Fig. \ref{fig2}(b) corresponds to the topological solutions of the gauge field.
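For readers interested in reproducing these profiles, the sketch below illustrates one possible integration scheme (it is not the routine used for the figures): it adopts the lower signs in Eqs. (\ref{B1}) and (\ref{B2}), the near-origin behavior $f\sim r^N$ implied by Eq. (\ref{B1}), natural units $e=1$, and illustrative values of $N$, $\alpha$, and the near-origin amplitude.
\begin{verbatim}
# Minimal sketch: integrate Eqs. (B1)-(B2) outward from a small radius,
# lower signs, e = 1. N, alpha and the seed amplitude are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

N, alpha = 1, 2.0

def sech2(x):
    return 1.0 / np.cosh(x) ** 2

def rhs(r, y):
    f, a = y
    s = sech2(alpha * np.log(r))          # 1 - tanh^2(ln r^alpha)
    arg = max(2.0 * r * s - s ** 2, 0.0)  # guard against round-off
    da = -(alpha / N) * np.sqrt(arg)      # Eq. (B2), lower sign
    df = (N / r) * (1.0 - a) * np.sin(f)  # Eq. (B1), lower sign
    return [df, da]

r0, rmax = 1e-4, 50.0
y0 = [1e-3 * r0 ** N, 0.0]                # f ~ r^N near r = 0, a(0) = 0
r = np.logspace(np.log10(r0), np.log10(rmax), 2000)
sol = solve_ivp(rhs, (r0, rmax), y0, t_eval=r, rtol=1e-8, atol=1e-10)

f, a = sol.y
B_field = -N * np.gradient(a, r) / r      # magnetic field, Eq. (MagneticF)
print(f[-1] / np.pi, a[-1])               # f -> pi and a -> -beta
\end{verbatim}
Once $f(r)$ and $a(r)$ are available on a grid, the magnetic field and the BPS energy density follow from Eqs. (\ref{MagneticF}) and (\ref{EBPS}) by direct substitution.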
\begin{figure}[ht!]
\centering
\includegraphics[height=6cm,width=7.5cm]{sigma.pdf}
\includegraphics[height=6cm,width=7.5cm]{gauge.pdf}\vspace{-1cm}
\begin{center}
(a) \hspace{7cm} (b)
\end{center}
\vspace{-1cm}
\caption{(a) Solution of the field variable of the O(3)-sigma model. (b) Solution of the gauge field. In both plots, the dotted line is the curve when $\alpha=1$, while the other curves correspond to $\alpha=2,4,8,16$ and $32$.}
\label{fig2}
\end{figure}
Using the numerical solutions of the matter field (\ref{B1}) and the gauge field (\ref{B2}), one can analyze the magnetic field and the energy density (\ref{EBPS}) of the vortex. Let us start by investigating the vortex magnetic field, which is given by Eq. (\ref{MagneticF}). Thus, substituting the numerical solution of the gauge field into Eq. (\ref{MagneticF}), we obtain the vortex magnetic field, displayed in Fig. \ref{fig3}. This result shows an interesting property of the vortex, i.e., a ring-like magnetic field. This feature is what we call a ring-like vortex. For more details, see Refs. \cite{Dionisio2,LA}. We discuss the physical implications of these results in the final remarks.
\begin{figure}[!p]
\centering
\includegraphics[height=4.5cm,width=6cm]{B1.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP1.pdf}
\includegraphics[height=4.5cm,width=6cm]{B2.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP2.pdf}
\includegraphics[height=4.5cm,width=6cm]{B3.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP3.pdf}
\includegraphics[height=4.5cm,width=6cm]{B4.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP4.pdf}
\includegraphics[height=4.5cm,width=6cm]{B5.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP5.pdf}
\vspace{-0.7cm}
\caption{Magnetic field varying $\alpha$.}
\label{fig3}
\end{figure}
By Eq. (\ref{EBPS}), the BPS energy density in terms of the field variable is
\begin{align}\label{DenergyBPS}
\mathcal{E}(r)=\mp\frac{N[a(r)-1]}{r}f'(r)\sin f(r)\mp\frac{Na'(r)}{er}\sqrt{2\mathcal{U}}\mp\frac{W_\psi\partial_i\psi}{r}.
\end{align}
Thus, substituting the numerical solutions of Eqs. (\ref{B1}) and (\ref{B2}) into Eq. (\ref{DenergyBPS}), the BPS energy density of the structure is obtained. Fig. \ref{fig4} shows the numerical result for the BPS energy density. Analyzing the BPS energy density (see Fig. \ref{fig4}), we highlight the interesting appearance of internal structures.
\begin{figure}[p]
\centering
\includegraphics[height=5cm,width=6cm]{E.pdf}
\includegraphics[height=5cm,width=5cm]{EP1.pdf}
\includegraphics[height=5cm,width=6cm]{E2.pdf}
\includegraphics[height=5cm,width=5cm]{EP2.pdf}
\includegraphics[height=5cm,width=6cm]{E3.pdf}
\includegraphics[height=5cm,width=5cm]{EP3.pdf}
\includegraphics[height=5cm,width=6cm]{E4.pdf}
\includegraphics[height=5cm,width=5cm]{EP4.pdf}
\includegraphics[height=5cm,width=6cm]{E5.pdf}
\includegraphics[height=5cm,width=5cm]{EP5.pdf}
\vspace{-0.7cm}
\caption{Vortex energy density varying $\alpha$.}
\label{fig4}
\end{figure}
\section{Final remarks}
In this work, we studied the vortex solutions of a multi-field theory. The proposed model has a canonical field, i.e., the field describing the O(3)-sigma model, and a non-canonical field, i.e., the field $\psi$. Furthermore, $\Phi$ is non-minimally coupled to the gauge field. Thus, the vortices produced have an anomalous contribution from the magnetic dipole moment.
We consider that the scalar field dynamics have canonical and non-canonical contributions. These contributions are, respectively, $\frac{1}{2}\partial_\mu\psi\partial^\mu\psi$ and $\eta\sqrt {\vert\partial_\mu\psi\partial^\mu\psi\vert}$. The non-canonical contribution is what is known as the cuscuton. The cuscuton term is interesting since its contribution in the stationary case is trivial. Thus, in the stationary limit, the equation of motion only receives contributions from the canonical terms. However, in this case, the cuscuton term has a non-trivial contribution to the energy density of the structures. Therefore, in the stationary BPS limit, the cuscuton acts as an impurity of the theory. It is worthwhile to mention that the cuscuton, in this scenario, is interpreted as an impurity only in the topological sector of the sigma field. Indeed, this is a consequence of dealing with a vacuumless theory, i.e., $\mathcal{V}=0$.
Furthermore, the proposed vacuumless multi-field model proved to support electrically neutral vortices that engender an interesting internal structure. Besides, the magnetic field of the vortices also has a ring-like shape. Note that these ring structures become better defined as the contribution of the cuscuton increases, i.e., as the $\alpha$ parameter increases. Consequently, as $\eta$ increases, the magnetic flux increases, and therefore the energy radiated by the vortex increases. In general, we can interpret this as a consequence of the behavior of the matter field and the gauge field in the topological sector of the sigma model. These fields have a very peculiar behavior, i.e., when the contribution of the cuscuton term (the impurity) increases, the matter field and the gauge field become more compact. That occurs due to the localization of the kink of the $\psi$ topological sector around $r=1$.
Finally, let us mention that theories of supersymmetric vortices are a subject of growing interest, because such theories generalize particle-vortex dualities. One expects these dualities to have applications in condensed matter physics. Therefore, a future perspective of this work is the study of particle-vortex duality in our theory. Furthermore, one can build extensions of this theory by implementing these structures in dielectric media. We hope to carry out these studies soon.
\section{Acknowledgment}
The authors thank the Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq), grant n$\textsuperscript{\underline{\scriptsize o}}$ 309553/2021-0 (CASA) and the Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior (CAPES), grant n$\textsuperscript{\underline{\scriptsize o}}$ 88887.372425/2019-00 (FCEL), for financial support.
\section{Introduction}\label{sec:intro}
Modern optical superresolution (OSR) imaging has drawn much interest over the past fifty years, starting with the pioneering modern work of Rushforth and Harris \cite{Rushforth68} on the role of noise in classical image restoration from spatially filtered images. Novel optical designs utilizing super-oscillating point-spread functions (PSFs) \cite{Berry06, Yuan16, Gbur19}, new metamaterial based super-lenses \cite{Durant06, Jacob06, Salandrino06, Liu07}, structured-illumination microscopy (SIM) \cite{Gustaffson00}, superresolution optical fluctuation imaging (SOFI) \cite{SOFI09}, superresolution imaging with quantum emitters employing quantum correlations in second \cite{Schwartz12} and higher orders \cite{Schwartz13, Monticone14, Israel17}, and SIM enhanced by quantum correlations of single emitters \cite{Classen17} have pushed at the theoretical limits of super-resolution in different ways. They all have practical limitations of one form or another, however, and achieve only moderate improvements by factors of order 2-3 even at very high signal-to-noise ratios.
It was not until more recently that single-molecule localization imaging using uncorrelated photons from randomly photoactivated, well separated individual molecules \cite{Rust06} led to a qualitatively major advance in super-resolution, reaching ten- to hundred-fold improvement over the classic Rayleigh-Abbe resolution limits. But these methods are limited to the biological domain, where photoactivations and observations of only a subset of well separated fluorescent molecules are enabled; this requires only localization microscopy for each such sub-image, entailing a photon budget that follows an inverse quadratic dependence on the sought localization precision \cite{Thompson02,Ober04}. The final superresolved image only emerges when a large number of such source-localization-based subimages are carefully registered with respect to (w.r.t.) a fixed high-resolution grid and then superposed.
The use of coherent detection techniques \cite{Roberts16,Yang16} has promised to enable qualitatively superior super-resolution of closely spaced point sources via quantum-correlated, optical centroid measuring states \cite{Tsang09, Schwartz13, Unternahrer18} and wavefront projections \cite{Tsang16, Nair16, Paur16, Rehacek17,Tham17,Chrostowski17,Zhou18,Tsang18,Tsang20}. These latter papers, led principally by the work of Tsang and collaborators \cite{Tsang16}, have provided the most fundamental, quantum mechanical estimation-theoretic limits of superresolution possible by {\it any} method and their realization for point-source imaging in domains including and, notably, beyond microscopy. In the photon counting limit, the variance for estimating the separation between a closely spaced, symmetrical pair of point sources using wavefront projections can, in principle, approach this quantum limit, with the corresponding photon cost scaling according to an inverse-square law w.r.t.~separation, rather than the inverse quartic law for intensity-based images \cite{Ram06,Prasad14}.
Three recent papers by the present author \cite{YuPrasad18, PrasadYu19, Prasad20} have derived quantum estimation-theoretic limits on full three-dimensional (3D) localization and separation of a pair of incoherent point sources when the sources emit at a single wavelength. Coherent wavefront projections have been proposed and demonstrated as a way of realizing the lowest possible, quantum-mechanical bound on the variance of an unbiased estimation of the pair separation, as determined by the inverse of the quantum Fisher information (QFI) \cite{Helstrom76,Braunstein94,Paris09}.
The projective wavefront coding approach to spatial point-source-pair OSR can be readily generalized to the time-frequency domain as well, as shown in Ref.~\cite{Donohue18} for a pair of Gaussian pulse forms with slightly different center frequencies for their spectra using Hermite-Gaussian time-frequency modes. But the calculation and possible realization of the quantum bounds on the spatial OSR problem when source emission has a finite optical bandwidth, a problem that combines experimentally relevant spatial and temporal characteristics in a single setting, have not been treated before. The fundamental quantum bound on the variance of estimation of both the location and separation of the source pair by an imager is expected to degrade with increasing bandwidth of incoherent emission, since the imager's PSF, being optical-frequency dependent, broadens.
In this paper, we calculate the quantum estimation-theoretic fidelity for two problems of interest involving finite-bandwidth emission in two dimensions: the transverse localization of a single point source w.r.t.~the optical axis and the separation of a pair of equally bright point sources that are symmetrically located w.r.t.~the optical axis. Assuming uniform incoherent emission over a finite bandwidth, with no emission outside it, we utilize the basis of one-dimensional (1D) prolate spheroidal wave functions (PSWFs) to calculate QFI for these two problems when the imaging pupil is a clear circular disk with perfect transmission, for which the PSF is of the Airy form \cite{Goodman96}. Since, as previously noted \cite{Paur16,YuPrasad18}, in the photon counting limit the symmetrical pair OSR problem with a fixed midpoint of the pair separation vector and the single-source localization problem entail the same minimum estimation error, we expect to obtain similar results for the two problems.
The use of PSWFs largely eliminates errors that would accrue from a direct numerical integration of the relevant integral eigenvalue equation based on a singular kernel function \cite{Bertero98}, while yielding important insights into the notion of an effective dimensionality \cite{Landau62,diFrancia69} of the continuous-state problem. The PSWF based approach, as we show in Ref.~\cite{Prasad20c}, also furnishes an excellent method for computing the quantum limits on superresolution imaging of spatially extended 1D and two dimensional (2D) sources.
\section{Quantum Bound on Source Localization with Single Photons}
Let a point source, which is located at position $\br$ in the plane of best focus, emit a photon into a uniformly mixed state of finite bandwidth $B\omega_0$ centered at frequency $\omega_0$ and let the photon be subsequently captured by an imaging system with aperture function $P(\bu)$. The state of such a photon may be described by the following single-photon density operator (SPDO):
\be
\label{rho}
\hrho = {1\over B}\int_\cB df\, |K_f\ra\la K_f|,
\ee
in which $f=(\omega-\omega_0)/\omega_0$ is the normalized frequency detuning, obtained by dividing the difference of the actual frequency, $\omega$, from the center frequency, $\omega_0$, by the latter. Correspondingly, the fractional detuning range, $\cB$, denotes the symmetrical interval, $-B/2<f<B/2$. Typical values of $B$ are expected to be small compared to 1. The wavefunction for the photon emitted into the pure state, $|K_f\ra$, of normalized frequency detuning $f$ and then captured by the imaging system has the following form in the system's exit pupil \cite{YuPrasad18}:
\be
\label{wavefunction}
\la \bu| K_f\ra = {1\over\sqrt{\pi}}P(\bu)\, \exp[-i2\pi (1+f)\bl\cdot\bu],
\ee
where the pupil position vector $\bu$ is the true position vector normalized by dividing the latter by the characteristic spatial scale $R$ of the exit pupil. For a circular aperture, we will take $R$ to be the radius of the exit pupil. The symbol $\bl$ denotes the normalized transverse location vector of the point source, $\bl=\br/\delta$, obtained by dividing its physical position vector $\br$ by the characteristic Airy diffraction parameter, $\delta\defeq \lambda_0 z_I/R$, corresponding to the center optical wavelength, $\lambda_0=2\pi c/\omega_0$, and the distance $z_I$ of the image plane from the exit pupil. The parameter $\delta$ sets the Rayleigh resolution scale. In this section, we consider the minimum quantum limited variance of estimation of the distance, $l=|\bl|$, of the source from a known origin using a circular imaging pupil, assuming that the angle $\phi$ that $\bl$ makes with the $x$ axis may be estimated accurately in advance.
The matter of how well we can localize a point source in the photon-counting limit can be treated by calculating the single-photon QFI w.r.t.~the source distance, $l$, from a fixed origin and then simply scaled up by multiplying the result with the observed number of photons. Such scaling is well justified for most thermal and other incoherent sources in nature because of their low mean photon emission number per coherence interval \cite{Goodman15}, $\delta_c <<1$, which thus may be regarded as emitting photons independently. The quantum state of $N$ independently emitted photons may be described by a tensor product of the density operators for the individual photons, so the overall QFI when $N$ independent photons are observed is simply $N$ times \cite{Tsang16,Liu20} that for a single photon. The same scaling holds for the classical Fisher information as well.
\subsection{General Expression for QFI}
The QFI per photon w.r.t.~a set of parameters, $\{\theta_1,\cdots,\theta_P\}$, on which SPDO has a differentiable dependence, is defined as the matrix \cite{Helstrom76} with elements that are the real part, denoted by the symbol, Re, of the following trace, denoted by the symbol, Tr:
\be
\label{QFIdef}
H_{\mu\nu} = \Re\, \Tr \left(\hrho\hat L_\mu\hat L_\nu\right),
\ee
where $\hat L_\mu$ is the symmetric logarithmic derivative of SPDO, $\hrho$, defined in terms of the partial derivative $\pmu\hrho$ w.r.t.~parameter $\theta_\mu$ by the relation,
\be
\label{SLD}
\partial_\mu \hrho = {1\over 2}\left(\hat L_\mu\hrho+\hrho\hat L_\mu\right).
\ee
By evaluating the trace in Eq.~(\ref{QFIdef}) in the basis of orthonormal eigenstates, $\{\lambda_i,\ |\lambda_i\ra\,|\,i=1,2,\ldots\}$ and calculating the partial trace over the null space of SPDO in terms of the partial trace over its range space, we may express $H_{\mu\nu}$ as \cite{YuPrasad18},
\begin{align}
\label{Hmn1}
H_{\mu\nu}=&4\sum_{i\in \cR}{1\over \lambda_i}\Re \langle \lambda_i|\partial_\mu \hat\rho\,\partial_\nu \hat\rho|\lambda_i\rangle+2\sum_{i\in \cR}\sum_{j\in \cR}\Bigg[{1\over {(\lambda_i+\lambda_j)}}\nn
&-{1\over \lambda_i}-{1\over\lambda_j}\Bigg]\Re\langle \lambda_i|\partial_\mu \hat\rho|\lambda_j\rangle\langle \lambda_j|\partial_\nu \hat\rho|\lambda_i\rangle,
\end{align}
where $\cR$ denotes the space of values of the index of the eigenstates of SPDO associated with non-zero eigenvalues and the symbol $\partial_\mu$ denotes first-order partial derivative with respect to the parameter $\theta_\mu$.
For the present problem of estimating a single parameter, $l$, we may drop the parameter labels as well as the operator $\Re$ everywhere. By incorporating the $i=j$ terms from the double sum in Eq.~(\ref{Hmn1}) into the first sum, we arrive at the following expression for QFI:
\ba
\label{Hmn3}
H=&\sum_{i\in \cR}{1\over \lambda_i}\left[4\langle \lambda_i|(\partial \hat\rho)^2|\lambda_i\rangle-3\la\lambda_i|\partial\hrho|\lambda_i\ra^2\right]\nn
+&2\sum_{i\neq j\in \cR}\left[{1\over (\lambda_i+\lambda_j)}-{1\over \lambda_i}-{1\over\lambda_j}\right]|\langle \lambda_i|\partial\hat\rho|\lambda_j\rangle|^2.
\end{align}
As we see from Eq.~(\ref{Hmn3}), evaluating QFI requires accurately computing the eigenstates and eigenvalues of SPDO given by Eq.~(\ref{rho}).
\subsection{Eigenstates and Eigenvalues of SPDO }
In view of expression (\ref{wavefunction}), the overlap function of two single-photon states at two different frequency detunings $f,f'$ is given by the following pupil-plane integral over the normalized position vector, $\bu=\brho/R$:
\ba
\label{overlap}
O(f-f') &\defeq \la K_f|K_{f'}\ra\nn
&= \int d^2 u |P(\bu)|^2\exp [i2\pi(f-f')\bl\cdot\bu].
\end{align}
For a circular clear pupil, for which $P(\bu)$ is simply $1/\sqrt{\pi}$ times the indicator function over the unit-radius pupil, the above integral may be evaluated in terms of Bessel function $J_1$ as \cite{Goodman96}
\be
\label{overlap1}
O(f-f') = {J_1(2\pi|f-f'|l)\over \pi|f-f'|l},
\ee
which reduces to 1 when $f\to f'$, as required by normalization of the single-photon states. The set of states, $\{|K_f\ra\}$, is clearly non-orthogonal.
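For numerical work it is convenient to have this overlap function coded with its $f\to f'$ limit handled explicitly; a minimal sketch (assuming SciPy's Bessel routine \texttt{j1} and a hypothetical helper name) is:
\begin{verbatim}
# Sketch of the overlap O(f - f') of Eq. (overlap1); the f -> f' limit is
# set to 1 by hand, consistent with normalization of the states |K_f>.
import numpy as np
from scipy.special import j1

def overlap(df, l):
    z = np.pi * np.abs(df) * l            # O = J1(2z)/z with z = pi|df|l
    return np.where(z < 1e-12, 1.0, j1(2.0 * z) / np.maximum(z, 1e-300))
\end{verbatim}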
Let $|\lambda\ra$ be an eigenstate of $\hrho$ of non-zero eigenvalue $\lambda$. Since $\hrho$ is supported over the subspace $\cH_B$ spanned by the basis $\{|K_f\ra,\ f\in \cB\}$, all its eigenstates with non-zero eigenvalues must also be fully contained in $\cH_B$. Consider therefore an expansion of $|\lambda\ra$ in this basis of form,
\be
\label{expansion}
|\lambda\ra = {1\over B} \int_\cB df' \, \dl(f') |K_{f'}\ra.
\ee
On substituting expressions (\ref{rho}) and (\ref{expansion}) for $\hrho$ and $|\lambda\ra$ into the eigenstate relation,
\be
\label{eigenrelation}
\hrho|\lambda \ra =\lambda |\lambda\ra,
\ee
and then equating the coefficients of each $|K_f\ra$ term on the two sides of the resulting equation, which is permitted due to the linear independence of these monochromatic single-photon states, we obtain the following integral equation for the coefficient function $\dl(f)$:
\be
\label{eigenrelation1}
{1\over B} \int_\cB O(f-f')\, \dl (f')\, df' = \lambda \dl(f).
\ee
By defining the Fourier transform of $\dl(f)$ as
\be
\label{FTcoeff}
\Dl (x)=\int_{-B/2}^{B/2} \dl(f)\exp(i2\pi f lx)\, df,\ \ x\in \cR,
\ee
we may transform Eq.~(\ref{eigenrelation1}) to the Fourier domain, as we show in Appendix A, re-expressing it as
\ba
\label{FTcoeff4}
\int_{-1}^1 \sqrt{1-x^{'2}}\,\sinc Bl(x-x')\,\Dl(x')\,&dx'={\pi \lambda\over 2} \Dl(x).\nn
& \ x\in \cR,
\end{align}
Note that without the square root inside the integrand Eq.~(\ref{FTcoeff4}) would be identical to the integral equation obeyed by the prolate spheroidal wave functions (PSWFs) first introduced by Slepian and Pollak \cite{Slepian61}.
Let us expand $\Dl(x)$ in the complete orthogonal PSWF basis over the interval $(-1,1)$,
\be
\label{PSWFexpansion}
\Dl(x) = \sum_n d_n^{(\lambda)}\Psi_n(x;C),
\ee
where $C\defeq\pi Bl$ is the space-bandwidth parameter (SBP) of the associated PSWF problem. Substituting expansion (\ref{PSWFexpansion}) into Eq.~(\ref{FTcoeff4}), we can convert the original SPDO eigenvalue problem into a matrix eigenvalue problem of form,
\be
\label{FTcoeffM1}
\bM \ud^{(\lambda)}=\lambda \ud^{(\lambda)},
\ee
in which $\ud^{(\lambda)}$ denotes the column vector of coefficients,
\be
\label{columnvector}
\ud^{(\lambda)} = (d_0,d_1,\ldots)^T,
\ee
with the superscript $T$ on a matrix denoting its simple transpose and the elements of the matrix $\bM$ are defined as the integral,
\be
\label{M}
M_{mn} = {2\over C}\int_{-1}^1 \sqrt{1-x^2}\, \Psi_m(x)\, \Psi_n(x)\, dx.
\ee
We relegate the details of this evaluation to Appendix A.
The PSWFs alternate in parity,
\be
\label{PSWFparity}
\Psi_n(-x;C)=(-1)^n\Psi_n(x;C),
\ee
and their associated eigenvalues $\lambda_n(C)$ are all positive and arranged in descending order, and obey the sum rule,
\be
\label{PSWFsum}
\sum_{n=0}^\infty \lambda_n^{(C)} = 2{C\over \pi},
\ee
with approximately $S\defeq\lceil 2C/\pi\rceil $ of these eigenvalues being close to $\min(2C/\pi,1)$ and the rest decaying rapidly toward 0 with increasing index value. Here $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. The number $S$ is called the Shannon number; it was first introduced and discussed in the context of imaging by Toraldo di Francia \cite{diFrancia55, diFrancia69} as a measure of the effective number of degrees of freedom when a finite object is imaged with a finite-aperture imager.
Since the PSWF $\Psi_n$ is either even or odd under inversion according to whether the index $n$ is even or odd, it follows that $M_{mn}$ is non-zero only if $m$ and $n$ are either both even or both odd. It then follows from Eq.~(\ref{FTcoeffM1}) that the set of coefficients $\{d_n^{(\lambda)}|n=0,1,\ldots\}$ separates into two subsets of coefficients, namely $\cD_e=\{d_n^{(\lambda)}|n=0,2,\ldots\}$ and $\cD_o=\{ d_n^{(\lambda)}| n=1,3,\ldots\}, $ that are only coupled within each subset. Correspondingly, in view of expansion (\ref{PSWFexpansion}) and parity-alternation property (\ref{PSWFparity}), the associated eigenfunctions $\Dl(x)$ are either even or odd under inversion, a fact that also follows directly from the form of the kernel of the integral equation (\ref{FTcoeff4}). For the two sets of even-order and odd-order coefficients, the matrix eigenvalue equation (\ref{FTcoeffM1}) may be solved quite efficiently for its eigenvalues and eigenvectors by truncating the size of the matrix at some finite but sufficiently high value $N$, {\it i.e.}, $0\leq m,n\leq N-1$. We evaluated integral (\ref{M}) by approximating the integral by a discretized Riemann sum and then using the Matlab routine {\it dpss} \cite{Percival98} that computes discrete sequences of the PSWFs for different values of SBP and sequence length on the interval $(-1,1)$.
Due to the closeness of Eq.~(\ref{FTcoeff4}) to the integral equation obeyed by the PSWF, we expect there to be only a number of order $S$ of significantly large non-negative eigenvalues, $\lambda_p$, with the largest one being of order 1 and the successively smaller eigenvalues decreasing rapidly by many orders from one to the next. In other words, the nominal rank and the dimension of the range space of SPDO $\hrho$ are both expected to be of order $S$. This observation renders the problem numerically highly efficient, particularly when $C\sim 1$, for which the truncation index value, $N$, need not be greater than 10-20. These properties and the sum rule,
\be
\label{eig_sumrule}
\sum_{p=0}^\infty \lambda_p= 1,
\ee
obeyed by the eigenvalues of $\hrho$, since ${\rm Tr}\,\hrho = 1$, were verified numerically.
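As an independent cross-check of this PSWF-based route, one may also discretize Eq.~(\ref{FTcoeff4}) directly on a grid in $(-1,1)$ and diagonalize the resulting matrix; the sketch below (a quadrature check under stated assumptions, not the procedure used for the figures) reproduces the sum rule (\ref{eig_sumrule}) to quadrature accuracy.
\begin{verbatim}
# Sketch: SPDO eigenvalues by direct quadrature of Eq. (FTcoeff4).
# The kernel is sqrt(1 - x'^2) * sinc(Bl(x - x')) with sinc(z) =
# sin(pi z)/(pi z); discretized eigenvalues equal (pi/2) * lambda.
import numpy as np

def spdo_eigenvalues(B, l, M=400):
    x, dx = np.linspace(-1.0, 1.0, M, endpoint=False, retstep=True)
    x = x + 0.5 * dx                      # midpoint grid on (-1, 1)
    K = (np.sqrt(1.0 - x[None, :] ** 2)
         * np.sinc(B * l * (x[:, None] - x[None, :])) * dx)
    mu = np.linalg.eigvals(K).real        # spectrum is real and nonnegative
    return np.sort(2.0 * mu / np.pi)[::-1]

lam = spdo_eigenvalues(B=0.1, l=1.0)      # illustrative values of B and l
print(lam[:5], lam.sum())                 # the eigenvalues sum to ~1
\end{verbatim}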
\subsection{Evaluation of QFI for 2D Source Localization}
By differentiating expression (\ref{rho}) w.r.t.~$l$, we obtain
\be
\label{drho}
\partial\hrho={1\over B}\int df\, [\partial|K_f\ra\la K_f|+|K_f\ra\partial\la K_f|],
\ee
which, upon squaring and noting relation (\ref{overlap}), further yields
\ba
\label{drho2}
(\partial\hrho)^2=&{1\over B^2}\int df\int df'\, [\partial|K_f\ra\la K_f|\partial|K_{f'}\ra\la K_{f'}|\nn
+\partial&|K_f\ra O(f-f')\, \partial\la K_{f'}|+|K_f\ra\partial\la K_f|\partial|K_{f'}\ra\la K_{f'}|\nn
+&|K_f\ra\partial\la K_f|K_{f'}\ra\partial\la K_{f'}|].
\end{align}
For notational brevity, we henceforth use the convention that $\partial$ only operates on the quantity immediately following it and have dropped explicit reference to the range, $(-B/2,B/2)$, of the frequency integrals.
Next, taking the scalar product of the state vector $|K_{f'}\ra$ with expression (\ref{expansion}) for the eigenstate $|\lambda\ra$ and subsequently using the integral equation (\ref{eigenrelation1}) that the coefficients $d_\lambda (f)$ satisfies, we may show readily that
\be
\label{Kf_lambda_matrixelement}
\la K_{f'}|\lambda_i\ra= \lambda_i d_i(f').
\ee
Use of expression (\ref{wavefunction}) for the wave function permits evaluation of the matrix element $\la K_{f'}|\partial|K_f\ra$ for a clear circular pupil for which $P(\bu)$ is simply $1/\sqrt{\pi}$ times its indicator function as
\ba
\label{KpK}
\la K_{f'}|\partial|K_f\ra=&-2i(1+f)\int_{u<1}\!\! \!\!d^2 u\, u\cos\Phi_u\nn
&\qquad\times \exp[-i2\pi (f-f')ul\cos\Phi_u]\nn
=&-4\pi (1+f)\int_0^1 du\, u^2 J_1\big(2\pi (f-f')ul\big) \nn
=&-{2(1+f)\over (f-f')l} J_2\big(2\pi (f-f')l\big)\nn
=&(1+f)P(f-f'),\quad P(x)\defeq -2 {J_2(2\pi xl)\over xl}
\end{align}
in which $\Phi_u=\phi_u-\phi$ and we made successive use of the following identities for integrating first over the azimuthal angle, $\phi_u$, and then over the radial variable, $u$, of the pupil plane:
\ba
\label{BesselIdentities}
\oint d\Phi\cos n\Phi \exp[\pm iz\cos(\Phi-\psi)] = &(\pm i)^n 2\pi \cos n\psi J_n(z);\nn
z^n J_{n-1} (z)=&{d\over dz}\left[z^n J_n(z)\right].
\end{align}
We can now evaluate the matrix element $\la\lambda_i|\partial\hrho|\lambda_j\ra$, with $\partial\hrho$ given by (\ref{drho}), by using relations (\ref{Kf_lambda_matrixelement}), (\ref{KpK}), and (\ref{expansion}) as,
\ba
\label{lambda_drho_lambda}
\la \lambda_i|\partial\hrho|\lambda_j\ra ={1\over B^2}&\int\int df \, df' (1+f)\big[\lambda_id_i(f')\, d_j(f) \nn
&+\lambda_j d_j(f')\, d_i(f)\big]\, P(f-f').
\end{align}
To evaluate the matrix elements $\la \lambda_i|(\partial\hrho)^2|\lambda_i\ra$, we first note from Eq.~(\ref{drho2}) that we need one more matrix element involving single-frequency emission states, namely $\partial \la K_f|\partial |K_{f'}\ra$, which we may evaluate as
\ba
\label{pKpK}
\partial \la K_f|\partial &|K_{f'}\ra =4\pi (1+f)(1+f')\nn
\times&\int_{u<1}d^2u\, u^2\cos^2\Phi_u \exp[-i2\pi(f-f')lu\cos\Phi_u]\nn
= (2\pi)^2&(1+f)(1+f')\int_0^1du\, u^3\Big[J_0\big(2\pi (f-f')lu\big)\nn
&\qquad\qquad+i^2J_2\big(2\pi (f-f')lu\big)\Big],
\end{align}
in which we used the identity, $2\cos^2\Phi_u=(1+\cos 2\Phi_u)$, and then used the first of the identities (\ref{BesselIdentities}) twice to reach the final equality. The indefinite integral of the first term in the integrand is known to be \cite{besint19}
\be
\label{BesselIdentity3}
\int dz\, z^3 J_0(z) = 2z^2 J_0(z) +z(z^2-4) J_1(z),
\ee
while the second term in the integrand may be evaluated immediately using the second of the identities (\ref{BesselIdentities}) for $n=3$. We obtain in this way the result,
\be
\label{pKpK2}
\partial \la K_f|\partial |K_{f'}\ra =(1+f)(1+f')\, Q(f-f'),
\ee
where the function $Q$ is defined by the relation
\ba
\label{Q}
Q(x)=&\Bigg[{2\over x^2l^2}\left(J_0\big(2\pi xl\big) -2{J_1\big(2\pi xl\big)\over 2\pi xl}\right)\nn
&+{2\pi\over xl}\left(J_1\big(2\pi xl\big)-J_3\big(2\pi xl\big)\right)\Bigg].
\end{align}
In terms of the functions $O,\ P,$ and $Q$, we may express Eq.~(\ref{drho2}) as
\ba
\label{drho2a}
(\partial\hrho)^2=&{1\over B^2}\int df\int df'\, \big[\partial|K_f\ra (1+f')P(f'-f)\la K_{f'}|\nn
&+\partial|K_f\ra O(f-f')\, \partial\la K_{f'}|\nn
&+(1+f)(1+f')|K_f\ra Q(f-f')\la K_{f'}|\nn
&+(1+f)|K_f\ra P(f-f')\partial\la K_{f'}|\big].
\end{align}
The matrix element $\la \lambda_i|(\partial\hrho)^2|\lambda_i\ra$ now follows from a repeated use of identity (\ref{Kf_lambda_matrixelement}) and expansion (\ref{expansion}), the latter yielding the relation,
\ba
\label{lambda_dKf}
\partial\la K_f|\lambda_i\ra^*=&\la \lambda_i|\partial|K_f\ra ={1\over B}\int df^{''}d_i(f^{''}) \la K_{f^{''}}|\partial|K_f\ra\nn
=&{1\over B}(1+f)\int df^{''} P(f-f^{''})d_i(f^{''}),
\end{align}
which can be evaluated efficiently by discretizing the integral as a Riemann sum that can be expressed as a matrix-vector product.
In Fig.~\ref{2DLocQFI_vs_B}, we display numerically evaluated QFI for estimating the source location for a number of different values of its distance $l$ away from the {\it a priori} well determined axial point in the plane of Gaussian focus. The source distance, $l$, expressed in image-plane units of the Airy diffraction width parameter, $\lambda_0 z_I/R$, is allowed to vary in the sub-diffractive regime from 0.2 to 1.0. As expected and seen from the figure, QFI decreases from the maximum theoretical zero-bandwidth value \cite{YuPrasad18} of $4 \pi^2$ as the fractional bandwidth, $B$, increases but this decrease is rather gradual. Even for $l=1$, the maximum reduction of QFI at 20\% fractional bandwidth is no larger than about 10\%. The drop in QFI as $B$ varies between 0.02 and 0.04 for $l=0.2$ is presumably a numerical artifact, as we expect the localization QFI in this range to be quite close to the maximum value. Since the minimum variance of unbiased estimation is the reciprocal of QFI \cite{Helstrom76}, the minimum quantum-limited error for estimating $l$ correspondingly increases with increasing $B$.
To see heuristically how bandwidth increase causes source-localization QFI to decrease, note that for monochromatic imaging ($B=0$) QFI is given by the expression \cite{YuPrasad18},
\be
\label{QFImono}
H=4\left[(\partial_l \la K_f|)\partial_l|K_f\ra-|\la K_f\partial_l|K_f\ra|^2\right],
\ee
which reduces, for an inversion symmetric aperture like the clear circular aperture, to four times the aperture average of the squared gradient w.r.t.~$l$ of the wavefront phase $\Psi$ in the aperture plane,
\be
\label{MeanSqPhaseGrad}
H ={4\over \pi}\int d^2u P(\bu) (\partial_l \Psi)^2.
\ee
Since $\Psi=2\pi\,l\,u\,(1+f)\cos\Phi$ according to Eq.~(\ref{wavefunction}), Eq.~(\ref{MeanSqPhaseGrad}) evaluates for the clear circular aperture and monochromatic imaging ($f=0$) to the value $4\pi^2$. When the wavefunction $\la\bu|K_f\ra$, given by Eq.~(\ref{wavefunction}), is distributed over a finite bandwidth, the overall phase of any superposition of such wavefunctions gets scrambled, the more so, the larger the bandwidth, which increasingly reduces the mean-squared phase gradient over the aperture and thus $H$ with increasing bandwidth. This heuristic picture is supported by our quantitatively rigorous calculation of QFI based on the SPDO eigenvalues and eigenfunctions computed using PSWFs and displayed in Fig.~\ref{2DLocQFI_vs_B}.
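As a quick check of this value, for $f=0$ and the clear circular aperture one has $\partial_l\Psi=2\pi u\cos\Phi$, so that Eq.~(\ref{MeanSqPhaseGrad}) gives
\be
H={4\over\pi}\int_0^{2\pi}\!\!d\Phi\int_0^1\! du\, u\,(2\pi u\cos\Phi)^2
=16\pi\times{1\over 4}\times\pi=4\pi^2,
\ee
which is the zero-bandwidth value quoted above.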
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{2DLocQFI_vs_B_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of QFI for estimating the distance, $l$, of a point source in the plane of Gaussian focus from the point of intersection of that plane with the optical axis vs. the fractional bandwidth, $B$, for different values of source distance $l$.}
\label{2DLocQFI_vs_B}
\end{figure}
\section{Quantum Bound on 2D Source-Pair Superresolution with Single Photons}
We will now evaluate QFI (\ref{Hmn3}) for estimating the separation of a symmetrical pair of closely spaced sources in a plane transverse to the optical axis of a circular-aperture imager. This calculation is closely related to the single-source localization QFI that we have considered so far.
Consider a pair of equally bright incoherent point sources that are located at positions $\pm \bl$ with respect to the mid-point of their separation vector, which we choose to be fixed {\it a priori} at the origin. The SPDO for light emitted by the pair and transmitted through the imager to its pupil may be written as the integral
\be
\label{rho2}
\hrho={1\over 2B}\int_\cB\left[|K_{+f}\ra\la K_{+f}|+|K_{-f}\ra\la K_{-f}|\right]\, df
\ee
in which, as for the localization problem, we take the detuning power spectrum of the imaging photon to be a top-hat function of fractional bandwidth $B$. The state $|K_{\pm f}\ra$ is the pure monochromatic-photon state vector of fractional frequency $f$ emitted by the source located at $\pm \bl$, with its pupil-plane wave function of form,
\be
\label{wavefunction2}
\la \bu| K_{\pm f}\ra = {1\over\sqrt{\pi}}P(\bu)\, \exp[\mp i2\pi (1+f)\bl\cdot\bu].
\ee
Because of the unit normalization of each of these states, $\la K_{\pm f}|K_{\pm f}\ra=1,$ expression (\ref{rho2}) has unit trace, $\trace(\hrho)=1$, as required. Also, the various pure-state overlap functions, for the case of a circular clear imaging aperture we are considering here, are real and equal in pairs,
\ba
\label{overlap2D}
\la K_{\pm f}|K_{\pm f'}\ra &= O(f-f');\nn
\la K_{\pm f}|K_{\mp f'}\ra = &O(2+f+f'),
\end{align}
in terms of the function $O$ defined by relation (\ref{overlap1}). We now calculate the eigenvalues and eigenstates of SPDO (\ref{rho2}) in terms of which QFI (\ref{Hmn3}) is defined.
\subsection{Eigenvalues and Eigenstates of SPDO (\ref{rho2})}
Let an eigenstate of SPDO (\ref{rho2}) obeying the relation,
\be
\label{eigen2}
\hrho |\lambda\ra = \lambda|\lambda\ra,
\ee
have the expansion
\be
\label{expansion2}
|\lambda\ra = {1\over B}\int_\cB \left[d_+(f)|K_{+f}\ra+d_-(f)|K_{-f}\ra\right] df.
\ee
Substitution of expression (\ref{rho2}) and expansion (\ref{expansion2}) into eigenrelation (\ref{eigen2}) and use of relations (\ref{overlap2D}) for the various state overlap functions, followed by equating coefficients of the different monochromatic source states on the two sides of the resulting equation, yield the following pair of coupled integral equations for the coefficient functions, $d_\pm(f)$:
\ba
\label{IntEq2}
{1\over 2B}\int_\cB df'\, \big[&d_+(f')O(f-f')\nn
&+d_-(f') O(2+f+f')\big] = \lambda d_+(f);\nn
{1\over 2B}\int_\cB df'\, \big[&d_+(f')O(2+f+f')\nn
&\qquad +d_-(f') O(f-f')\big] = \lambda d_-(f).
\end{align}
The two coupled equations in Eq.~(\ref{IntEq2}) may be decoupled by either adding them or subtracting one from the other as
\ba
\label{IntEq2Decoupled}
{1\over 2B}\int_\cB df'\, \left[O(f-f')+ O(2+f+f')\right]S_+(f') = &\lambda S_+(f);\nn
{1\over 2B}\int_\cB df'\, \left[O(f-f')-O(2+f+f')\right]S_-(f') = &\lambda S_-(f),
\end{align}
where $S_+$ and $S_-$ are the sum and difference functions,
\be
\label{SA}
S_+(f)=d_+(f)+d_-(f);\ \ S_-(f)=d_+(f)-d_-(f).
\ee
The two uncoupled equations (\ref{IntEq2Decoupled}) can be satisfied simultaneously by choosing either $S_+\neq 0,\ S_-=0$ or $S_+=0,\ S_-\neq 0$, corresponding, per Eq.~(\ref{SA}), to the choices $d_+(f)=\pm d_-(f)$. The nontrivial equation in each case may then be solved independently by using the same approach as for the 2D localization problem. Since the kernel functions, $[O(f-f')\pm O(2+f+f')]$, are not invariant under inversion, $f\to -f, \ f'\to -f'$, both even and odd PSWFs will be present in each such expansion, however.
We first transform the problem to the Fourier domain,
\be
\label{FT2}
\tilde S_\pm(x) = \int_{-B/2}^{B/2} df \, \exp(i2\pi lxf)\, S_\pm(f),
\ee
and use the same $\delta$-function trick we used in going from Eq.~(\ref{AFTcoeff1}) to (\ref{AFTcoeff3}). Using the Fourier shift theorem, which implies that the FT of the function $O(2+f)$ is simply $\exp(i4\pi lx)$ times the FT of the unshifted function, $O(f)$, we see that Eqs.~(\ref{IntEq2Decoupled}) transform to a pair of more convenient equations, which we can write more compactly as a single equation with its lower and upper signs corresponding to the two separate equations,
\ba
\label{IntEq2FTscaled}
\int_{-1}^1& dx'\sqrt{1-x^{'2}}[\sinc Bl(x-x') \pm \exp(4\pi i lx')\nn
&\times\sinc Bl(x+x')] \tS_\pm(x')=\pi\lambda\tS_\pm(x),\ \ x\in \cR.
\end{align}
We may now substitute the spectral expansion (\ref{Aspectral_expansion}) of the sinc function and the expansion of the eigenfunctions $\tS_\pm (x)$ in terms of the PSWFs, namely
\be
\label{eigen2_expansion}
\tS_\pm(x)=\sum_{n=0}^\infty s^{(\pm)}_n \Psi_n(x;C),
\ee
into Eqs.~(\ref{IntEq2FTscaled}), then use the second of the orthogonality relations (\ref{APSWFnorm}), and finally equate the coefficients of the individual PSWFs on both sides to convert those two integral equations into the following pair of matrix equations:
\be
\label{eigen2_matrix_eq}
\sum_{n=0}^\infty \left[F_{mn}\pm (-1)^m G_{mn}\right] s^{(\pm)}_n=\lambda s^{(\pm)}_m,
\ee
in which the matrix elements $F_{mn}$ and $G_{mn}$ are defined as the integrals,
\ba
\label{FGmn}
F_{mn} = &{1\over C}\int_{-1}^1 \!\!dx'\sqrt{1-x^{'2}}\Psi_m(x';C)\Psi_n(x';C);\nn
G_{mn} = &{1\over C}\int_{-1}^1 \!\!dx'\sqrt{1-x^{'2}}\exp(4\pi i lx')\,\Psi_m(x';C)\Psi_n(x';C).
\end{align}
To reach Eq.~(\ref{eigen2_matrix_eq}), we also used the parity-alternation property (\ref{PSWFparity}) of the PSWFs.
We now make use of the reality condition on the coefficient functions $d_\pm(f)$, or equivalently on their sum and difference functions, $S_\pm(f)$, in the frequency domain. This condition requires that in the Fourier domain ($x$), the functions $\tS_\pm(x)$ obey the condition,
\be
\label{reality2}
\tS_\pm^*(x)=\tS_\pm(-x),
\ee
which upon substitution of expansion (\ref{eigen2_expansion}) and use of parity property (\ref{PSWFparity}) yields the equivalent condition,
\be
\label{reality2coeff}
s^{(\pm )*}_n=(-1)^n s^{(\pm)}_n.
\ee
In other words, the coefficients $s^{(\pm)}_n$ are alternately either purely real or purely imaginary, as the index $n$ ranges over all non-negative integer values. As such, we may express them in terms of real coefficients $t^{(\pm)}_n$ by the relation,
\be
\label{real_coeff}
s^{(\pm)}_n=i^n t^{(\pm)}_n.
\ee
A substitution of this relation into the eigenrelation (\ref{eigen2_matrix_eq}) yields the equivalent eigenrelation,
\be
\label{eigen2_matrix_eq_real}
\sum_{n=0}^\infty \left(\tF_{mn}\pm \tG_{mn}\right) t^{(\pm)}_n=\lambda t^{(\pm)}_m,
\ee
in which the matrix elements $\tF_{mn}$ and $\tG_{mn}$ are defined by the relation
\be
\label{real_matrix}
\tF_{mn}=i^{n-m} F_{mn},\ \tG_{mn}=i^{n+m}G_{mn}.
\ee
In view of the alternating parity of the PSWFs with changing order, the parity-even property of $\sqrt{1-x^{'2}}$ and of the integration range, the definitions (\ref{FGmn}) of the matrix elements, and since $\exp(4\pi i lx')$ is the sum of a real parity-even and an imaginary parity-odd part, we can see that ${\bf F}$ and ${\bf G}$ are symmetric matrices, $F_{mn}=0$ when the index difference $m-n$ is odd, and $G_{mn}$ is purely real when $m+n$ is even and purely imaginary when $m+n$ is odd. It then follows that $\tF_{mn}$ and $\tG_{mn}$ defined by Eq.~(\ref{real_matrix}) are both real and symmetric. The eigenrelations (\ref{eigen2_matrix_eq_real}) are thus purely real equations involving symmetric system matrices, and are thus guaranteed to have real eigenvalues and orthogonal eigenvectors for non-degenerate eigenvalues.
We have numerically evaluated the eigenvalues and eigenvectors of the two matrices $(\tilde{\bf F}\pm\tilde{\bf G})$ by first calculating their matrix elements in terms of the discrete prolate spheroidal sequences discussed earlier, taking the latter to have a sufficiently large length and truncating the matrices at some high but finite order of the PSWFs to ensure good accuracy. It helps, as with the localization problem, to know that only the largest $\mathcal{O}(\lceil 2 C/\pi\rceil)$ eigenvalues are sufficiently different from 0 to contribute significantly to QFI. In fact, for $C<<1$, which is the case of interest here, we ensure more than sufficient accuracy by truncating the matrix at order no larger than $15\times 15$, for which the smallest reproducibly computable eigenvalue has already dropped to a value more than fifteen orders of magnitude smaller than the largest one.
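A direct-quadrature cross-check, analogous to the one sketched for the localization problem, is also available here: discretizing Eqs.~(\ref{IntEq2Decoupled}) in $f$ on $(-B/2,B/2)$ and diagonalizing the two symmetric kernels $[O(f-f')\pm O(2+f+f')]/(2B)$ reproduces the spectra of the two subspaces, with the eigenvalues of the $+$ and $-$ branches together summing to 1. A minimal sketch, reusing the \texttt{overlap} helper sketched earlier and illustrative values of $B$ and $l$, is:
\begin{verbatim}
# Sketch: eigenvalues of the two-source SPDO (rho2) by direct quadrature
# of Eqs. (IntEq2Decoupled); '+' and '-' label the symmetric and
# antisymmetric subspaces.
import numpy as np

def pair_spdo_eigenvalues(B, l, M=400):
    f, df = np.linspace(-B / 2, B / 2, M, endpoint=False, retstep=True)
    f = f + 0.5 * df
    O_diff = overlap(f[:, None] - f[None, :], l)       # O(f - f')
    O_sum = overlap(2.0 + f[:, None] + f[None, :], l)  # O(2 + f + f')
    lam = {}
    for sign, name in [(+1, "+"), (-1, "-")]:
        K = (O_diff + sign * O_sum) * df / (2.0 * B)
        lam[name] = np.linalg.eigvalsh(K)[::-1]        # descending order
    return lam

lam = pair_spdo_eigenvalues(B=0.1, l=0.5)
print(lam["+"][:3], lam["-"][:3], lam["+"].sum() + lam["-"].sum())
\end{verbatim}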
The orthogonality condition for the eigenvectors, $\la\lambda|\lambda'\ra=\delta_{\lambda \lambda'}$, can be shown, analogously to that for the localization problem, to be the same as Eq.~(\ref{Acolumn_orthogonality}), which for the column vector of real coefficients $t^{(\lambda)}_n$ is also the same,
\be
\label{column_orthogonality2}
\ut^{(\lambda)\dagger}\ut^{(\lambda')}= {B\over l \lambda}\delta_{\lambda\lambda'},
\ee
where the superscript $(\lambda)$ labels the column vector corresponding to the eigenstate $|\lambda\ra$. Since the Hermitian transpose for a real column vector such as $\ut^{(\lambda)}$ amounts to its simple matrix transpose, we may renormalize each ordinary orthonormal eigenvector obtained from a numerical evaluation of that eigenvector by an extra factor of $\sqrt{B/(l\lambda)}$.
\subsection{QFI Calculation}
We use expression (\ref{Hmn3}) for the evaluation of QFI for the single parameter of interest, the semi-separation parameter $l$. Unlike the localization problem, expression (\ref{rho2}) for SPDO is now more involved, as it entails emission from two sources rather than one. However, since we can work in the symmetric and anti-symmetric invariant subspaces of SPDO, the two problems are rather analogous. In particular, we see that an eigenstate of SPDO in either of its $\pm$ range subspaces, which we denote as $\cH_B^{(\pm)}$, may be expressed as
\be
\label{eigen2_pm}
|\lambda^{(\pm)}\ra = {1\over B}\int d_+(f')\,\left(|K_{+f'}\ra\pm |K_{-f'}\ra\right) df',
\ee
with the notation $|\lambda^{(\pm)}\ra$ referring to an eigenstate belonging to the $\cH_B^{(\pm)}$ subspace. In view of this form, we can derive the relation,
\ba
\label{Kpf_lambda_pm}
\la K_{+f}|\lambda^{(\pm)}\ra =& {1\over B}\int d_+(f')\,\left[O(f-f')\pm O(2+f+f')\right] df'\nn
=&2\lambda^{(\pm)}d_+(f),
\end{align}
with the first relation following from a use of the overlap functions (\ref{overlap2D}) and the second from the eigenfunction relation (\ref{IntEq2}) in which we also used the fact that $d_-(f)=\pm d_+(f)$ in the two subspaces. We may similarly show that
\be
\label{Kmf_lambda_pm}
\la K_{-f}|\lambda^{(\pm)}\ra =2\lambda^{(\pm)} d_{-}(f)=\pm 2\lambda^{(\pm)}d_+(f).
\ee
The evaluation of the matrix elements $\la \lambda^{(\pm)}_i|\drho|\lambda^{(\pm)}_j\ra$ and $\la \lambda^{(\pm)}_i|(\drho)^2|\lambda^{(\pm)}_i\ra$ within each of the subspaces separately can now be carried out by differentiating expression (\ref{rho2}) with respect to $l$ first. The latter operation generates four terms, a pair of terms for each of the bilinear products, $|K_{+f}\ra\la K_{+f}|$ and $|K_{-f}\ra\la K_{-f}|$, inside the $f$ integral. Squaring $\drho$ then generates 16 terms inside a double frequency integral, for each of which terms one must evaluate the diagonal matrix element in an eigenstate $|\lambda^{(\pm)}_i\ra$. These calculations, although tedious, may be performed straightforwardly. Expressions (\ref{Kpf_lambda_pm}) and (\ref{Kmf_lambda_pm}) for the overlap functions greatly simplify these calculations, as we show in Appendix B, with the following results:
\ba
\label{MatElements2}
&\la\lambda_j^{(\pm)}|\drho|\lambda_i^{(\pm)}\ra\! =\! {2\over B^2}\!\iint\! df\, df' [P(f-f')\pm P(2+f+f')]\nn
&\times(1+f)\left[\lambda_i^{(\pm)}d_+^{(i)}(f)d_+^{(j)}(f')+\lambda_j^{(\pm)}d_+^{(j)}(f)d_+^{(i)}(f')\right];\nn
&\la\lambda_i^{(\pm)}|(\drho)^2|\lambda_i^{(\pm)}\ra={1\over 2B^2}\iint df\, df'\Big\{[O(f-f')\nn
&\qquad\pm O(2+f+f')]\la \lambda_i^{(\pm)}|\partial|K_{+f}\ra \la \lambda_i^{(\pm)}|\partial|K_{+f'}\ra \nn
&\qquad+4\lambda_i^{(\pm)}(1+f)[P(f-f')\pm P(2+f+f')]\nn
&\qquad\times d_+^{(i)}(f)\la\lambda_i^{(\pm)}|\partial|K_{+f'}\ra \nn
&\qquad+4\lambda_i^{(\pm)2}(1+f)(1+f')[Q(f-f')\nn
&\qquad\pm Q(2+f+f')]d_+^{(i)}(f)d_+^{(i)}(f')\Big\},
\end{align}
where the functions $P$ and $Q$ have been defined earlier by Eqs.~(\ref{KpK}) and (\ref{Q}).
The upper and lower signs in these expressions refer to eigenstates drawn from the two subspaces, $\cH_B^{(\pm)}$, respectively. What we also show in Appendix B is that any matrix element of the form $\la\lambda_j^{(\mp)}|\drho|\lambda_i^{(\pm)}\ra$ between two states belonging to different subspaces vanishes identically,
\be
\label{MatElements3}
\la\lambda_j^{(\pm)}|\drho|\lambda_i^{(\mp)}\ra =0.
\ee
This allows both sums in expression (\ref{Hmn3}) to be evaluated separately over eigenstates belonging to the two different subspaces before adding their contributions to compute the total QFI.
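For orientation, the following sketch assembles a single-parameter QFI from an eigendecomposition of the density operator using the standard eigenbasis formula $H=\sum_{i,j}2|\la\lambda_i|\drho|\lambda_j\ra|^2/(\lambda_i+\lambda_j)$, restricted to pairs with nonvanishing $\lambda_i+\lambda_j$. It is not a transcription of Eq.~(\ref{Hmn3}); it assumes that the matrix elements of $\drho$ within each subspace have already been computed, e.g.\ from Eqs.~(\ref{MatElements2}), with the cross-subspace elements vanishing per Eq.~(\ref{MatElements3}). The function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def qfi_from_blocks(evals_p, drho_p, evals_m, drho_m):
    """Single-parameter QFI from an eigen-decomposition of the density
    operator, H = sum_{i,j} 2 |<lam_i|drho|lam_j>|^2 / (lam_i + lam_j),
    evaluated separately in the (+) and (-) subspaces, whose cross
    matrix elements vanish.

    evals_p, evals_m : retained eigenvalues in each subspace
    drho_p, drho_m   : matrices <lam_i|d(rho)/dl|lam_j> in each subspace
    """
    H = 0.0
    for evals, drho in ((evals_p, drho_p), (evals_m, drho_m)):
        lam_sum = evals[:, None] + evals[None, :]
        mask = lam_sum > 1e-15            # skip the kernel of rho
        H += np.sum(2.0 * np.abs(drho[mask]) ** 2 / lam_sum[mask])
    return H
\end{verbatim}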
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{2DOSR_QFI_vs_B_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of QFI for estimating the semi-separation distance, $l$, of each point source from the pair centroid that has been fixed along the optical axis in the plane of Gaussian focus vs. fractional bandwidth, $B$, for different values of $l$.}
\label{2D_OSR_QFI_vs_B}
\end{figure}
\subsection{Numerical Results for QFI for 2D Pair OSR}
In Fig.~\ref{2D_OSR_QFI_vs_B}, we plot the value of QFI for estimating the separation of a symmetric source pair that is located in the transverse plane of Gaussian focus, with the origin in that plane fixed at the axial point that we take to be the pair's centroid. As the fractional bandwidth increases, QFI decreases much as it did for 2D localization of a single source that we treated in the previous section. However, even for 10\% fractional emission bandwidth and pair separations that are twice as large as the Airy parameter, QFI decreases to a value that is no more than 5\% below the maximum theoretical value of $4\pi^2$ for estimating the 2D pair separation distance for purely monochromatic emission. In other words, the maximum information that can be extracted about the pair separation remains rather robust with increasing emission bandwidth.
\section{Realization of QFI via Low-order Zernike Projections}
We have noted previously \cite{YuPrasad18, PrasadYu19} that low-order Zernike wavefront projections furnish an entirely classical measurement protocol that can realize pair-superresolution QFI in the extreme limit of vanishing pair separation. The Zernike modes, being real and orthogonal over the unit disk, might also meet the optimality criteria laid out by Rehacek {\it et al.} \cite{Rehacek17} when extended to two dimensions with respect to a clear circular imaging aperture. We now show that the same protocol, using the lowest four orders of Zernike polynomials, namely $Z_1, Z_2, Z_3, Z_4$ in Noll's notation \cite{Noll76}, works well even when the emission bandwidth of the sources is not particularly narrow and the source separation is not too large. Since, owing to the realness of the Zernike modes, the squared moduli of their normalized projections, which determine their probabilities, are the same for both the symmetric source-pair separation and single-source localization problems, identical results for the Zernike-based classical FI (CFI) are obtained for both, provided the semi-separation distance in the former problem is identified, as we have already done, with the source distance in the latter.
The first four Zernikes are defined as the following functions of polar coordinates over the unit disk in the pupil plane:
\ba
\label{Z1234}
Z_1(\bu)=&{1\over \sqrt{\pi}};\ \
Z_2(\bu)={2\over\sqrt{\pi}}u\,\cos\phi_u;\nn
Z_3(\bu)=&{2\over\sqrt{\pi}}u\,\sin\phi_u;\ \
Z_4(\bu)=\sqrt{3\over \pi}(1-2u^2).
\end{align}
The choice of the specific coefficients for these functions ensures that they have unit norm over the unit disk, {\it i.e.,} $\la Z_n|Z_n\ra=1$. The probability of observing an imaging photon in the $n$th Zernike mode, $P_n=\la Z_n|\hrho|Z_n\ra$, is the same whether $\hrho$ is given by Eq.~(\ref{rho}) or Eq.~(\ref{rho2}) for the two different problems considered in this paper. In view of form (\ref{wavefunction}) for the wavefunction, $\la \bu|K_f\ra$, we may express $P_n$ as
\be
\label{ProbZ}
P_n\!=\!{1\over \pi B}\int_{-B/2}^{B/2}\!\!\!\!\!\! df \left\vert\int\!\! P(\bu)\exp[-i2\pi(1+f)\bl\cdot\bu]\, Z_n(\bu) d^2u\right\vert^2.
\ee
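As a quick numerical check of the unit normalization $\la Z_n|Z_n\ra=1$ quoted above, one may integrate $|Z_n|^2$ over the unit disk; a minimal sketch (in Python, with SciPy) follows, with the functions transcribed directly from Eq.~(\ref{Z1234}).
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

Z = {
    1: lambda u, phi: 1.0 / np.sqrt(np.pi),
    2: lambda u, phi: 2.0 / np.sqrt(np.pi) * u * np.cos(phi),
    3: lambda u, phi: 2.0 / np.sqrt(np.pi) * u * np.sin(phi),
    4: lambda u, phi: np.sqrt(3.0 / np.pi) * (1.0 - 2.0 * u ** 2),
}

for n, Zn in Z.items():
    # integrate |Z_n|^2 over the unit disk; the factor u is the polar Jacobian
    norm, _ = dblquad(lambda u, phi: Zn(u, phi) ** 2 * u,
                      0.0, 2.0 * np.pi,                   # phi limits (outer)
                      lambda phi: 0.0, lambda phi: 1.0)   # u limits (inner)
    print(n, norm)   # each result should equal 1 to numerical accuracy
\end{verbatim}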
For the four Zernikes of interest here, we may calculate the corresponding probabilities as
\ba
\label{ProbZ1234}
P_1(l)=&{2\over B\pi l}\int_{x_-}^{x_+}dx{J_1^2(x)\over x^2};\nn
P_2(l)=&{8\over B\pi l}\cos^2\phi_l\int_{x_-}^{x_+}dx{J_2^2(x)\over x^2};\nn
P_3(l)=&{8\over B\pi l}\sin^2\phi_l\int_{x_-}^{x_+}dx{J_2^2(x)\over x^2};\nn
P_4(l)=&{96\over B\pi l}\int_{x_-}^{x_+}dx\Bigg[{J_0^2(x)\over x^4}+J_1^2(x)\Big({4\over x^6}-{1\over x^4}\nn &+{1\over 16 x^2}\Big)+J_0(x)J_1(x)\left({1\over2 x^3}-{4\over x^5}\right)\Bigg],
\end{align}
where $x_\pm$ are defined as
\be
\label{xpm}
x_\pm = 2\pi l\,(1\pm B/2).
\ee
We derived expressions (\ref{ProbZ1234}) by individually substituting the four Zernike polynomials (\ref{Z1234}) into Eq.~(\ref{ProbZ}), using the first of the Bessel identities in Eq.~(\ref{BesselIdentities}) to integrate over the angular coordinate $\phi_u$ in the unit disk, and then using the second of these identities and a third Bessel identity (\ref{BesselIdentity3}) to integrate over the radial coordinate $u$. The final step involved a simple scaling of the integration variable $f$ via the substitution $x=2\pi(1+f)l$.
All of the integrals in Eq.~(\ref{ProbZ1234}) may in fact be evaluated in closed form. The values of the corresponding indefinite integrals, listed in the tables of Bessel integrals in Ref.~\cite{besint19} on pages 244 and 263, were used to express the requisite probabilities, $P_n(l), \ n=1,\ldots,4,$ in closed form. Their derivatives, $dP_n/dl$, on the other hand, are more simply calculated by noting that expressions (\ref{ProbZ1234}) depend on $l$ only through its presence in the denominator of the overall coefficient and in the integration limits, which renders this calculation quite simple when we use the identity,
\ba
\label{integral_identity}
{d\over dl}\left[{1\over l}\int_{b(l)}^{a(l)} f(x)\, dx\right]&=-{1\over l^2}\int_{b(l)}^{a(l)} f(x)\, dx\nn
&+{1\over l}\left[f(a) {da\over dl}-f(b){db\over dl}\right].
\end{align}
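A minimal numerical sketch of Eqs.~(\ref{ProbZ1234}) and (\ref{xpm}) follows; it evaluates the Bessel integrals by direct quadrature rather than from the closed forms of Ref.~\cite{besint19}, and the function name and the default orientation angle $\phi_l$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jv

def zernike_probs(l, B, phi_l=np.pi / 4):
    """P_1..P_4 of Eq. (ProbZ1234), evaluated by direct quadrature."""
    x_m = 2 * np.pi * l * (1 - B / 2)
    x_p = 2 * np.pi * l * (1 + B / 2)
    pref = 1.0 / (B * np.pi * l)
    P1 = 2 * pref * quad(lambda x: j1(x) ** 2 / x ** 2, x_m, x_p)[0]
    I2 = quad(lambda x: jv(2, x) ** 2 / x ** 2, x_m, x_p)[0]
    P2 = 8 * pref * np.cos(phi_l) ** 2 * I2
    P3 = 8 * pref * np.sin(phi_l) ** 2 * I2
    f4 = lambda x: (j0(x) ** 2 / x ** 4
                    + j1(x) ** 2 * (4 / x ** 6 - 1 / x ** 4 + 1 / (16 * x ** 2))
                    + j0(x) * j1(x) * (1 / (2 * x ** 3) - 4 / x ** 5))
    P4 = 96 * pref * quad(f4, x_m, x_p)[0]
    return np.array([P1, P2, P3, P4])
\end{verbatim}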
Based on the {\em observed} mode-projection probabilities and their derivatives, we can now calculate the classical FI for estimating the distance $l$. Since an imaging photon has the probability $\bar P=1-\sum_{n=1}^N P_n$ of being found in the {\em unobserved} modes, we can write down the full CFI \cite{VT68} per photon for estimating $l$ from projective measurements in the $N$ Zernike modes as
\be
\label{CFI}
F_N(l)=\sum_{n=1}^N {1\over P_n}\left({dP_n\over dl}\right)^2+{1\over \bar P}\left({d\bar P\over dl}\right)^2.
\ee
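Given the mode probabilities, Eq.~(\ref{CFI}) can be assembled with numerical derivatives in place of the analytic ones obtained from Eq.~(\ref{integral_identity}); a sketch building on the \texttt{zernike\_probs} helper of the previous listing (again an illustrative name, not from the paper):
\begin{verbatim}
def zernike_cfi(l, B, n_modes=4, dl=1e-5):
    """CFI of Eq. (CFI) for the first n_modes Zernike projections, with
    dP_n/dl obtained by a central finite difference."""
    P = zernike_probs(l, B)[:n_modes]
    dP = (zernike_probs(l + dl, B)[:n_modes]
          - zernike_probs(l - dl, B)[:n_modes]) / (2 * dl)
    Pbar, dPbar = 1.0 - P.sum(), -dP.sum()
    good = P > 0                  # a mode with P_n = 0 contributes nothing
    return np.sum(dP[good] ** 2 / P[good]) + dPbar ** 2 / Pbar
\end{verbatim}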
In Fig.~\ref{CFI_TipTilt}, we plot the numerically evaluated CFI for estimating $l$ when only projections into the tip and tilt modes, $Z_2,Z_3$, are observed and the remaining mode projections are not, for values of $l$ varying between 0 and 2 and for five different values of the fractional bandwidth, $B$, namely 0, 0.05, 0.10, 0.15, and 0.20. As expected, the fidelity of estimation, represented by CFI, degrades with increasing bandwidth, since the diffraction-induced image, whose width in the image domain is proportional to the wavelength, gets fuzzier with an increasing range of emission wavelengths. Note that the shorter the distance $l$, the less impact the bandwidth increase has on the value of tip-tilt CFI, which approaches the quantum FI in the limit of $l\to 0$, regardless of the value of $B$, even with observations in the tip and tilt modes alone. This behavior was noted earlier in Refs.~\cite{YuPrasad18,PrasadYu19} as arising from the fact that these tip and tilt modes are perfect matched filters for the $x$ and $y$ coordinates, respectively, of the vector $\bl$ in this limit. The oscillatory behavior of the CFI curves with increasing $l$, with alternating local maxima and minima, on the other hand, has to do with the fact that at certain values of $l$, $dP_2/dl=dP_3/dl=0$, and consequently the first-order information about $l$ provided by the tip and tilt modes alone vanishes at those values.
The values of CFI increase with the inclusion of further Zernike modes, as Fig.~\ref{CFI_TipTiltPistonDefocus} demonstrates. In this figure, we plot the relative contributions of the various Zernike modes, starting with the tip and tilt modes for two different values of $B$, namely 0 and 0.2, which correspond to the same values of $B$ as for the outside pair of curves in Fig.~\ref{CFI_TipTilt}. The lowest pair of curves that are bunched together represent the tip-tilt contribution to CFI for the two values of $B$. The next higher closely paired curves display CFI for the same two values of $B$ when the contribution of the piston Zernike, $Z_1$, is added, while the second highest pair of curves exhibit CFI when the final Zernike mode, $Z_4$, often called the defocus mode, is also included. The very highest pair of curves represent the overall CFI when the contributions from these four Zernikes and all other unobserved modes are added together. In each curve pair, the higher, solid one corresponds to $B=0$ and the lower, dashed one to $B=0.20$. To avoid confusion, we have not displayed the dependence of CFI for the remaining three, intermediate values of $B$ also covered by Fig.~\ref{CFI_TipTilt}, but those dependences fall, as expected, between each pair of solid and dashed curves shown in Fig.~\ref{CFI_TipTiltPistonDefocus}. As we readily see, even adding the piston mode to tip-tilt mode projections greatly enhances CFI over a much larger range of separations than tip-tilt projections alone.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{CFI_TipTiltRest_vs_separation_varyingB_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of CFI for estimating $l$ from wavefront projections into the tip-tilt modes, $Z_2$ and $Z_3$, vs. $l$ for a variety of values of the fractional bandwidth, $B=\Delta f/f_0$.}
\label{CFI_TipTilt}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{CFI_TipTiltPistonDefocusRest_vs_separation_B0_pt2_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of CFI for estimating $l$ from wavefront projections into the tip-tilt, piston, and defocus modes, namely $Z_1,Z_2,Z_3,Z_4$, vs. $l$ for values 0 (solid lines) and 0.20 (dashed lines) of the fractional bandwidth, $B=\Delta f/f_0$. The bottom three pairs of closely-bunched curves capture the increase of CFI from partial contributions of the tip-tilt, piston, and defocus modes, while the top pair represent the total CFI from inclusion of the unobserved modes as well.}
\label{CFI_TipTiltPistonDefocus}
\end{figure}
\paragraph*{Discussion}
To gain some quantitative sense of the scale of resolution and the number of photons that might be needed to achieve minimum acceptable estimation variances, let us consider the following example. A symmetrical pair of point sources, separated by $2l=0.4$ in Airy diffraction parameter units, emits at the center wavelength $\lambda_0=500$ nm and produces geometrical images a distance $z_I=0.2$ m away from a thin spherical mirror of aperture radius $R=0.1$ m. In physical units, the pair separation has the value $2l\,\delta=400$ nm. If the pair emission is observed in a 10\% fractional bandwidth ($B=0.1$), the per-photon values of CFI calculated from observing projections into the tip-tilt Zernikes alone and into the tip-tilt-piston-defocus Zernikes alone are equal to 22.85 and 39.29, respectively, while the QFI for $l=0.2$ and $B=0.1$ has the value 39.41, just a bit lower than the zero-bandwidth value of $4\pi^2=39.48$ per photon. In other words, observing tip-tilt ($Z_2,Z_3$) projections alone can in principle achieve 58\% of the quantum upper bound on Fisher information (FI), while including the piston and defocus mode ($Z_1,Z_4$) projections as well raises the CFI to about 99.5\% of the quantum limit, making the latter classical limit essentially indistinguishable from the quantum upper bound.
As for the minimum standard deviations (SDs), $\sigma_l^{({\rm min})}$, for estimating $l$, assuming {\em unbiased} estimation, their quantum and classical lower limits are given by the square roots of the reciprocals of QFI and CFI, respectively. For our specific example, we calculate these SDs on estimating $l$ to be 0.1593 and 0.1595 per photon. For $N$ photons, since CFI and QFI both scale up by a factor of $N$ in the photon-counting regime, the SDs are smaller by a factor of $\sqrt{N}$. For $N=100$, the minimum fractional error for estimating $l$ from the four lowest Zernike-mode projections is equal to $\sigma_l^{({\rm min})}/l=0.01593/0.2$, which is less than 8\%, making such estimations quite accurate even with just 100 photons. If finite detection errors, such as dark counts or additive noise, are present, as is true for even the best photon-counting detectors \cite{Hadfield09,Slussarenko19}, the minimum photon numbers needed for resolving such a source pair would need to be higher.
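The arithmetic of this example can be reproduced directly from the per-photon CFI and QFI values quoted above; a short sketch:
\begin{verbatim}
import numpy as np

qfi, cfi_tt, cfi_4 = 39.41, 22.85, 39.29     # per-photon values quoted above
print(cfi_tt / (4 * np.pi ** 2))             # ~0.58 of the monochromatic bound
print(cfi_4 / (4 * np.pi ** 2))              # ~0.995 of the monochromatic bound

sd_q, sd_c = 1 / np.sqrt(qfi), 1 / np.sqrt(cfi_4)
print(sd_q, sd_c)                            # ~0.1593 and ~0.1595 per photon

N, l = 100, 0.2
print(sd_c / (np.sqrt(N) * l))               # ~0.08 fractional error at N = 100
\end{verbatim}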
The mode projections, be they in the Zernike or another basis, can be acquired in the laboratory using a digital holographic mask \cite{Paur16} that encodes the modes in a Fourier optics set-up \cite{Goodman96, YuPrasad18}. A maximum-likelihood algorithm, as we also discussed in Ref.~\cite{YuPrasad18}, can then be used to recover the parameters of interest from the projection count data acquired using a photon-counting camera.
\section{Concluding Remarks}
This paper has presented a theoretical analysis of the problems of quantum-limited source localization and symmetrical point-source-pair separation in a single 2D object plane as the fractional bandwidth, $B$, of incoherent emission from the sources increases from zero and detection is limited only by photon counting noise. For both problems, the most important parameter that determines how the quantum estimation-theoretic bound degrades with increasing fractional bandwidth is the effective space-bandwidth parameter, $\pi B l$, where $l$, in units of the Airy diffraction parameter, is either the source distance from a fixed point when localizing a point source or the distance of either source from the {\it a priori} known midpoint of the line joining the pair of point sources when resolving the pair. In both cases, the fixed point was chosen without loss of generality to be the point at which the optical axis intersects the object plane, taken to be the plane of Gaussian focus.
The number of eigenstates of the imaging-photon density operator with eigenvalues significantly different from 0, which thus control the fundamental quantum limit on the minimum variance of any possible unbiased estimation of $l$, is of order $S\defeq \lceil 2Bl\rceil$, with that limiting minimum error variance increasing significantly only when $S$ greatly exceeds 1. We may regard $S$ as the effective dimensionality of the continuous-state eigenvalue problem for the single-photon density operator for a point source emitting incoherently in a finite bandwidth. We have used the machinery of prolate spheroidal wave functions to formulate and solve the eigenvalue problem, with which we then calculated the quantum bound numerically for a clear circular imaging pupil, exhibiting the detailed manner in which the quantum error bound increases with increasing value of $S$ for the two problems.
We have also shown that wavefront projections in the basis of Zernike modes can yield estimation fidelities approaching the quantum upper bound, even with a few low-order mode projections when the localization and pair separation distances are comparable to or smaller than the characteristic Rayleigh resolution scale. Including higher-order Zernike modes will surely reduce the gap between CFI and QFI for all values of such distances, but our recent work on quantum bounds for extended-source imaging has shown that this gap may not close fully even when {\em all} Zernike projections are included \cite{Prasad20c}.
While this paper has considered in detail the simplest form of emission power spectrum, one distributed uniformly over a finite bandwidth outside which it vanishes identically, any general integrable power spectrum may be treated by an adaptation of the present calculation, as we show without detailed numerical evaluations in Appendix C. For unimodal power spectra, such as Lorentzian and Gaussian power spectra, we can always identify an effective SBP of the form $\pi Bl$, in which $B$ is of the order of the full width at half maximum (FWHM) of the emission spectrum expressed as a fraction of its center frequency. We expect the detailed calculations presented in this paper and the conclusions drawn from them to hold qualitatively even for such general power spectra.
Extensions of the finite-bandwidth QFI calculation to the axial dimension and pair brightness asymmetry for full 3D pair localization and separation will accord wavefront-projection-based superresolution techniques further value. Ultimately, however, these considerations will need to be generalized to finite sources with spatially non-uniform brightness distributions for a variety of low-light imaging applications.
\acknowledgments
The author is grateful for the research facilities provided by the School of Physics and Astronomy at the U. of Minnesota where he has held the position of Visiting Professor for the last two years. This work was partially supported under a consulting agreement with the Boeing Company and by Hennepin Healthcare Research Institute under a research investigator appointment.
\section{Introduction}
The Standard Model (SM) of strong and electroweak interactions, spectrally
completed by the discovery of its Higgs boson at the LHC \cite{higgs-mass},
seems to be the model of physics at the Fermi energies. This is because various experiments
have so far revealed no new particles beyond the SM spectrum. There is, however, at least dark matter (DM), which requires new particles beyond the SM. Physically, therefore, we must use every
opportunity to understand where such new particles, if any, can hide.
In the present work we study a massive spin-3/2 field hidden in the SM spectrum. This higher-spin field, described by the Rarita-Schwinger equations \cite{Rarita:1941mf,pilling}, has to obey certain constraints in order to have the correct degrees of freedom when it is on the physical shell. At the renormalizable level, it can couple to the SM matter only via the neutrino portal (the composite SM singlet formed by the lepton doublet and the Higgs field). This interaction is such that it vanishes when the spin-3/2 field is on shell. In Sec. 2 below we give the model and the basic constraints on the spin-3/2 field.
In Sec. 3 we study collider signatures of the spin-3/2 field. There we study $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings in detail, giving analytical computations and numerical predictions. We propose a neutrino-Higgs collider and emphasize the importance of the linear collider in probing the spin-3/2 field.
In Sec. 4 we turn to loop effects of the spin-3/2 field. We find that the spin-3/2 field adds logarithmic and quartic UV-sensitivities atop the logarithmic and quadratic ones in the SM. We convert the power-law UV-dependent terms into curvature terms as a result of the incorporation of gravity into the SM. Here we use the results of \cite{gravity,gravity2}, which show that gravity can be incorporated into the SM properly and naturally {\it (i)} if the requisite curved geometry is structured by interpreting the UV cutoff as a constant value assigned to the spacetime curvature, and {\it (ii)} if the SM is extended by a secluded new physics (NP) sector that does not have to interact with the SM. This mechanism eliminates the big hierarchy problem by metamorphosing the quadratic UV part of the Higgs boson mass into a Higgs-curvature coupling.
In Sec. 5 we discuss the possibility of Higgs inflation via the large non-minimal Higgs coupling induced by the spin-3/2 field. We find that Higgs inflation is possible over a wide range of parameters provided that the secluded NP sector is crowded enough.
In Sec. 6 we discuss the DM. We show therein that the spin-3/2 field is a viable DM candidate. We also show that the singlet fields in the NP can form a non-interacting DM component.
In Sec. 7 we conclude. There, we give a brief list of problems that can be studied to further the material presented in this work.
\section{A Light Spin-3/2 Field}
Introduced for the first time by
Rarita and Schwinger \cite{Rarita:1941mf}, $\psi_{\mu}$ propagates with
\begin{eqnarray}
S^{\alpha\beta}(p) = \frac{i}{{\slashed{p}} - M} \Pi^{\alpha\beta}(p),
\end{eqnarray}
to carry one spin-3/2 and two spin-1/2 components through the
projector \cite{pilling}
\begin{eqnarray}
\label{project}
\Pi^{\alpha\beta} = -\eta^{\alpha\beta} +
\frac{\gamma^{\alpha}\gamma^{\beta}}{3}+
\frac{\left(\gamma^{\alpha}p^{\beta} -
\gamma^{\beta}p^{\alpha}\right)}{3M}+\frac{2
p^{\alpha}p^{\beta}}{3 M^2},
\end{eqnarray}
that exhibits both spinor and vector characteristics.
It is necessary to impose \cite{pilling}
\begin{eqnarray}
\label{eqn4}
p^{\mu}\psi_{\mu}(p)\rfloor_{p^2=M^2}=0,
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn4p}
\gamma^{\mu}\psi_{\mu}(p)\rfloor_{p^2=M^2}=0,
\end{eqnarray}
to eliminate the two spin-1/2 components to make $\psi_{\mu}$
satisfy the Dirac equation
\begin{eqnarray}\label{eqn5}
\left(\slashed{p} - M\right)\psi_{\mu}=0
\end{eqnarray}
as expected of an on-shell fermion. The constraints (\ref{eqn4}) and (\ref{eqn4p}) imply that $p^{\mu}\psi_{\mu}(p)$ and $\gamma^{\mu}\psi_{\mu}(p)$ both vanish on the physical shell $p^2=M^2$. The latter is illustrated in Fig. \ref{fig:Px} taking $\psi_{\mu}$ on-shell.
Characteristic of singlet fermions, the $\psi_{\mu}$, at the renormalizable level, makes contact with the SM via
\begin{eqnarray}
\label{int1}
{\mathcal{L}}^{(int)}_{3/2} = c^{i}_{{3/2}} \overline{L^{i}} H \gamma^{\mu}\psi_{\mu} + {\text{h.c.}}
\end{eqnarray}
in which
\begin{eqnarray}
L^i = \left(\begin{array}{c}\nu_{\ell L}\\ \ell_L\end{array}\right)_{i}
\end{eqnarray}
is the lepton doublet ($i=1,2,3$), and
\begin{eqnarray}
H = \frac{1}{\sqrt{2}}\left(\begin{array}{c}v + h + i \varphi^0\\ \sqrt{2} \varphi^{-}\end{array}\right)
\end{eqnarray}
is the Higgs doublet with vacuum expectation value $v\approx 246\ {\rm GeV}$, Higgs boson $h$, and Goldstone bosons $\varphi^{-}$, $\varphi^0$ and $\varphi^+$ (forming the longitudinal components of $W^{-}$, $Z$ and $W^{+}$ bosons, respectively).
In general, neutrinos are sensitive probes of singlet fermions. They can get masses through, for instance, the Yukawa interaction (\ref{int1}), which leads to the Majorana mass matrix
\begin{eqnarray}
(m_{\nu})^{i j}_{3/2} \propto c^i_{{3/2}} \frac{v^2}{M} c^{\star j}_{{3/2}}
\end{eqnarray}
after integrating out $\psi_{\mu}$. This mass matrix cannot, however, lead to the experimentally known neutrino mixings \cite{neutrino-mass}. This means that the required flavor structures necessitate additional singlet fermions. One option is the right-handed neutrinos $\nu_R^k$ of mass $M_k$ ($k=1,2,3,\dots$), which interact with the SM through
\begin{eqnarray}
\label{int2}
{\mathcal{L}}^{(int)}_{R} = c_{{R}}^{i k} \bar{L}^i H \nu_R^k + {\text{h.c.}}
\end{eqnarray}
to generate the neutrino Majorana masses
\begin{eqnarray}
(m_{\nu})^{i j}_{R} \propto c_{{R}}^{i k} \frac{v^2}{M_k} c_{{R}}^{\star k j}
\end{eqnarray}
of more general flavor structure. This mass matrix must have enough degrees of freedom to fit to the data \cite{neutrino-mass}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5]{onshell.pdf}
\end{center}
\caption{$\psi_{\mu}-h-\nu_L$ coupling with vertex factor $i c_{3/2} \gamma^{\mu}$. Scatterings in which $\psi_{\mu}$ is on shell must all be forbidden since $c_{3/2} \gamma^{\mu} \psi_{\mu}$ vanishes on the mass shell by the constraint (\ref{eqn4p}). This ensures stability of $\psi_{\mu}$ against decays and all sorts of co-annihilations.} \label{fig:Px}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{vhvhZmed-cropped.pdf}
\end{center}
\caption{The $\nu-Z$ box mediating the $\nu_L h \rightarrow \nu_L h$ scattering in the SM. The $e-W$ box is not shown. } \label{nhnh-SM}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.40]{DM2-cropped.pdf}
\end{center}
\caption{$\nu_L h \rightarrow \nu_L h$ scattering with $\psi_{\mu}$ mediation. No resonance can occur at $\sqrt{s}=M$ because $\psi_{\mu}$ cannot come to the mass shell.} \label{nhnh-3/2}
\end{figure}
Here we make a pivotal assumption. We assume that $\psi_{\mu}$ and $\nu_R^k$ can weigh as low as a TeV, and that $c^i_{{3/2}}$ and some of $c_{{R}}^{i k}$ can be ${\mathcal{O}}(1)$. We, however, require that the contributions to neutrino masses from
$\psi_{\mu}$ and $\nu_R$ add up to reproduce the experimental result
\begin{eqnarray}
\label{numass}
(m_{\nu})^{i j}_{3/2} + (m_{\nu})^{i j}_{R} \approx (m_{\nu})^{i j}_{exp}
\end{eqnarray}
via cancellations among different terms. We therefore take
\begin{eqnarray}
c_{{3/2}} \lesssim {\mathcal{O}}(1)\,,\; M\gtrsim {\rm TeV}
\end{eqnarray}
and investigate the physics of $\psi_{\mu}$. This cancellation requirement does not have to cause any excessive fine-tuning simply because $\psi_{\mu}$ and $\nu_R^k$ can have appropriate symmetries that correlate their couplings. One possible symmetry would be a rotation of $\gamma^{\mu}\psi_{\mu}$ and $\nu_R^k$ into each other. We defer the study of possible symmetries to another work in progress \cite{Ozan}. The right-handed sector, which can involve many $\nu_R^k$ fields, is interesting in itself, but hereon we focus on $\psi_{\mu}$ and take, for simplicity, $c^i_{{3/2}}$ real and family-universal ($c^i_{{3/2}}=c_{{3/2}}$ for all $i$).
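To see the scale involved, a rough numerical illustration of the naive see-saw estimate implied by the proportionalities above is given below; the undetermined $\mathcal{O}(1)$ proportionality factor is set to unity, and the chosen values of $c_{3/2}$ and $M$ are purely illustrative.
\begin{verbatim}
v = 246.0       # GeV, Higgs vacuum expectation value
M = 1.0e3       # GeV, spin-3/2 mass taken at the TeV scale (illustrative)
for c in (1.0, 1.0e-6):
    m_nu_eV = c ** 2 * v ** 2 / M * 1.0e9    # naive see-saw estimate, in eV
    print(c, m_nu_eV)
# c ~ O(1) gives m_nu ~ 6e10 eV, more than ten orders of magnitude above the
# sub-eV scale, so either c must be tiny or the psi_mu and nu_R contributions
# must cancel as in Eq. (numass).
\end{verbatim}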
\section{Spin-3/2 Field at Colliders}
It is only when it is off shell that $\psi_{\mu}$ can reveal itself through the interaction (\ref{int1}). This means that its effects are restricted to modifications of the scattering rates of SM particles. Indeed, as follows from (\ref{int1}), it participates in
\begin{enumerate}
\item $\nu_L h \rightarrow \nu_{L} h$ (and also $\nu_{L}\nu_{L} \rightarrow h h$)
\item $e^+ e^- \rightarrow W^+_L W^-_L$ (and also $\nu_{L}\nu_{L} \rightarrow Z_L Z_L$)
\end{enumerate}
at the tree level. They are analyzed below in detail.
\subsection{$\nu_L h \rightarrow \nu_{L} h$ Scattering}
Shown in Fig. \ref{nhnh-SM} are the two box diagrams which enable $\nu_L h \rightarrow \nu_L h$ scattering in the SM. Added to this loop-suppressed SM piece is the $\psi_{\mu}$ piece depicted in Fig. \ref{nhnh-3/2}. The two contributions add up to give the cross section
\begin{eqnarray}
\frac{d\sigma(\nu_L h \rightarrow \nu_L h)}{dt}= \frac{1}{16\pi}\frac{{\mathcal{T}_{\nu h}}({{s}},{{t}})}{(s-m_{h}^2)^2}
\end{eqnarray}
in which the squared matrix element
\begin{widetext}
\begin{eqnarray}
\label{mat-el-nuhnuh}
{\mathcal{T}_{\nu h}}({{s}},{{t}}) &=& 9\! \left(\frac{c_{3/2}}{3 M}\right)^4\!\! \left(\!
\left({{s}}-m_h^2\right)^2 + {{s}}{{t}}\right) \!-\! 16\! \left(\frac{c_{3/2}}{3 M}\right)^2\!\! \left(\!
2\left({{s}}-m_h^2\right)^2 \!+\! \left(2{{s}} -m_h^2\right){{t}}\right) {\mathbb{L}} \!+\! 2\left(
{{s}}-m_h^2\right)\left({{s}} + {{t}}-m_h^2\right) {\mathbb{L}}^2
\end{eqnarray}
\end{widetext}
\noindent involves the loop factor
\begin{eqnarray}
{\mathbb{L}}=\! \frac{(g_W^2\!+\!g_Y^2)^2 M_Z^2 m_h^2 I(M_Z)}{192 \pi^2}\! + \!\frac{g_W^4 M_W^2 m_h^2 I(M_W)}{96 \pi^2}
\end{eqnarray}
in which $g_W$ ($g_Y$) is the isospin (hypercharge) gauge coupling, and
\begin{widetext}
\begin{eqnarray}
I(\mu)=\int_{0}^{1}dx\int_{0}^{1-x}dy\int_{0}^{1-x-y}dz \left((s-m_h^2)(x+y+z-1) y - txz + m_h^2 y (y-1) + \mu^2 (x + y + z)\right)^{-2}
\end{eqnarray}
\end{widetext}
\noindent is the box function. In Fig. \ref{fig:Pxx}, we plot the total cross section $\sigma(\nu_L h \rightarrow \nu_L h)$ as a function of the neutrino-Higgs center-of-mass energy for different $M$ values. The first important feature of the plot is that there is no resonance formation around $\sqrt{s}=M$. This confirms the fact that $\psi_{\mu}$, under the constraint (\ref{eqn4p}), cannot come to the physical shell with the couplings in (\ref{int1}). In consequence, the main search strategy for $\psi_{\mu}$ is to look for deviations from the SM rates rather than for resonance shapes. The second important feature of the plot is that, in general, as revealed by (\ref{mat-el-nuhnuh}), the larger $M$ is, the smaller the $\psi_{\mu}$ contribution. The cross section starts around $10^{-7}\ {\rm pb}$ and falls rapidly with $\sqrt{s}$. (The SM piece, as a loop effect, is too tiny to be observable: $\sigma(\nu_L h \rightarrow \nu_L h)\lesssim 10^{-17}\ {\rm pb}$.) It is necessary to have some $10^{4}/{\rm fb}$ of integrated luminosity (100 times the target luminosity at the LHC) to observe a few events in a year. This means that $\nu_L h \rightarrow \nu_L h$ scattering can probe $\psi_{\mu}$ only at high luminosity, albeit within a completely new scattering scheme.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.5]{vhvh-total-xsection-cropped.pdf}
\end{center}
\caption{The total cross section for $\nu_L h \rightarrow \nu_L h$ scattering as a function of the neutrino-Higgs center-of-mass energy $\sqrt{s}$ for $M=1, 2$ and $3\ {\rm TeV}$ at $c_{3/2}= 1$. Cases with $c_{3/2}\neq 1$ can be reached via the rescaling $M\rightarrow M/c_{3/2}$.} \label{fig:Pxx}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.50]{skecth-cropped.pdf}
\end{center}
\caption{Possible neutrino-Higgs collider to probe $\psi_{\mu}$.} \label{fig:P10}
\end{figure}
Fig. \ref{fig:Pxx} shows that neutrino-Higgs scattering can be a promising channel to probe $\psi_{\mu}$ (at high-luminosity, high-energy machines). The requisite experimental setup would involve crossing Higgs factories with accelerator neutrinos. The setup, schematically depicted in Fig. \ref{fig:P10}, can be viewed as incorporating future Higgs (CEPC \cite{Ruan:2014xxa}, FCC-ee \cite{Gomez-Ceballos:2013zzn} and ILC \cite{Baer:2013cma}) and neutrino \cite{Choubey:2011zzq} factories. If ever realized, it could be a rather clean experiment with negligible SM background. This hypothetical ``neutrino-Higgs collider'', depicted in Fig. \ref{fig:P10}, must have, as suggested by Fig. \ref{fig:Pxx}, some $10^4/{\rm fb}$ of integrated luminosity to be able to probe a TeV-scale $\psi_{\mu}$. In general, the need for high luminosities is a disadvantage of this channel. (A feasibility study, technical design, and possible realization of a ``neutrino-Higgs collider'' fall outside the scope of the present work.)
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.45]{elpoWW-cropped.pdf}
\end{center}
\caption{The Feynman diagram for $e^+ e^- \rightarrow W_L^+ W_L^-$ scattering. The $\nu_L \nu_L \rightarrow Z_L Z_L$ scattering has the same topology.} \label{fig:w6}
\end{figure}
\subsection{$e^+ e^- \rightarrow W_L^+ W_L^-$ Scattering}
It is clear that $\psi_{\mu}$ directly couples to the Goldstone bosons $\varphi^{+,-,0}$ via (\ref{int1}). The Goldstones, though eaten up by the $W$ and $Z$ bosons in acquiring their masses, reveal themselves at high energies. In fact, the Goldstone equivalence theorem \cite{equivalence} states that scatterings at energy $E$ involving longitudinal $W^{\pm}_L$ bosons are equal to scatterings that involve $\varphi^{\pm}$ up to terms ${\mathcal{O}}(M_W^2/E^2)$. This theorem, with a similar equivalence for the longitudinal $Z$ boson, provides a different way of probing $\psi_{\mu}$. In this regard, depicted in Fig. \ref{fig:w6} is the $\psi_{\mu}$ contribution to
$e^+ e^- \rightarrow W_L^+ W_L^-$ scattering in light of the Goldstone equivalence. The SM amplitude is given in \cite{equivalence}. The total differential cross section
\begin{eqnarray}
\frac{d\sigma(e^+ e^- \rightarrow W^+_L W^-_L)}{dt}= \frac{1}{16\pi s^2} {{\mathcal{T}_{W_L W_L}}({{s}},{{t}})}
\end{eqnarray}
involves the squared matrix element
\begin{widetext}
\begin{eqnarray}
\label{mat-el-WLWL}
{{\mathcal{T}_{W_L W_L}}({{s}},{{t}})}\! &=&\! \left(\! \frac{g_W^2}{s-M_Z^2}\left(\!-1+\! \frac{M_Z^2}{4 M_W^2}\! +\! \frac{M_Z^2-M_W^2}{s}\right)\! +\! \frac{g_W^2}{s-4 M_Z^2}\left(\!1+\! \frac{M_W^2}{t}\right)\! +\! \frac{c^{2}_{3/2}}{3 M^2}\right)^{2}\!\!\! \left(-2 s M_W^2 -2 (t-M_W^2)^2\right) \nonumber\\
&+&\frac{c^4_{3/2} s}{18 M^2} \left(4 + \frac{t}{t-M^2}\right)^2
\end{eqnarray}
\end{widetext}
\noindent Plotted in Fig. \ref{fig:Wxx} is $\sigma(e^+ e^- \rightarrow W^+_L W^-_L)$ as a function of the $e^+ e^-$ center-of-mass energy for different values of $M$. The cross section, which falls with $\sqrt{s}$ without exhibiting a resonance shape, is seen to be large enough to be measurable at the ILC \cite{Baer:2013cma}. In general, the larger $M$ is, the smaller the cross section, but even $1/{\rm fb}$ of luminosity is sufficient for probing $\psi_{\mu}$ over a wide range of mass values.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.45]{elpoWW-xsection-cropped.pdf}
\end{center}
\caption{ The total cross section for $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scattering as a function of the electron-positron center-of-mass energy $\sqrt{s}$ for $M=1, 2$ and $3\ {\rm TeV}$ at $c_{3/2}= 1$. Cases with $c_{3/2}\neq 1$ can be reached via the rescaling $M\rightarrow M/c_{3/2}$.} \label{fig:Wxx}
\end{figure}
Collider searches for $\psi_{\mu}$, as illustrated by the $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings, can access spin-3/2 fields of several TeV mass. For instance, the ILC, depending on its precision, can confirm or exclude a $\psi_{\mu}$ of even 5 TeV mass with an integrated luminosity around $1/{\rm fb}$. Depending on the possibility and feasibility of a neutrino-neutrino collider (mainly with accelerator neutrinos), it may also be possible to study $\nu_L \nu_L \rightarrow h h$ and $\nu_L \nu_L \rightarrow Z_L Z_L$ scatterings, which are expected to have similar sensitivities to $M$.
\section{Spin-3/2 Field in Loops}
As an inherently off-shell field, $\psi_{\mu}$ is expected to reveal itself mainly in loops. One possible loop effect would be the generation of neutrino masses, but chirality forbids it. Despite the couplings in (\ref{int1}), therefore, neutrino masses do not receive any contribution from the $\psi_{\mu}-h$ loop.
One other loop effect of $\psi_{\mu}$ would be radiative corrections to the Higgs boson mass. This is not forbidden by any symmetry. The relevant Feynman diagram is depicted in Fig. \ref{fig:P7}. It adds to the Higgs boson squared-mass a logarithmic piece
\begin{eqnarray}
\label{log-corr}
\left(\delta m_h^2\right)_{log} = \frac{c_{3/2}^2}{12\pi^2}M^2\log G_F M^2
\end{eqnarray}
relative to the logarithmic piece $\log G_F \Lambda^2$ in the SM, and a quartic piece
\begin{eqnarray}\label{eqn88}
\left(\delta m_h^2\right)_{4} = \frac{c_{3/2}^2}{ 48 \pi^2} \frac{ \Lambda^4}{M^2}
\end{eqnarray}
both of which have the potential to override the experimental result \cite{higgs-mass}, depending on how large the UV cutoff $\Lambda$ is compared to the Fermi scale $G_F^{-1/2} = 293\ {\rm GeV}$.
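For a feel for the numbers, the following sketch evaluates Eqs.~(\ref{log-corr}) and (\ref{eqn88}) for representative choices of $c_{3/2}$, $M$ and $\Lambda$; the particular values of $M$ and $\Lambda$ are purely illustrative.
\begin{verbatim}
import numpy as np

G_F, m_h, c32 = 1.1664e-5, 125.0, 1.0        # GeV^-2, GeV, dimensionless

for M, Lam in [(1.0e3, 1.0e4), (1.0e3, 1.0e16)]:          # GeV (illustrative)
    d_log = c32 ** 2 / (12 * np.pi ** 2) * M ** 2 * np.log(G_F * M ** 2)
    d_quartic = c32 ** 2 / (48 * np.pi ** 2) * Lam ** 4 / M ** 2
    print(M, Lam, d_log / m_h ** 2, d_quartic / m_h ** 2)
# For M ~ 1 TeV the logarithmic piece is of order m_h^2, while already for
# Lambda ~ 10 TeV the quartic piece exceeds m_h^2 by roughly three orders of
# magnitude, illustrating the two destabilization effects discussed below.
\end{verbatim}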
The logarithmic contribution in (\ref{log-corr}), which originates from the $\eta^{\alpha\beta}$ part of (\ref{project}), gives rise to the little hierarchy problem in that the larger $M$ is, the stronger the destabilization of the SM Higgs sector. Leaving aside the possibility of cancellations with similar contributions from the right-handed neutrinos $\nu_R^k$ in (\ref{int2}), the little hierarchy problem can be prevented if $M$ (more precisely $M/c_{3/2}$) lies in the TeV domain.
The quartic contribution in (\ref{eqn88}), which originates from the longitudinal $p^{\alpha} p^{\beta}$ term in (\ref{project}), gives rise to the notorious big hierarchy problem in that the larger $\Lambda$ is, the larger the destabilization of the SM Higgs sector. This power-law UV sensitivity exists already in the SM
\begin{eqnarray}\label{eqn8}
\left(\delta m_h^2\right)_{2}&=&\frac{3 \Lambda^2}{16 \pi^2 {\left|\langle H \rangle\right|^2}}\left( m_h^2 + 2 M_W^2 + M_Z^2 - 4 m_t^2\right)
\end{eqnarray}
at the quadratic level \cite{Veltman:1980mj} and
violates the LHC bounds unless $\Lambda \lesssim 550\
{\rm GeV}$. This bound obviously contradicts the LHC experiments,
since the latter continue to confirm the SM at multi-TeV energies.
This experimental fact makes it obligatory to find a natural UV
completion of the SM.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.45]{naturalness.pdf}
\end{center}
\caption{The $\psi_{\mu}-\nu_L$ loop that generates the logarithmic correction in (\ref{log-corr}) and the quartic correction in (\ref{eqn88}).} \label{fig:P7}
\end{figure}
One possibility is to require $\left(\delta m_h^2\right)_{4}$ to cancel out $\left(\delta m_h^2\right)_{2}$. This requirement involves a severe fine-tuning (as with a scalar field
\cite{fine-tune-scalar}, Stueckelberg vector \cite{besteyle} and
spacetime curvature \cite{curvature-ft}) and cannot form a viable
stabilization mechanism.
Another possibility would be to switch, for instance, to dimensional
regularization scheme, wherein the quartic and quadratic
UV-dependencies are known to disappear. This, however, is not a
solution. The reason is that the SM, as a quantum field theory of
the strong and electroweak interactions, needs gravity to be
incorporated as the fourth known force. And the fundamental scale of
gravity, $M_{Pl}$, inevitably sets an ineliminable physical UV
cutoff (rendering $\Lambda$ physical). This cutoff forces quantum field theories to exist in between
physical UV and IR scales. The SM plus $\psi_{\mu}$ (plus right-handed neutrinos), for instance,
ranges from $G_{F}^{-1/2}$ at the IR up to $\Lambda$ at the UV such
that both scales are physical (not to be confused with the formal
momentum cutoffs employed in the cutoff regularization).
To stabilize the SM, it is necessary to metamorphose the
destabilizing UV effects. This necessitates a physical agent. The
most obvious candidate is gravity. That is to say, the
UV-naturalness problems can be a clue to how quantized matter must
gravitate. Indeed, quantized matter in classical curved geometry
suffers from inconsistencies. The situation can be improved by
considering long-wavelength matter, obtained by integrating out high-frequency
modes. This means that the theory to be carried into curved geometry
for incorporating gravity is not the full action but the effective
action (see the discussions in \cite{gravity} and \cite{gravity2}). Thus, starting with
the SM effective action in flat spacetime with well-known
logarithmic, quartic and quadratic UV-sensitivities, gravity can be
incorporated in a way ensuring UV-naturalness. More precisely,
gravity gets incorporated properly and naturally {\it (i)} if the
requisite curved geometry is structured by interpreting $\Lambda^2$
as a constant value assigned to the spacetime curvature, and {\it
(ii)} if the SM is extended by new physics (NP) that does not have
to interact with the SM. The $\psi_{\mu}$ can well be an NP field.
Incorporating gravity by identifying $\Lambda^2 g_{\mu\nu}$ with
the Ricci curvature $R_{\mu\nu}(g)$, fundamental scale of
gravity gets generated as
\begin{eqnarray}
\label{MPl}
M_{Pl}^2 \approx \frac{\left(n_b-n_f\right)}{2(8 \pi)^2} \Lambda^2
\end{eqnarray}
where $n_b$ ($n_f$) are the total number of bosons (fermions) in the
SM plus the NP. The $\psi_{\mu}$ increases $n_f$ by 4, right-handed neutrinos by 2. There are
various other fields in the NP, which contribute to $n_b$ and $n_f$
to ensure $\Lambda \lesssim M_{Pl}$. Excepting $\psi_{\mu}$, they
do not need to interact with the SM fields. Induction of $M_{Pl}$
ensures that the quadratic UV-contributions to vacuum energy are
canalized not to the cosmological constant but to the gravitational
constant (see \cite{demir-ccp} arriving at this result in a
different context). This suppresses the cosmological constant down
to the neutrino mass scale.
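As a quick consistency check of Eq.~(\ref{MPl}), note that a Planckian cutoff $\Lambda\approx M_{Pl}$ requires $n_b-n_f\approx 2(8\pi)^2$; a one-line sketch:
\begin{verbatim}
import numpy as np
# Eq. (MPl) with Lambda = M_Pl gives n_b - n_f ~ 2 (8 pi)^2
print(2 * (8 * np.pi) ** 2)    # ~1263, i.e. roughly the 1300 quoted in Sec. 5
\end{verbatim}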
The quartic UV contributions in (\ref{eqn88}) and the quadratic
contributions in (\ref{eqn8}) (suppressing contributions from the right-handed
neutrinos $\nu_R^k$) change their roles with the inclusion
of gravity. Indeed, the corrections to the Higgs mass term $\left[\left(\delta m_h^2\right)_{4}\!+\!\left(\delta m_h^2\right)_{2} \right]\!\! H^{\dagger}\! H$ turn
into
\begin{equation}
\label{exp}
\left[\!\frac{3\!\left(\!m_h^2\! +\! 2 M_W^2\! +\! M_Z^2\! -\! 4 m_t^2\!\right)}{(8\pi)^2\left|\langle H \rangle\right|^2}
\!+\! \frac{c_{3/2}^2}{12(n_b\!-\!n_f)}\! \frac{M_{Pl}^2}{M^2}\! \right]\!\! R H^{\dagger}\! H
\end{equation}
which is nothing but the direct coupling of the Higgs field to the
scalar curvature $R$. This Higgs-curvature coupling is perfectly
natural; it has no potential to destabilize the Higgs sector.
Incorporation of gravity as in \cite{gravity,gravity2} leads, therefore, to
UV-naturalization of the SM with a nontrivial NP sector
containing $\psi_{\mu}$ as its interacting member.
\section{Spin-3/2 Field as Enabler of Higgs Inflation}
The non-minimal Higgs-curvature coupling in (\ref{exp}) reminds one at once of the possibility of Higgs inflation. Indeed, the Higgs field has been shown in \cite{higgs-inf,higgs-inf-2} to lead to the correct inflationary expansion provided that
\begin{eqnarray}
\frac{c_{3/2}^2}{12(n_b-n_f)} \frac{M_{Pl}^2}{M^2} \approx 1.7\times 10^{4}
\end{eqnarray}
after dropping the small SM contribution in (\ref{exp}). This relation puts constraints on $M$ and $\Lambda$ depending on how crowded the NP is.
For a Planckian UV cutoff $\Lambda \approx M_{Pl}$, the Planck scale in (\ref{MPl}) requires $n_b - n_f\approx 1300$, and this leads to $M/c_{3/2}\approx 6.3\times 10^{13}\ {\rm GeV}$. This heavy $\psi_{\mu}$, weighing not far from the see-saw and axion scales, acts as an enabler of Higgs inflation. (Of course, all this makes sense if the $\psi_{\mu}$ contribution in (\ref{log-corr}) is neutralized by similar contributions from the right-handed neutrinos $\nu_R^k$ to alleviate the little
hierarchy problem.)
For an intermediate UV cutoff $\Lambda\ll M_{Pl}$, $n_b-n_f$ can be large enough to bring $M$ down to lower scales. In fact, $M$ gets lowered to $M\sim {\rm TeV}$ for $n_b-n_f\simeq 10^{24}$, and this sets the UV cutoff $\Lambda \sim 3\ {\rm TeV}$. This highly crowded NP illustrates how small $M$
and $\Lambda$ can be. Less crowded NP sectors lead to intermediate-scale $M$ and $\Lambda$.
It therefore follows that it is possible to realize Higgs inflation through the Higgs-curvature coupling (corresponding to the quartic UV-dependence that $\psi_{\mu}$ induces on the Higgs mass). Higgs inflation turns out to be decided by how heavy $\psi_{\mu}$ is and how crowded the NP is. It is interesting that the $\psi_{\mu}$ hidden in the SM spectrum enables successful Higgs inflation if gravity is incorporated into the SM as in \cite{gravity,gravity2}.
\section{Spin-3/2 Field as Dark Matter}
Dark matter (DM), forming roughly one-fourth of the energy content of the Universe, must be electrically neutral and long-lived. The negative searches \cite{plehn,leszek} so far have added one more feature: the DM must have exceedingly suppressed interactions with the SM matter. It is not hard to see that the spin-3/2 fermion $\psi_{\mu}$ possesses all these properties. Indeed, the constraint (\ref{eqn4p}) ensures that scattering processes in which $\psi_{\mu}$ is on its mass shell must all be forbidden, simply because its interactions in (\ref{int1}) involve the vertex factor $c_{3/2} \gamma^{\mu}$. This means that decays of $\psi_{\mu}$ as in Fig.~\ref{fig:Px}, as well as its co-annihilations with itself and other SM fields, are all forbidden. Its density therefore does not change with time, and the observed DM relic density \cite{planck} must be its primordial density, which is determined by the short-distance physics from which $\psi_{\mu}$ descends. It is not possible to calculate the relic density without knowing that short-distance physics. Its mass and couplings, on the other hand, can be probed via the known SM scatterings studied in Sec. 3 above. In consequence, the $\psi_{\mu}$, as an inherently off-shell fermion hidden in the SM spectrum, possesses all the features required of a DM candidate.
Of course, the $\psi_{\mu}$ is not the only DM candidate in the setup. The crowded NP sector, needed to incorporate gravity in a way that solves the hierarchy problem (see Sec. 4 above), involves various fields which do not interact with the SM matter. They are viable candidates for non-interacting DM as well as dark energy (see the detailed analysis in \cite{gravity2}). The non-interacting NP fields can therefore contribute to the total DM distribution in the Universe. It is, of course, not possible to search for them directly or indirectly. In fact, they do not have to come to equilibrium with the SM matter.
Interestingly, both $\psi_{\mu}$ and the secluded fields in the NP act as extra fields hidden in the SM spectrum. Unlike $\psi_{\mu}$, which reveals itself virtually, the NP singlets remain completely intact. The main implication is that, in DM phenomenology, one must keep in mind that there can exist an unobservable, undetectable component of the DM \cite{gravity2}.
\section{Conclusion and Outlook}
In this work we have studied a massive spin-3/2 particle $\psi_{\mu}$ obeying the constraint (\ref{eqn4p}) and interacting with the SM via (\ref{int1}). It hides in the SM spectrum as an
inherently off-shell field. We first discussed its collider signatures by studying $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings in detail in Sec. 3. Following this, we turned to its loop effects and determined how it contributes to the big and little hierarchy problems in the SM. Resolving the former by appropriately incorporating gravity, we showed that the Higgs field can inflate the Universe. Finally, we showed that $\psi_{\mu}$ is a viable
DM candidate, which can be indirectly probed via the scattering processes we have analyzed.
The material presented in this work can be extended in various ways. A partial list would include:
\begin{itemize}
\item Determining under what conditions right-handed neutrinos can lift the constraints on $\psi_{\mu}$ from the neutrino masses,
\item Improving the analyses of $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings by including loop contributions,
\item Simulating $e^{-}e^{+}\rightarrow W^{+}W^{-}$ at the ILC by taking into account planned detector acceptances and collider energies,
\item Performing a feasibility study of the proposed neutrino-Higgs collider associated with $\nu_L h \rightarrow \nu_{L} h$ scattering,
\item Exploring UV-naturalness by including right-handed neutrinos, and determining under what conditions the little hierarchy problem is softened.
\item Including effects of the right-handed neutrinos into Higgs inflation, and determining appropriate parameter space.
\item Giving an in-depth analysis of the dark matter and dark energy by taking into account the spin-3/2 field, right-handed neutrinos and the secluded NP fields.
\item Studying constraints on the masses of NP fields from nucleosynthesis and other processes in the early Universe.
\end{itemize}
We will continue to study the spin-3/2 hidden field starting with some of these points.
{\bf Acknowledgements.}
This work is supported in part by the TUBITAK grant 115F212. We thank the conscientious referee for enlightening comments and suggestions.
\section{Introduction}\label{sec: introduction} The weight
part of generalisations of Serre's conjecture has seen significant
progress in recent years, particularly for (forms of) $\operatorname{GL}_2$. Conjectural
descriptions of the set of Serre weights were made in increasing
generality by \cite{bdj}, \cite{MR2430440} and \cite{GHS}, and cases
of these conjectures were proved in \cite{geebdj} and
\cite{geesavitttotallyramified}. Most recently, significant progress
was made towards completely establishing the conjecture for rank two
unitary groups in \cite{blggU2}. We briefly recall this result. Let
$p>2$ be prime, let $F$ be a CM field, and let
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ be a modular representation (see
\cite{blggU2} for the precise definition of ``modular'', which is in
terms of automorphic forms on compact unitary groups). There is a
conjectural set $W^?(\bar{r})$ of Serre weights in which $\bar{r}$ is
predicted to be modular, which is defined in Section \ref{sec: serre
weight definitions} below, following \cite{GHS}. Then the main
result of \cite{blggU2} is that under mild technical hypotheses,
$\bar{r}$ is modular of every weight in $W^?(\bar{r})$.
It remains to show that if $\bar{r}$ is modular of some weight, then
this weight is contained in $W^?(\bar{r})$. It had been previously
supposed that this was the easier direction; indeed, just as in the
classical case, the results of
\cite{blggU2} reduce the weight part of Serre's conjecture for these
unitary groups to a purely local problem in $p$-adic Hodge
theory. However, this problem has proved to be difficult,
and so far only fragmentary results are
known. In the present paper we resolve the problem in the totally
ramified case, so that in combination with \cite{blggU2} we resolve
the weight part of Serre's conjecture in this case, proving the
following Theorem (see Theorem \ref{thm: the main result, modular if
and only if predicted}).
\begin{ithm}
\label{thm: intro: the main result, modular if and only if predicted}Let
$F$ be an imaginary CM field with maximal totally real subfield
~$F^+$, and suppose that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
that $\zeta_p\notin F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that
$p>2$, and that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible
modular representation with split ramification such that
$\bar{r}(G_{F(\zeta_p)})$ is adequate. Assume that for each place $w|p$
of $F$, $F_w/\Qp$ is totally ramified.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre weight. Then
$a_w\in W^?(\bar{r}|_{G_{F_w}})$ if and only if $\bar{r}$ is modular of
weight $a$.
\end{ithm}(See the body of the paper, especially Section~\ref{ss:global}, for any unfamiliar notation and
terminology.) While \cite{blggU2} reduced this result to a purely
local problem, our methods are not purely local; in fact we use the
main result of \cite{blggU2}, together with potential automorphy
theorems, as part of our proof.
In the case that $\bar{r}|_{G_{F_w}}$ is semisimple for each place
$w|p$, the result was established (in a slightly different setting) in
\cite{geesavitttotallyramified}. The method of proof was in part
global, making use of certain potentially Barsotti-Tate lifts to
obtain conditions on $\bar{r}|_{G_{F_w}}$. We extend this analysis in
the present paper to the case that $\bar{r}|_{G_{F_w}}$ is reducible but
non-split,
obtaining conditions on the extension classes that can occur; we show
that (other than in one exceptional case) they lie in a certain set $L_{\operatorname{flat}}$, defined in terms of finite
flat models.
In the case that $\bar{r}|_{G_{F_w}}$ is reducible the definition of
$W^?$ also depends on the extension class; it is required to lie in
a set $L_{\operatorname{crys}}$, defined in terms of reducible crystalline lifts with
specified Hodge-Tate weights. To complete the proof, one must show
that $L_{\operatorname{crys}}=L_{\operatorname{flat}}$. An analogous result was proved in generic
unramified cases in section 3.4 of \cite{geebdj} by means of explicit
calculations with Breuil modules; our approach here is less direct,
but has the advantage of working in non-generic cases, and requires
far less calculation.
We use a global argument to show that
$L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$. Given a class in $L_{\operatorname{crys}}$, we use potential
automorphy theorems to realise the corresponding local representation
as part of a global modular representation, and then apply the main
result of \cite{blggU2} to show that this representation is modular of
the expected weight. Standard congruences between automorphic forms
then show that this class is also contained in $L_{\operatorname{flat}}$.
To prove the converse inclusion, we make a study of different finite
flat models to show that $L_{\operatorname{flat}}$ is contained in a vector space of
some dimension $d$. A standard calculation shows that $L_{\operatorname{crys}}$
contains a space of dimension $d$, so equality follows. As a
byproduct, we show that both $L_{\operatorname{flat}}$ and $L_{\operatorname{crys}}$ are vector
spaces. We also show that various spaces defined in terms of
crystalline lifts are independent of the choice of lift (see Corollary
\ref{cor: independence of lift for H^1_f}). The analogous property was
conjectured in the unramified case in \cite{bdj}.
It is natural to ask whether our methods could be extended to handle
the general case, where $F_w/\Qp$ is an arbitrary
extension. Unfortunately, this does not seem to be the case, because
in general the connection between being modular of some weight and
having a potentially Barsotti-Tate lift of some type is less direct. We expect that our methods could be used to reprove the results of
section 3.4 of \cite{geebdj}, but we do not see how to extend them to
cover the unramified case completely.
We now explain the structure of the paper. In Section \ref{sec: serre
weight definitions} we recall the definition of~$W^?$, and the
global results from \cite{blggU2} that we will need. In Section \ref{sec:local to
global} we recall a potential automorphy result from \cite{geekisin}, allowing us to
realise a local mod $p$ representation globally. Section \ref{sec:
congruences to weight 0} contains the definitions of the spaces
$L_{\operatorname{crys}}$ and $L_{\operatorname{flat}}$ and the proof that $L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$, and in
Section \ref{sec: finite flat
models} we carry out the necessary calculations with Breuil modules
to prove our main local results. Finally, in section \ref{sec: global
consequences} we combine our local results with the techniques of
\cite{geesavitttotallyramified} and the main result of \cite{blggU2}
to prove Theorem \ref{thm: intro: the main result, modular if and only if predicted}.
\subsection{Notation}If $M$ is a field, we let $G_M$ denote its
absolute Galois group. Let~$\epsilon$ denote the $p$-adic cyclotomic
character, and $\bar{\epsilon}$ the mod $p$ cyclotomic character.
If~$M$ is a global field and $v$ is a place of $M$, let $M_v$ denote
the completion of $M$ at $v$. If
~$M$ is a finite extension of $\mathbb{Q}_l$ for some $l$, we write $I_M$
for the inertia subgroup of~$G_M$. If $R$ is a local ring we write
$\mathfrak{m}_{R}$ for the maximal ideal of $R$.
Let $K$ be a finite extension of $\Qp$, with ring of integers $\mathcal{O}_K$
and residue field~$k$. We write ${\operatorname{Art}}_K:K^\times\to W_K^{{\operatorname{ab}}}$ for
the isomorphism of local class field theory, normalised so that
uniformisers correspond to geometric Frobenius elements. For each $\sigma\in {\operatorname{Hom}}(k,{\overline{\F}_p})$ we
define the fundamental character $\omega_{\sigma}$ corresponding
to~$\sigma$ to be the composite $$\xymatrix{I_K \ar[r] & W_K^{{\operatorname{ab}}} \ar[r]^{{\operatorname{Art}}_K^{-1}} &
\mathcal{O}} \newcommand{\calO}{\mathcal{O}_{K}^{\times}\ar[r] & k^{\times}\ar[r]^{\sigma} &
{\overline{\F}_p}^{\times}.}$$
In the case that $k\cong{\F_p}$, we will sometimes write $\omega$ for
$\omega_\sigma$. Note that in this case we have $\omega^{[K:\Qp]}=\bar{\epsilon}$.
We fix an algebraic closure $\overline{{K}}$ of $K$. If $W$ is a de Rham representation of $G_K$ over
$\overline{{\Q}}_p$ and $\tau$ is an embedding $K \hookrightarrow \overline{{\Q}}_p$ then the multiset
$\operatorname{HT}_\tau(W)$ of Hodge-Tate weights of $W$ with respect to $\tau$ is
defined to contain the integer $i$ with multiplicity $$\dim_{\overline{{\Q}}_p} (W
\otimes_{\tau,K} \widehat{\overline{{K}}}(-i))^{G_K},$$ with the usual notation
for Tate twists. Thus for example
$\operatorname{HT}_\tau(\epsilon)=\{ 1\}$.
\section{Serre weight conjectures: definitions}\label{sec: serre
weight definitions}\subsection{Local definitions}We begin by recalling some
generalisations of the weight part of Serre's conjecture. We begin
with some purely local definitions. Let $K$ be a finite totally ramified extension of
$\Qp$ with absolute ramification index $e$, and let $\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous
representation.
\begin{defn}
A \emph{Serre weight} is an irreducible ${\overline{\F}_p}$-representation of
$\operatorname{GL}_2({\F_p})$. Up to isomorphism, any such representation is of the
form \[F_a:=\det{}^{a_{2}}\otimes\operatorname{Sym}^{a_{1}-a_{2}}{\mathbb{F}}_p^2\otimes_{{\mathbb{F}}_p}{\overline{\F}_p}\]
where $0\le a_{1}-a_{2}\le p-1$. We also use the term Serre weight
to refer to the pair $a = (a_1,a_2)$.
\end{defn}
We say that two Serre weights $a$ and $b$ are \emph{equivalent} if and only if
$F_a\cong F_b$ as representations of $\operatorname{GL}_2({\F_p})$. This is equivalent
to demanding that we
have $a_{1}-a_{2}=b_{1}-b_{2}$ and $a_2\equiv b_2\pmod{p-1}$.
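For example, for any $p>2$ the Serre weights $(3,1)$ and $(p+2,p)$ are
equivalent: both give rise to
$\det{}^{1}\otimes\operatorname{Sym}^{2}{\mathbb{F}}_p^2\otimes_{{\mathbb{F}}_p}{\overline{\F}_p}$,
because $\det{}^{p}=\det{}$ as a character of $\operatorname{GL}_2({\F_p})$.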
We write ${\mathbb{Z}}^2_+$ for the set of pairs of integers $(n_1,n_2)$ with
$n_1\ge n_2$, so that a Serre weight $a$ is by definition an element
of ${\mathbb{Z}}^2_+$. We say that an element
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ is a \emph{lift} of a weight
$a\in{\mathbb{Z}}^2_+$ if there is an element $\tau\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ such that
$\lambda_{\tau}=a$, and for all other $\tau'\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ we have
$\lambda_{\tau'}=(0,0)$.
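For example, if $K=\Qp$ then ${\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ is a singleton and
the only lift of $a$ is $a$ itself.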
\begin{defn}
\label{defn: Galois representation of Hodge type some weight}Let
$K/\Qp$ be a finite extension, let
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$, and let
$\rho:G_K\to\operatorname{GL}_2({\overline{\Q}_p})$ be a de Rham representation. Then we say
that $\rho$ has \emph{Hodge type} $\lambda$ if for each
$\tau\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ we have $\operatorname{HT}_\tau(\rho)=\{\lambda_{\tau,1}+1,\lambda_{\tau,2}\}$.
\end{defn}
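For instance, $\rho$ has Hodge type $0$ (that is, Hodge type $\lambda$ with
$\lambda_\tau=(0,0)$ for all $\tau$) precisely when
$\operatorname{HT}_\tau(\rho)=\{1,0\}$ for every $\tau$; for potentially
crystalline $\rho$ this is the familiar potentially Barsotti-Tate condition.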
Following \cite{GHS} (which in turn follows \cite{bdj} and \cite{MR2430440}), we define an explicit
set of Serre weights $W^?(\rhobar)$.
\begin{defn}
\label{defn: W? niveau 1}If $\rhobar$ is reducible, then a Serre
weight $a\in{\mathbb{Z}}^2_+$ is in $W^?(\rhobar)$ if
and only if $\rhobar$ has a crystalline lift of the
form \[ \begin{pmatrix}\psi_1&*\\ 0& \psi_2
\end{pmatrix}\] which has Hodge type $\lambda$ for some lift
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ of $a$. In particular, if $a\in W^?(\rhobar)$ then by Lemma 6.2 of \cite{geesavitttotallyramified} it is necessarily the case that there is a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$ and an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix} \omega^{\delta}
\prod_{ \sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta} \prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}} \end{pmatrix}.\]
\end{defn}
We remark that while it may seem strange to consider the single
element set ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})$, this notation will be convenient for us.
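For orientation, in the case $e=1$ (so that $K=\Qp$ and necessarily
$\delta=0$) the condition on $\rhobar|_{I_K}$ in Definition \ref{defn: W?
niveau 1} reduces to \[\rhobar|_{I_K}\cong
\begin{pmatrix}
\omega^{a_{1}+1}&*\\0&\omega^{a_{2}}
\end{pmatrix}
\quad\text{or}\quad
\begin{pmatrix}
\omega^{a_{2}}&*\\0&\omega^{a_{1}+1}
\end{pmatrix},\]
according to whether $J$ is the single embedding or is empty.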
\begin{defn}
\label{defn: W? niveau 2}
Let $K'$ denote the quadratic unramified
extension of $K$ inside~$\overline{{K}}$, with residue field
$k'$ of order $p^2$.
If $\rhobar$ is irreducible, then a Serre
weight $a\in{\mathbb{Z}}^2_+$ is in $W^?(\rhobar)$ if
and only if there is a subset $J\subset{\operatorname{Hom}}(k',{\overline{\F}_p})$ of size $1$,
and an integer $0\le
\delta\le e-1$ such that if we write
${\operatorname{Hom}}(k',{\overline{\F}_p})=J\coprod J^c$, then \[\rhobar|_{I_K}\cong
\begin{pmatrix}\prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1+\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}+e-1-\delta}&0\\ 0& \prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1+\delta}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}+e-1-\delta}
\end{pmatrix}.\]
\end{defn}
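For example, when $e=1$ (so $\delta=0$), writing $J=\{\sigma\}$ and
$\omega_{\sigma'}=\omega_\sigma^p$ for the other embedding $\sigma'$ of $k'$,
the condition in Definition \ref{defn: W? niveau 2} reads
$\rhobar|_{I_K}\cong\omega_\sigma^{(a_{1}+1)+pa_{2}}\oplus\omega_\sigma^{p(a_{1}+1)+a_{2}}$,
the usual description in niveau two.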
We remark that by Lemma 4.1.19 of \cite{blggU2}, if
$a\in W^?(\rhobar)$ and $\rhobar$ is irreducible then
$\rhobar$ necessarily has a crystalline lift of Hodge type $\lambda$ for any lift
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ of $a$. Note also that if $a$
and $b$ are equivalent and $a\in W^?(\rhobar)$ then $b\in W^?(\rhobar)$.
\begin{remark}\label{rem: conjectured weights independent of
unramified twist}
Note that if $\thetabar: G_K\to{\overline{\F}_p}^\times$ is an unramified character, then
$W^?(\bar{r})=W^?(\bar{r}\otimes\thetabar)$. \end{remark}
\subsection{Global conjectures}\label{ss:global} The point of the local definitions
above is to allow us to formulate global Serre weight
conjectures. Following \cite{blggU2}, we work with rank two unitary
groups which are compact at infinity. As we will not need to make
any arguments that depend on the particular definitions made in
\cite{blggU2}, and our main results are purely local, we simply
recall some notation and basic properties of the definitions,
referring the reader to \cite{blggU2} for precise formulations.
We emphasise that our conventions for Hodge-Tate weights are the
opposite of those of \cite{blggU2}; for this reason, we must introduce
a dual into the definitions.
Fix an imaginary CM field $F$, and let $F^+$ be its maximal totally
real subfield. We assume that each prime of $F^+$ over $p$ has residue
field ${\mathbb{F}}_p$ and splits in $F$. We define a global notion of Serre weight by taking a
product of local weights in the following way.
\begin{defn}
\label{defn:global-serre-wts}
Let $S$ denote the set
of places of $F$ above $p$. If $w \in S$ lies over a place $v$ of
$F^+$, write $v = w w^c$. Let
$({\mathbb{Z}}^2_+)_0^{S}$ denote the subset of
$({\mathbb{Z}}^2_+)^{S}$ consisting of elements $a = (a_w)_{w \in S}$ such
that $a_{w,1}+a_{w^c,2}=0$ for all $w\in S$. We say that an
element $a\in({\mathbb{Z}}^2_+)_0^{S}$ is a \emph{Serre
weight} if for each $w|p$ we
have \[p-1\ge a_{w,1}-a_{w,2}.\]
\end{defn}
Let $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous irreducible
representation. Definition 2.1.9 of \cite{blggU2} states what it
means for $\bar{r}$ to be modular, and more precisely for $\bar{r}$ to be
modular of some Serre weight $a$; roughly speaking, $\bar{r}$ is modular
of weight $a$ if there is a cohomology class on some unitary group
with coefficients in the local system corresponding to $a$ whose
Hecke eigenvalues are determined by the characteristic polynomials of
$\bar{r}$ at Frobenius elements. Since our conventions for Hodge-Tate
weights are the opposite of those of \cite{blggU2}, we make the
following definition.
\begin{defn}
Suppose that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous
irreducible modular representation. Then we say that $\bar{r}$ \emph{is modular
of weight} $a\in({\mathbb{Z}}^2_+)_0^S$ if
$\bar{r}^\vee$ is modular of weight $a$ in the sense of Definition 2.1.9
of \cite{blggU2}.
\end{defn} We globalise the definition of the set
$W^?(\rhobar)$ in the following natural fashion.
\begin{defn}
If $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous representation, then
we define $W^?(\bar{r})$ to be the set of Serre weights
$a\in({\mathbb{Z}}^2_+)_0^S$ such that for each
place $w|p$ the corresponding Serre weight
$a_w\in{\mathbb{Z}}^2_+$ is an element of
$W^?(\bar{r}|_{G_{F_w}})$.
\end{defn}
One then has the following conjecture.
\begin{conj}\label{conj: global Serre weight explicit conjecture}
Suppose that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous irreducible
modular representation, and that
$a\in({\mathbb{Z}}^2_+)_0^S$ is a Serre
weight. Then $\bar{r}$ is modular of weight $a$ if and only if
$a\in W^?(\bar{r})$.
\end{conj}
If $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous representation,
then we say that $\bar{r}$ has \emph{split ramification} if any finite
place of $F$ at which $\bar{r}$ is ramified is split over $F^+$. The
following result is
Theorem 5.1.3 of \cite{blggU2}, one of the
main theorems of that paper, in the special case where $F_w/\Qp$ is
totally ramified for all $w|p$. (Note that in \cite{blggU2}, the set
of weights $W^?(\bar{r})$ is referred to as
$W^{\operatorname{explicit}}(\bar{r})$.)
\begin{thm}
\label{thm: explicit local lifts implies Serre
weight}Let $F$ be an imaginary CM field with maximal totally real subfield~$F^+$. Assume that $\zeta_p\notin F$, that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ has residue field ${\mathbb{F}}_p$ and splits completely in $F$,
and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular
representation with split ramification. Assume that $\bar{r}(G_{F(\zeta_p)})$ is adequate.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a
Serre weight. Assume that $a\in W^?(\bar{r})$. Then $\bar{r}$ is
modular of weight $a$.
\end{thm}
Here \emph{adequacy} is a group-theoretic condition, introduced in
\cite{jack}, that for subgroups of $\operatorname{GL}_2({\overline{\F}_p})$ with $p > 5$ is
equivalent to the usual condition that $\bar{r}|_{G_{F(\zeta_p)}}$ is irreducible. For a precise
definition we refer the reader to Definition A.1.1 of \cite{blggU2}.
We also remark that the hypotheses that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
and that $[F^+:{\mathbb{Q}}]$ is even, are in fact part of the definition of
``modular'' made in \cite{blggU2}.
Theorem~\ref{thm: explicit local lifts implies Serre
weight} establishes one direction of Conjecture \ref{conj: global Serre
weight explicit conjecture}, and we are left with the problem of
``elimination,'' i.e., the problem of proving that if $\bar{r}$ is
modular of weight $a$, then $a\in W^?(\bar{r})$.
We believe that this problem should have a purely local resolution,
as we now explain.
The key point is the relationship between being
modular of weight $a$, and the existence of certain de Rham lifts of
the local Galois representations $\bar{r}|_{G_{F_w}}$, $w|p$. The link
between these properties is provided by local-global compatibility
for the Galois representations associated to the automorphic
representations under consideration; rather than give a detailed
development of this connection, for which see \cite{blggU2}, we
simply summarise the key results from \cite{blggU2} that we will
use. The
following is Corollary 4.1.8 of \cite{blggU2}.
\begin{prop}
\label{prop: modular of some weight implies crystalline lifts
exist}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ has residue field
${\mathbb{F}}_p$ and splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let
$a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre
weight. If $\bar{r}$ is modular of weight $a$, then for each place
$w|p$ of $F$, there is a crystalline representation
$\rho_w:G_{F_w}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting $\bar{r}|_{G_{F_w}}$, such
that $\rho_w$ has Hodge type $\lambda_w$ for some lift
$\lambda_w\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(F_w,{\overline{\Q}_p})}$ of $a$.
\end{prop}
We stress that Proposition~\ref{prop: modular of some weight implies crystalline lifts
exist} does not already complete the proof of Conjecture \ref{conj: global Serre
weight explicit conjecture}, because the representation $\rho_w$
may be irreducible (compare with Definition~\ref{defn: W? niveau 1}).
However, in light of this result, it is natural to make the following
purely local conjecture, which together with Theorem \ref{thm:
explicit local lifts implies Serre weight} would essentially resolve
Conjecture \ref{conj: global Serre weight explicit conjecture}.
\begin{conj}
\label{conj: crystalline lift implies explicit crystalline lift}
Let $K/\Qp$ be a finite totally ramified extension, and let
$\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Let
$a\in{\mathbb{Z}}^2_+$ be a Serre weight, and suppose
that for some lift $\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$, there is
a continuous crystalline representation
$\rho:G_{K}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting $\rhobar$, such
that $\rho$ has Hodge type $\lambda$.
Then $a\in W^?(\rhobar)$.
\end{conj}
We do not know how to prove this conjecture, and we do not directly
address the conjecture in the rest of this paper. Instead, we
proceed more indirectly. Proposition \ref{prop: modular of some
weight implies crystalline lifts exist} is a simple consequence of
lifting automorphic forms of weight $a$ to forms of weight
$\lambda$; we may also obtain non-trivial information by lifting to
forms of weight $0$ and non-trivial type. In this paper, we will
always consider principal series types. Recall that if $K/\Qp$ is a finite extension, the
\emph{inertial type} of a potentially semistable Galois
representation $\rho:G_K\to\operatorname{GL}_n({\overline{\Q}_p})$ is the restriction to
$I_K$ of the corresponding Weil-Deligne representation. In this
paper we normalise this definition as in the appendix to
\cite{MR1639612}, so that for example the inertial type of a finite
order character is just the restriction to inertia of that
character.
\begin{prop}
\label{prop: modular of some weight implies potentially BT lifts
exist}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ has residue field
${\F_p}$ and splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a
Serre weight. If $\bar{r}$ is modular of weight $a$, then for each
place $w|p$ of $F$, there is a continuous potentially semistable
representation $\rho_w:G_{F_w}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting
$\bar{r}|_{G_{F_w}}$, such that $\rho_w$ has Hodge type $0$ and
inertial type $\omegat^{a_1}\oplus\omegat^{a_2}$. (Here $\omegat$ is
the Teichm\"uller lift of $\omega$.) Furthermore, $\rho_w$ is
potentially crystalline unless $a_{1}-a_{2}=p-1$ and $\bar{r}|_{G_{F_w}}\cong
\begin{pmatrix}
\chibar\epsilonbar&*\\0&\chibar
\end{pmatrix}
$ for some character $\chibar$.
\end{prop}
\begin{proof}
This may be proved in exactly the same way as Lemma 3.4 of
\cite{geesavitttotallyramified}, working in the setting of
\cite{blggU2} (cf. the proof of Lemma 3.1.1 of \cite{blggU2}). Note
that if $\rho_w$ is not potentially crystalline, then it is
necessarily a twist of an extension of the trivial character by the
cyclotomic character.
\end{proof}
\section{Realising local representations globally}\label{sec:local to
global}\subsection{}We now recall a result from the forthcoming paper \cite{geekisin}
which allows us to realise local representations globally, in order to
apply the results of Section~\ref{ss:global} in a purely local
setting.
\begin{thm}
\label{thm: the final local-to-global result} Suppose that $p>2$,
that $K/\Qp$ is a finite extension, and let
$\bar{r}_K:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Then
there is an imaginary CM field $F$ and a continuous irreducible
representation $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ such that, if $F^+$ denotes the maximal totally real subfield of $F$,
\begin{itemize}
\item each place $v|p$ of $F^+$ splits in $F$ and has $F^+_v\cong
K$,
\item for each place $v|p$ of $F^+$, there is a place ${\widetilde{{v}}}$ of $F$
lying over $v$ with $\bar{r}|_{G_{F_{\widetilde{{v}}}}}$ isomorphic to an
unramified twist of $\bar{r}_K$,
\item $\zeta_p\notin F$,
\item $\bar{r}$ is unramified outside of $p$,
\item $\bar{r}$ is modular in the sense of \cite{blggU2}, and
\item $\bar{r}(G_{F(\zeta_p)})$ is adequate.
\end{itemize}
\end{thm}
\begin{proof}We sketch the proof; the full details will appear in
\cite{geekisin}. The argument is a straightforward application of
potential modularity techniques. First, an application of
Proposition 3.2 of \cite{frankII} supplies a totally real field $L^+$ and a continuous irreducible
representation $\bar{r}:G_{L^+}\to\operatorname{GL}_2({\overline{\F}_p})$ such that
\begin{itemize}
\item for each place $v|p$ of $L^+$, $L^+_v\cong K$ and
$\bar{r}|_{G_{L^+_v}}\cong\bar{r}_K$,
\item for each place $v|\infty$ of $L^+$, $\det\bar{r}(c_v)=-1$, where
$c_v$ is a complex conjugation at $v$, and
\item there is a non-trivial finite extension ${\mathbb{F}}/{\mathbb{F}}_p$ such that
$\bar{r}(G_{L^+})=\operatorname{GL}_2({\mathbb{F}})$.
\end{itemize}
By a further base change one can also arrange that $\bar{r}|_{G_{L^+_v}}$ is unramified
at each finite place $v\nmid p$ of $L^+$.
By Lemma 6.1.6 of \cite{blggord} and the proof of
Proposition 7.8.1 of \cite{0905.4266}, $\bar{r}_K$ admits a potentially
Barsotti-Tate lift, and one may then apply Proposition 8.2.1 of
\cite{0905.4266} to deduce that there is a finite totally real Galois
extension $F^+/L^+$ in which all primes of $L^+$ above $p$ split
completely, such that $\bar{r}|_{G_{F^+}}$ is modular in the sense
that it is congruent to the Galois representation associated to some
Hilbert modular form of parallel weight $2$.
By the theory of base change between $\operatorname{GL}_2$ and unitary groups
(\textit{cf.} section 2 of \cite{blggU2}), it now suffices to show that
there is a totally imaginary quadratic extension $F/F^+$ and a
character $\thetabar:G_F\to{\overline{\F}_p}^\times$ such that
$\bar{r}|_{G_F}\otimes\thetabar$ has multiplier~$\epsilonbar^{-1}$ and
such that for each place $v|p$ of $F^+$, there is a place ${\widetilde{{v}}}$ of $F$
lying over $v$ with $\thetabar|_{G_{F_{{\widetilde{{v}}}}}}$ unramified. The
existence of such a character is a straightforward exercise in class
field theory, and follows for example from Lemma 4.1.5 of \cite{cht}.
\end{proof}
\section{Congruences}\label{sec: congruences to weight 0}\subsection{} Having realised a local mod $p$
representation globally, we can now use the results explained in
Section \ref{sec: serre
weight definitions} to deduce non-trivial local consequences.
\begin{thm}
\label{thm: explicit weight implies pot BT lift}Let $p>2$ be prime,
let $K/\Qp$ be a finite totally ramified extension, and let
$\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Let
$a\in W^?(\rhobar)$ be a Serre weight. Then there is a continuous
potentially semistable representation $\rho:G_K\to\operatorname{GL}_2({\overline{\Q}_p})$
lifting $\rhobar$, such that $\rho$ has Hodge type $0$ and inertial
type $\omegat^{a_1}\oplus\omegat^{a_2}$. Furthermore, $\rho$ is
potentially crystalline unless $a_{1}-a_{2}=p-1$ and $\rhobar\cong
\begin{pmatrix}
\chibar\epsilonbar&*\\0&\chibar
\end{pmatrix}$ for some character $\chibar$.
\end{thm}
\begin{proof} By Theorem \ref{thm: the final local-to-global result}, there is
an imaginary CM field $F$ and a modular representation
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ such that
\begin{itemize}
\item for each place $v|p$ of $F^+$, $v$ splits in $F$ as
${\widetilde{{v}}}\tv^c$, and we have $F_{\widetilde{{v}}}\cong K$, and $\bar{r}|_{G_{F_{\widetilde{{v}}}}}$ is
isomorphic to an unramified twist of $\rhobar$,
\item $\bar{r}$ is unramified outside of $p$,
\item $\zeta_p\notin F$, and
\item $\bar{r}(G_{F(\zeta_p)})$ is adequate.
\end{itemize}Now, since the truth of the result to be proved is
obviously unaffected by making an unramified twist (if $\rhobar$ is
replaced by a twist by an unramified character $\overline{\theta}$, one may
replace $\rho$ by a twist by an unramified
lift of $\overline{\theta}$), we may without loss of
generality suppose that $\bar{r}|_{G_{F_{\widetilde{{v}}}}}\cong\rhobar$ for each place $v|p$ of $F^+$. Let
$b\in({\mathbb{Z}}^2_+)_0^{S}$ be the Serre weight such that
$b_{\widetilde{{v}}}=a$ for each place $v|p$ of $F^+$, where $S$ denotes the set of
places of $F$ above $p$. By Remark \ref{rem: conjectured weights independent of
unramified twist}, $b\in W^?(\bar{r})$. Then by Theorem \ref{thm: explicit local lifts implies Serre
weight}, $\bar{r}$ is modular of weight $b$. The result now follows
from Proposition \ref{prop: modular of some weight implies potentially BT lifts
exist}.
\end{proof}
\subsection{Spaces of crystalline extensions}\label{subsec: H^1_f}We
now specialise to the setting of Definition \ref{defn: W? niveau
1}. As usual, we let $K/\Qp$ be a finite totally ramified extension with residue
field $k={\F_p}$, ramification index $e$, and uniformiser $\pi$. We fix a Serre weight $a\in{\mathbb{Z}}^2_+$. We fix a
continuous representation $\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$, and we assume
that there is:
\begin{itemize}
\item a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and
\item an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix}
\omega^\delta\prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}} \end{pmatrix}.\]
\end{itemize}
Note that in general there might be several choices
of $J$, $\delta$. Fix such a choice for the
moment. Consider pairs of characters $\chi_1$,
$\chi_2:G_K\to{\overline{\Q}_p}^\times$ with the properties that:
\begin{enumerate}
\item $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$,
\item $\chi_1$ and $\chi_2$ are crystalline, and
\item if we let $S$ denote the set of embeddings
${\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$, then either
\begin{enumerate}[(i)]
\item $J$ is non-empty, and there is one embedding $\tau\in
S$ with $\operatorname{HT}_\tau(\chi_1)=a_{1}+1$ and
$\operatorname{HT}_\tau(\chi_2)=a_{2}$, there are $\delta$ embeddings
$\tau\in S$ with $\operatorname{HT}_\tau(\chi_1)=1$ and
$\operatorname{HT}_\tau(\chi_2)=0$, and for the remaining $e-1-\delta$
embeddings $\tau\in S$ we have $\operatorname{HT}_\tau(\chi_1)=0$ and
$\operatorname{HT}_\tau(\chi_2)=1$, or
\item $J=\emptyset$, and there is one embedding $\tau\in
S$ with $\operatorname{HT}_\tau(\chi_1)=a_{2}$ and
$\operatorname{HT}_\tau(\chi_2)=a_{1}+1$, there are $\delta$ embeddings
$\tau\in S$ with $\operatorname{HT}_\tau(\chi_1)=1$ and
$\operatorname{HT}_\tau(\chi_2)=0$, and for the remaining $e-1-\delta$
embeddings $\tau\in S$ we have $\operatorname{HT}_\tau(\chi_1)=0$ and
$\operatorname{HT}_\tau(\chi_2)=1$.
\end{enumerate}
\end{enumerate}
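For instance, when $e=1$ and $J$ is non-empty, these conditions simply say
that $\chi_1$ is a crystalline lift of $\chibar_1$ with Hodge-Tate weight
$a_{1}+1$ and $\chi_2$ is a crystalline lift of $\chibar_2$ with Hodge-Tate
weight $a_{2}$.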
Note that these properties do not specify the characters $\chi_1$ and
$\chi_2$ uniquely, even in the unramified case, as one is always free
to twist either character by an unramified character which is trivial
mod $p$. We point out that the Hodge type of any de Rham extension of
$\chi_2$ by $\chi_1$
will be a lift of $a$. Conversely, by Lemma~6.2 of \cite{geesavitttotallyramified} any $\chi_1,\chi_2$ satisfying
(1) and (2) such that the Hodge type of $\chi_1 \oplus \chi_2$ is a
lift of $a$ will satisfy (3) for a valid choice of $J$ and $\delta$
(unique unless $a=0$).
Suppose now that we have fixed two such characters $\chi_1$ and
$\chi_2$, and we now allow the (line corresponding to the) extension
class of $\rhobar$ in ${\operatorname{Ext}}_{G_K}(\chibar_2,\chibar_1)$ to vary. We
naturally identify ${\operatorname{Ext}}_{G_K}(\chibar_2,\chibar_1)$ with
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ from now on.
\begin{defn}
Let $L_{\chi_1,\chi_2}$ be the subset of
$H^1(G_K,\chibar_1\chibar_2^{-1})$ such that the corresponding
representation $\rhobar$ has a crystalline lift $\rho$ of the
form \[
\begin{pmatrix}
\chi_1&*\\0&\chi_2
\end{pmatrix}.\]
\end{defn}
We have the following variant of Lemma 3.12 of \cite{bdj}.
\begin{lem}
\label{lem: dimension of H^1_f spaces} $L_{\chi_1,\chi_2}$ is an
${\overline{\F}_p}$-vector subspace of $ H^1(G_K,\chibar_1\chibar_2^{-1})$ of
dimension $|J|+\delta$, unless
$\chibar_1=\chibar_2$, in which case it has dimension
$|J|+\delta+1$.
\end{lem}
\begin{proof} Let $\chi=\chi_1\chi_2^{-1}$.
Recall that
$H^1_f(G_K,\overline{\Z}_p(\chi))$ is the preimage of
$H^1_f(G_K,{\overline{\Q}_p}(\chi))$ under the natural map
$\eta : H^1(G_K,\overline{\Z}_p(\chi))\to H^1(G_K,{\overline{\Q}_p}(\chi))$, so that
$L_{\chi_1,\chi_2}$ is the image of $H^1_f(G_K,\overline{\Z}_p(\chi))$ in
$H^1(G_K,\chibar)$. The kernel of $\eta$ is precisely the torsion
part of $H^1_f(G_K,\overline{\Z}_p(\chi))$, which (since $\chi\neq 1$,
e.g. by examining Hodge-Tate weights) is non-zero if and only if
$\chibar=1$, in which case it has the form $\kappa^{-1} \overline{\Z}_p/\overline{\Z}_p$
for some $\kappa \in \mathfrak{m}_{\overline{\Z}_p}$.
By
Proposition 1.24(2) of \cite{nekovar} we see that $\dim_{\overline{\Q}_p}
H^1_f(G_K,{\overline{\Q}_p}(\chi))=|J|+\delta$, again using $\chi \neq 1$. Since
$H^1(G_K,\overline{\Z}_p(\chi))$ is a finitely generated $\overline{\Z}_p$-module,
the result follows.
\end{proof}
\begin{defn}
\label{defn: union of H^1_f subspaces}If $\chibar_1$ and
$\chibar_2$ are fixed, we define $L_{\operatorname{crys}}$ to be the subset of
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ given by the union of the $L_{\chi_1,\chi_2}$
over all $\chi_1$ and $\chi_2$ as above.
\end{defn}
Note that $L_{\operatorname{crys}}$ is a union of subspaces of possibly varying
dimensions, and as such it is not clear that $L_{\operatorname{crys}}$ is itself a
subspace. Note also that the representations $\rhobar$ corresponding
to elements of $L_{\operatorname{crys}}$ are by definition precisely those for which
$F_a\in W^?(\rhobar)$.
\begin{defn}
\label{defn: H^1_flat subspace}Let $L_{\operatorname{flat}}$ be the subset
of $H^1(G_K,\chibar_1\chibar_2^{-1})$ consisting of classes with the property that
if $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$ is the corresponding representation, then there is a
finite field $k_E \subset {\overline{\F}_p}$ and a
finite flat $k_E$-vector space scheme over $\mathcal{O}_{K(\pi^{1/(p-1)})}$ with
generic fibre
descent data to $K$ of the
form $ \omega^{a_{1}}\oplus\omega^{a_{2}}$
(see Definition~\ref{defn:dd-of-the-form}) whose generic fibre is $\rhobar$.
\end{defn}
\begin{thm}
\label{thm: crystalline extension implies flat}Provided that
$a_{1}-a_{2}\ne p-1$ or that $\chibar_1\chibar_2^{-1}\ne \epsilonbar$,
$L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$.
\end{thm}
\begin{proof}
Take a class in $L_{\operatorname{crys}}$, and consider the corresponding
representation $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$. As remarked above, $F_a\in W^?(\rhobar)$, so by
Theorem \ref{thm: explicit weight implies pot BT lift}, $\rhobar$
has a crystalline lift of Hodge type $0$ and inertial
type \[\omegat^{a_{1}}\oplus\omegat^{a_{2}},\] and this
representation can be taken to have coefficients in the ring of
integers $\mathcal{O}_E$ of a finite
extension $E/\Qp$. Let $\varpi$ be a uniformiser of $\mathcal{O}_E$, and $k_E$
the residue field. Such a representation
corresponds to a $p$-divisible $\mathcal{O}_E$-module with generic fibre descent data, and
taking the $\varpi$-torsion
gives a finite flat $k_E$-vector space scheme with generic fibre descent
data whose generic fibre is $\rhobar$. By Corollary 5.2 of \cite{geesavittquaternionalgebras} this
descent data has the form $\omega^{a_1} \oplus \omega^{a_2}$.
\end{proof}
In the next section we will make calculations with finite flat group
schemes in order to relate $L_{\operatorname{flat}}$ and $L_{\operatorname{crys}}$.
\section{Finite flat models}\label{sec: finite flat
models}\subsection{}We work throughout this section in the following setting:
\begin{itemize}
\item $K/\Qp$ is a finite extension with ramification index $e$,
inertial degree $1$, ring
of integers $\mathcal{O}_K$, uniformiser $\pi$ and residue field ${\F_p}$.
\item $\chibar_1$, $\chibar_2$ are characters
$G_K\to{\overline{\F}_p}^\times$.
\item $a\in{\mathbb{Z}}^2_+$ is a Serre weight.
\item There is a decomposition ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and an integer $0\le
\delta\le e-1$ such that \[\chibar_1|_{I_K}=\omega^\delta\prod_{\sigma\in
J}\omega^{a_{1}+1}\prod_{\sigma\in
J^c}\omega^{a_{2}},\] \[\chibar_2|_{I_K}=\omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega^{a_{1}+1}\prod_{\sigma\in
J}\omega^{a_{2}}.\]
\end{itemize}
Note in particular that $(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$; indeed, since ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})$ is a singleton, in either case the exponents add up to $\delta+(e-1-\delta)+(a_{1}+1)+a_{2}=a_1+a_2+e$.
Let $K_1:=K(\pi^{1/(p-1)})$. Let $k_E$ be a finite extension of ${\F_p}$
such that $\chibar_1,\chibar_2$ are defined over $k_E$; for the moment
$k_E$ will be fixed, but eventually it will be allowed to vary.
We wish to consider the representations $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$ such that there is a finite flat $k_E$-vector space
scheme $\mathcal{G}$ over $\mathcal{O}_{K_1}$ with generic fibre descent data to $K$ of the form
$\omega^{a_1}\oplus\omega^{a_2}$ (see Definition~\ref{defn:dd-of-the-form}), whose generic fibre is
$\rhobar$.
In order to do so, we will work with Breuil modules with descent data
from $K_1$ to~$K$. We
recall the necessary definitions from
\cite{geesavittquaternionalgebras}.
Fix $\pi_1$, a $(p-1)$-st root of $\pi$ in $K_1$. Write
$e'=e(p-1)$. The category $\operatorname{BrMod}_{\operatorname{dd}}$
consists of quadruples $(\mathcal{M},{\operatorname{Fil}}^1
\mathcal{M},\phi_{1},\{\widehat{g}\})$ where:
\begin{itemize}\item $\mathcal{M}$ is a finitely generated free
$k_E[u]/u^{e'p}$-module,
\item ${\operatorname{Fil}}^1 {\mathcal{M}}$ is a $k_E[u]/u^{e'p}$-submodule of ${\mathcal{M}}$ containing $u^{e'}{\mathcal{M}}$,
\item $\phi_{1}:{\operatorname{Fil}}^1{\mathcal{M}}\to{\mathcal{M}}$ is $k_E$-linear and $\phi$-semilinear
(where $\phi:{\F_p}[u]/u^{e'p}\to {\F_p}[u]/u^{e'p}$ is the $p$-th power map)
with image generating ${\mathcal{M}}$ as a $k_E[u]/u^{e'p}$-module, and
\item $\widehat{g}:{\mathcal{M}}\to{\mathcal{M}}$ for each $g\in{\operatorname{Gal}}(K_1/K)$ are additive
bijections that preserve ${\operatorname{Fil}}^1 {\mathcal{M}}$, commute with the $\phi_1$-,
and $k_E$-actions, and satisfy $\widehat{g_1}\circ
\widehat{g_2}=\widehat{g_1\circ g_2}$ for all
$g_1,g_2\in{\operatorname{Gal}}(K_1/K)$; furthermore $\widehat{1}$ is the identity,
and if $a\in k_E$, $m\in{\mathcal{M}}$ then
$\widehat{g}(au^{i}m)=a((g(\pi_1)/\pi_1)^{i})u^{i}\widehat{g}(m)$.\end{itemize}
The category $\operatorname{BrMod}_{\operatorname{dd}}$ is equivalent to the category of finite
flat $k_E$-vector space schemes over $\mathcal{O}_{K_1}$ together with
descent data on the generic fibre from $K_1$ to~$K$
(this equivalence depends on $\pi_1$); see \cite{sav06}, for instance. We obtain the associated
$G_{K}$-representation (which we will refer to as the generic fibre)
of an object of $\operatorname{BrMod}_{\operatorname{dd}}$ via the covariant functor
$T_{{\operatorname{st}},2}^{K}$ (which is defined immediately before Lemma 4.9 of
\cite{MR2137952}).
\begin{defn}
\label{defn:dd-of-the-form}
Let ${\mathcal{M}}$ be an object of $\operatorname{BrMod}_{\operatorname{dd}}$ such that the underlying
$k_E$-module has rank two. We say that the finite flat $k_E$-vector
space scheme corresponding to ${\mathcal{M}}$ \emph{has descent data
of the form} $\omega^{a_1} \oplus \omega^{a_2}$ if ${\mathcal{M}}$ has a basis
$e_1,e_2$ such that $\widehat{g}(e_i) = \omega^{a_i}(g) e_i$. (Here
we abuse notation by identifying an element of $G_K$ with its image
in ${\operatorname{Gal}}(K_1/K)$.)
\end{defn}
We now consider a finite flat group scheme $\mathcal{G}$ with generic fibre descent data as above. By a standard scheme-theoretic
closure argument, $\chibar_1$ corresponds to a finite flat subgroup
scheme with generic fibre descent data
$\mathcal{H}$ of $\mathcal{G}$, so we begin by analysing the possible
finite flat group schemes corresponding to characters.
Suppose now that ${\mathcal{M}}$ is an object of $\operatorname{BrMod}_{\operatorname{dd}}$. The rank
one objects of $\operatorname{BrMod}_{\operatorname{dd}}$ are classified as follows.
\begin{prop} \label{prop:rank one breuil modules} With our fixed choice of uniformiser
$\pi$, every rank one object of $\operatorname{BrMod}_{\operatorname{dd}}$ has the form:
\begin{itemize}
\item ${\mathcal{M}} = (k_E[u]/u^{e'p}) \cdot v $,
\item ${\operatorname{Fil}}^1 {\mathcal{M}} = u^{x(p-1)} {\mathcal{M}}$,
\item $\phi_1( u^{x(p-1)} v) = cv$ for some $c \in k_E^{\times}$, and
\item $\widehat{g}(v) = \omega(g)^kv$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}
where $0 \le x \le e$ and $0 \le k< p-1$ are
integers.
Then $T_{{\operatorname{st}},2}^{K}({\mathcal{M}}) =
\omega^{k + x} \cdot \mathrm{ur}_{c^{-1}}$, where $\mathrm{ur}_{c^{-1}}$ is the
unramified character taking an arithmetic Frobenius element to
$c^{-1}$.\end{prop}
\begin{proof}
This is a special case of Proposition 4.2 and Corollary 4.3 of
\cite{geesavittquaternionalgebras}.
\end{proof}
Let ${\mathcal{M}}$ (or ${\mathcal{M}}(x)$) be the rank one Breuil
module with $k_E$-coefficients and
descent data from $K_1$ to $K$ corresponding to $\mathcal{H}$, and
write ${\mathcal{M}}$ in the form given by Proposition \ref{prop:rank one breuil
modules}. Since $\mathcal{G}$ has descent data of the form
$\omega^{a_1}\oplus\omega^{a_2}$,
we must have $\omega^k \in \{\omega^{a_1},\omega^{a_2}\}$.
\subsection{Extensions} Having determined the rank one characters, we
now go further and compute the possible extension
classes. By a scheme-theoretic closure argument, the Breuil module
$\mathcal{P}$ corresponding to $\mathcal{G}$ is an extension of $\mathcal{N}$ by
$\mathcal{M}$, where $\mathcal{M}$ is as in the previous section, and $\mathcal{N}$ (or
$\mathcal{N}(y)$) is defined
by \begin{itemize}
\item ${\mathcal{N}} = (k_E[u]/u^{e'p}) \cdot w $,
\item ${\operatorname{Fil}}^1 {\mathcal{N}} = u^{y(p-1)} {\mathcal{N}}$,
\item $\phi_1( u^{y(p-1)} w) = dw$ for some $d \in k_E^{\times}$, and
\item $\widehat{g}(w) = \omega(g)^lw$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}
where $0 \le y \le e$ and $0 \le l< p-1$ are
integers. Now, as noted above, the descent data for $\mathcal{G}$ is of the form
$\omega^{a_1}\oplus\omega^{a_2}$, so we must have that either $\omega^k=\omega^{a_1}$
and $\omega^l=\omega^{a_2}$, or $\omega^{k}=\omega^{a_2}$ and $\omega^l=\omega^{a_1}$. Since by definition we have
$(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$, we see from
Proposition \ref{prop:rank one breuil modules} that \[x+y\equiv e\pmod{p-1}.\]
We have the following classification of extensions of $\mathcal{N}$ by $\mathcal{M}$.
\begin{prop}\label{prop: possible extensions of Breuil modules} Every extension of $\mathcal{N}$ by
$\mathcal{M}$ is isomorphic to exactly one of the form
\begin{itemize}
\item $\mathcal{P} = (k_E[u]/u^{e'p}) \cdot v + (k_E[u]/u^{e'p}) \cdot w $,
\item ${\operatorname{Fil}}^1 \mathcal{P} =(k_E[u]/u^{e'p}) \cdot u^{x(p-1)} v +
(k_E[u]/u^{e'p}) \cdot (u^{y(p-1)}w+\lambda v) $,
\item $\phi_1(u^{x(p-1)} v) = cv$, $\phi_1(u^{y(p-1)}w+\lambda v)=dw$,
\item $\widehat{g}(v) =\omega^k(g)v$ and $\widehat{g}(w) =\omega^l(g)w$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}where $\lambda\in u^{\max\{0,(x+y-e)(p-1)\}}k_E[u]/u^{e'p}$
has all nonzero terms of degree congruent to $l-k$ modulo $p-1$, and has all terms
of degree less than $x(p-1)$, unless $\chibar_1=\chibar_2$ and $x\ge y$,
in which case it may additionally have a term of degree $px-y$.
\end{prop}
\begin{proof}
This is a special case of Theorem 7.5 of \cite{MR2004122}, with the
addition of $k_E$-coefficients in place of ${\F_p}$-coefficients. When
$K$ (in the notation of \emph{loc.~cit.}) is totally ramified over $\Qp$, the
proof of \emph{loc.~cit.} is argued in precisely the same manner when
coefficients are added, taking care to note the following changes:
\begin{itemize}
\item Replace Lemma 7.1 of \emph{loc.~cit.} (i.e., Lemma 5.2.2 of
\cite{MR1839918}) with Lemma 5.2.4 of \cite{MR1839918} (with
$k'=k_E$ and $k={\F_p}$ in the notation of that Lemma). In particular
replace $t^l$ with $\phi(t)$ wherever it appears in the proof, where~$\phi$
is the $k_E$-linear endomorphism of $k_E[u]/u^{e'p}$ sending
$u^i$ to $u^{pi}$.
\item Instead of applying Lemma 4.1 of \cite{MR2004122}, note that the
cohomology group
$H^1({\operatorname{Gal}}(K_1/K),k_E[u]/u^{e'p})$ vanishes because ${\operatorname{Gal}}(K_1/K)$ has prime-to-$p$
order while $k_E[u]/u^{e'p}$ has $p$-power order.
\item Every occurrence of $T_{i}^l$ in the proof (for any subscript $i$) should be replaced with
$T_{i}$. In the notation of \cite{MR2004122} the element $\eta$ is
defined when the map $\alpha \mapsto (1-b/a)\alpha$ on $k_E$ is not
surjective, i.e., when $a=b$; we may then take $\eta=1$.
\item The coefficients of $h,t$ are permitted to lie in $k_E$
(i.e., they are not constrained to lie in any particular proper subfield).
\end{itemize}
\end{proof}
Note that the recipe for $\mathcal{P}$ in the statement of
Proposition~\ref{prop: possible extensions of Breuil modules} defines
an extension of $\mathcal{N}$ by $\mathcal{M}$ provided that $\lambda$ lies in $u^{\max\{0,(x+y-e)(p-1)\}}k_E[u]/u^{e'p}$
and has all nonzero terms of degree congruent to $l-k$ modulo $p-1$
(\emph{cf.} the discussion in Section 7 of \cite{MR2004122}). Denote
this Breuil module by $\mathcal{P}(x,y,\lambda)$. Note that $c$ is fixed
while $x$ determines
$k$, since we require $\omega^{k+x} \cdot \mathrm{ur}_{c^{-1}} =
\chibar_1$; similarly $d$ is fixed and $y$ determines $l$. So this notation
is reasonable.
We would like to compare the generic fibres of extensions of different
choices of $\mathcal{M}$ and $\mathcal{N}$. To this end, we have the following
result. Write
$\chibar_1|_{I_K}=\omega^\alpha$, $\chibar_2|_{I_K}=\omega^\beta$.
\begin{prop}
\label{prop: comparing extensions}The Breuil module $\mathcal{P}(x,y,\lambda)$ has the same generic fibre as the Breuil module $\mathcal{P}'$,
where \begin{itemize}
\item $\mathcal{P}' = (k_E[u]/u^{e'p}) \cdot v' + (k_E[u]/u^{e'p}) \cdot w' $,
\item ${\operatorname{Fil}}^1 \mathcal{P}' =(k_E[u]/u^{e'p}) \cdot u^{e(p-1)} v' +
(k_E[u]/u^{e'p}) \cdot (w'+u^{p(e-x)+y}\lambda v') $,
\item $\phi_1(u^{e(p-1)} v') = cv'$, $\phi_1(w'+u^{p(e-x)+y}\lambda v')=dw'$,
\item $\widehat{g}(v') =\omega^{\alpha-e}(g)v'$ and $\widehat{g}(w') =\omega^{\beta}(g)w'$ for all $g \in {\operatorname{Gal}}(K_1/K)$.
\end{itemize}
\end{prop}
\begin{proof}
Consider the Breuil module $\mathcal{P}''$ defined by \begin{itemize}
\item $\mathcal{P}'' = (k_E[u]/u^{e'p}) \cdot v'' + (k_E[u]/u^{e'p}) \cdot w'' $,
\item ${\operatorname{Fil}}^1 \mathcal{P}'' =(k_E[u]/u^{e'p}) \cdot u^{e(p-1)} v'' +
(k_E[u]/u^{e'p}) \cdot (u^{y(p-1)}w''+u^{p(e-x)}\lambda v'') $,
\item $\phi_1(u^{e(p-1)} v'') = cv''$, $\phi_1(u^{y(p-1)}w''+u^{p(e-x)}\lambda v'')=dw''$,
\item $\widehat{g}(v'') =\omega^{k+x-e}(g)v''$ and $\widehat{g}(w'') =\omega^{l}(g)w''$ for all $g \in {\operatorname{Gal}}(K_1/K)$.
\end{itemize}
(One checks without difficulty that this \emph{is} a Breuil module. For instance the condition
on the minimum degree of terms appearing in $\lambda$ guarantees that
${\operatorname{Fil}}^1 \mathcal{P}''$ contains $u ^{e'}\mathcal{P}''$.) Note that $k+x\equiv \alpha\pmod{p-1}$,
$l+y\equiv\beta\pmod{p-1}$. We claim that $\mathcal{P}$, $\mathcal{P}'$ and $\mathcal{P}''$ all have the
same generic fibre. To see this, one can check directly that there is a morphism
$\mathcal{P}\to\mathcal{P}''$ given by \[v\mapsto u^{p(e-x)}v'',\ w\mapsto w'',\]and a
morphism $\mathcal{P}'\to\mathcal{P}''$ given by \[v'\mapsto v'',\ w'\mapsto
u^{py}w''.\] By Proposition 8.3 of \cite{MR2004122}, it is enough to
check that the kernels of these maps do not contain any free
$k_E[u]/(u^{e'p})$-submodules, which is an immediate consequence of
the inequalities $p(e-x),py<e'p$.
\end{proof}
\begin{rem}
\label{rem:extension-classes}
We note for future reference that while the classes in
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ realised by $\mathcal{P}(x,y,\lambda)$ and
$\mathcal{P}'$ may not coincide, they differ at most by multiplication
by a $k_E$-scalar. To see this, observe that the maps $\mathcal{P} \to
\mathcal{P}''$ and $\mathcal{P}' \to \mathcal{P}''$ induce $k_E$-isomorphisms on the
rank one sub- and quotient Breuil modules.
\end{rem}
We review the constraints on the integers $x,y$: they must lie
between $0$ and~$e$, and if we let $k,l$ be the residues of
$\alpha-x,\beta-y \pmod{p-1}$ in the interval $[0,p-1)$ then we must
have $\{\omega^k,\omega^l\} = \{\omega^{a_1},\omega^{a_2}\}$. Call such a pair $x,y$ \emph{valid}.
Note that $l-k \equiv \beta-\alpha + x - y \pmod{p-1}$ for any valid pair.
\begin{cor}
\label{cor:comparison-of-good-models}
Let $x',y'$ be another valid pair.
Suppose that $x' + y' \le e$ and $p(x'-x)+(y -y') \ge 0$. Then $\mathcal{P}(x,y,\lambda)$ has
the same generic fibre as
$\mathcal{P}(x',y',\lambda')$, where $\lambda' = u^{p(x'-x)+(y-y')} \lambda$.
\end{cor}
\begin{proof}
The Breuil module $\mathcal{P}(x',y',\lambda')$ is well-defined: one checks
from the definition that the congruence
condition on the degrees of the nonzero terms in $\lambda'$ is
satisfied, while since $x'+y' \le e$
there is no condition on the lowest degrees appearing in $\lambda'$.
Now the result is immediate from Proposition~\ref{prop: comparing extensions},
since $u^{p(e-x)+y}\lambda = u^{p(e-x')+y'}\lambda'$.
\end{proof}
Recall that $x+y \equiv e\pmod{p-1}$, so that $x$ and $e-y$ have the
same residue modulo $p-1$. It follows that if $x,y$ is a valid pair
of parameters, then so is $e-y,y$. Let $X$ be the largest
value of $x$ over all valid pairs $x,y$, and similarly $Y$ the smallest value of $y$;
then $Y=e-X$, since if we had $Y > e-X$ then $e-X$ would be a
smaller possible value for $y$, contradicting the choice of $Y$.
\begin{cor}
\label{cor:generic-fibres-all-occur-extremally}
The module $\mathcal{P}(x,y,\lambda)$ has the same generic fibre as
$\mathcal{P}(X,Y,\mu)$ where $\mu \in k_E[u]/u^{e'p}$ has all nonzero terms of degree congruent to $\beta-\alpha+X-Y$ modulo $p-1$, and has all terms
of degree less than $X(p-1)$, unless $\chibar_1=\chibar_2$,
in which case it may additionally have a term of degree
$pX-Y$.
\end{cor}
\begin{proof}
Since $X+Y=e$ and $p(X-x)+(y-Y) \ge 0$ from the choice of $X,Y$, the
previous Corollary shows that $\mathcal{P}(x,y,\lambda)$ has the same generic
fibre as some $\mathcal{P}(X,Y,\lambda')$; by Proposition~\ref{prop:
possible extensions of Breuil modules} this has the same generic
fibre as $\mathcal{P}(X,Y,\mu)$ for $\mu$ as in the statement. (Note that
if $\chibar_1=\chibar_2$ then automatically $X \ge Y$, because in this
case if $x,y$ is a valid pair then so is $y,x$.)
\end{proof}
\begin{prop}
\label{prop:computation of the dimension of Lflat}Let $X$ be as
above, i.e., $X$ is the maximal integer
such
that
\begin{itemize}
\item $0\le X\le e$, and
\item either $\chibar_1|_{I_K}=\omega^{a_1+X}$ or
$\chibar_1|_{I_K}=\omega^{a_2+X}$.
\end{itemize}
Then $L_{\operatorname{flat}}$ is an ${\overline{\F}_p}$-vector space of dimension at most
$X$, unless $\chibar_1=\chibar_2$, in which case it has
dimension at most $X+1$.
\end{prop}
\begin{proof}
Let $L_{\mathrm{flat},k_E} \subset L_{\operatorname{flat}}$ consist of the classes $\eta$ such that the containment $\eta \in L_{\operatorname{flat}}$ is
witnessed by a $k_E$-vector space scheme with generic fibre descent
data. By
Corollary~\ref{cor:generic-fibres-all-occur-extremally} and Remark~\ref{rem:extension-classes} these are exactly
the classes arising from the Breuil modules $\mathcal{P}(X,Y,\mu)$ with
$k_E$-coefficients as in
Corollary~\ref{cor:generic-fibres-all-occur-extremally}. These
classes form a $k_E$-vector space (since they are \emph{all} the
extension classes arising from extensions of $\mathcal{N}(Y)$ by $\mathcal{M}(X)$),
and by counting the (finite) number of possibilities for $\mu$ we see
that $\dim_{k_E} L_{\mathrm{flat},k_E}$ is at most $X$ (resp.
$X+1$ when $\chibar_1=\chibar_2$).
Since $L_{\mathrm{flat},k_E} \subset L_{\mathrm{flat},k'_E}$ if
$k_E \subset k'_E$ it follows easily that $L_{\operatorname{flat}} = \cup_{k_E}
L_{\mathrm{flat},k_E}$ is an ${\overline{\F}_p}$-vector space of dimension at
most $X$ (resp. $X+1$).
\end{proof}
We can now prove our main local result, the promised relation between $L_{\operatorname{flat}}$ and
$L_{\operatorname{crys}}$. \begin{thm}
\label{thm: crystalline equals flat}Provided that either $a_1-a_2\ne
p-1$ or $\chibar_1\chibar_2^{-1}\ne\epsilonbar$, we
have $L_{\operatorname{flat}}=L_{\operatorname{crys}}$.
\end{thm}
\begin{proof}By Theorem \ref{thm: crystalline extension implies flat},
we know that $L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$, so by Proposition~\ref{prop:computation of the dimension of Lflat} it suffices to show that
$L_{\operatorname{crys}}$ contains an ${\overline{\F}_p}$-subspace of dimension
$X$ (respectively $X+1$ if $\chibar_1 = \chibar_2$). Since $L_{\operatorname{crys}}$ is the union of the spaces
$L_{\chi_1,\chi_2}$, it suffices to show that one of these spaces
has the required dimension. Let $X$ be as in the statement of
Proposition \ref{prop:computation of the dimension of Lflat}, so
that $X$ is maximal in $[0,e]$ with the property that either $\chibar_1|_{I_K}=\omega^{a_1+X}$ or
$\chibar_1|_{I_K}=\omega^{a_2+X}$. Note that by the assumption
that there is a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix}
\omega^\delta \prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}} \end{pmatrix},\]we see that
if $X=0$ then $\chibar_1|_{I_K}=\omega^{a_2}$ (and $J$ must be empty).
If $\chibar_1|_{I_K}=\omega^{a_2+X}$ then we take $J$ to be empty
and we take $\delta=X$; otherwise $X > 0$ and $\chibar_1|_{I_K} =
\omega^{a_1+X}$, and we can take $J^c$ to be empty and
$\delta=X-1$. In either case, we may define characters $\chi_1$ and
$\chi_2$ as in Section \ref{subsec: H^1_f}, and we see from Lemma
\ref{lem: dimension of H^1_f spaces} that
$\dim_{{\overline{\F}_p}}L_{\chi_1,\chi_2}=X$ unless $\chibar_1=\chibar_2$, in
which case it is $X+1$. The result follows.\end{proof}
As a consequence of this result, we can also address the question of
the relationship between the different spaces $L_{\chi_1,\chi_2}$ for
a fixed Serre weight $a\in W^?(\rhobar)$. If $e$ is large, then
these spaces do not necessarily have the same dimension, so they
cannot always be equal. However, it is usually the case that the
spaces of maximal dimension coincide, as we can now see.
\begin{cor}
\label{cor: independence of lift for H^1_f}If either $a_1-a_2\ne
p-1$ or $\chibar_1\chibar_2^{-1}\ne\epsilonbar$, then
the spaces $L_{\chi_1,\chi_2}$ of maximal dimension are all equal.
\end{cor}
\begin{proof}
In this case $\dim_{{\overline{\F}_p}} L_{\chi_1,\chi_2}=\dim_{{\overline{\F}_p}}L_{\operatorname{crys}}$
by the proof of Theorem \ref{thm: crystalline equals flat}, so we
must have $L_{\chi_1,\chi_2}=L_{\operatorname{crys}}$.
\end{proof}
Finally, we determine $L_{\operatorname{crys}}$ in the one remaining case, where the
spaces $L_{\chi_1,\chi_2}$ of maximal dimension no longer coincide.
\begin{prop}
\label{prop: Lcrys in the exceptional case}Suppose that
$a_1-a_2=p-1$ and that $\chibar_1\chibar_2^{-1}=\epsilonbar$. Then $L_{\operatorname{crys}}=H^1(G_K,\epsilonbar)$.
\end{prop}
\begin{proof}We prove this in a similar fashion to the proof of Lemma
6.1.6 of \cite{blggord}. By twisting we can reduce to the case
$(a_1,a_2)=(p-1,0)$. Let $L$ be a given line in
$H^1(G_K,\epsilonbar)$, and choose an unramified character $\psi$
with trivial reduction. Let
$\chi$ be some fixed crystalline character of $G_K$ with Hodge-Tate weights
$p,1,\dots,1$ such that $\chibar=\epsilonbar$. Let $E/\Qp$ be a finite extension with ring
of integers $\mathcal{O}$, uniformiser $\varpi$ and residue field ${\mathbb{F}}$, such
that $\psi$ and $\chi$ are defined over $E$ and $L$ is defined over ${\mathbb{F}}$. Since any extension of $1$ by $\chi\psi$ is
automatically crystalline, it suffices to show that we can choose
$\psi$ so that $L$ lifts to $H^1(G_K,\mathcal{O}(\psi\chi))$.
Let $H$ be the
hyperplane in $H^1(G_K,\mathbb{F})$ which annihilates $L$ under the Tate
pairing. Let $\delta_1 : H^1(G_K,\mathbb F(\overline{\epsilon})) \to
H^2(G_K,\mathcal{O}(\psi\chi))$ be the map coming from
the exact sequence $0\to \mathcal{O}(\psi\chi)\stackrel{\varpi}{\to}\mathcal
O(\psi\chi)\to \mathbb F(\overline{\epsilon})\to 0$ of
$G_K$-modules. We need to show that $\delta_1(L)=0$ for some choice
of $\psi$.
Let $\delta_0$ be the map
$H^0(G_K,(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon)) \to
H^{1}(G_K,\mathbb{F})$ coming from the exact sequence $0 \to \mathbb{F} \to
(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon) \stackrel{\varpi}{\to}
(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon) \to 0$ of $G_K$-modules. By
Tate local duality, the condition that $L$ vanishes under the map
$\delta_1$ is equivalent to the condition that the image of the map
$\delta_0$ is contained in $H$. Let $n \geq 1$ be the largest
integer with the property that $\psi^{-1}\chi^{-1}\epsilon \equiv 1
\pmod{\varpi^n}$. Then we can write $\psi^{-1}\chi^{-1}\epsilon(x)=
1+\varpi^n \alpha(x)$ for some function $\alpha : G_K \to
\mathcal{O}$. Let $\overline{\alpha}$ denote $\alpha \pmod{\varpi} : G_K
\to \mathbb{F}$. Then $\overline{\alpha}$ is additive and the choice of
$n$ ensures that it is non-trivial. It is straightforward to check
that the image of the map $\delta_0$ is the line spanned by
$\overline{\alpha}$. If $\overline{\alpha}$ is in $H$, we are
done. Suppose this is not the case. We break the rest of the proof
into two cases.
\medskip{\sl Case 1: $L$ is
tr\`es ramifi\'e:} To begin, we observe that it is
possible to have chosen
$\psi$ so that
$\overline{\alpha}$ is ramified. To see this, let $m$ be the largest integer with the property that
$(\psi^{-1} \chi^{-1} \epsilon)|_{I_K} \equiv 1 \pmod{\varpi^m}$. Note that $m$ exists since the
Hodge-Tate weights of $\psi^{-1}\chi^{-1}\epsilon$ are not all $0$.
If $m = n$ then we are done, so assume instead that $m >n$. Let $g\in
G_K$ be a lift of ${\operatorname{Frob}}_K$. We claim that
$\psi^{-1}\chi^{-1}\epsilon(g)= 1 +\varpi^{n} \alpha(g)$ such that
$\alpha (g) \not \equiv 0 \pmod{\varpi}$. In fact, if $\alpha
(g)\equiv 0 \pmod{\varpi}$ then $\psi^{-1}\chi^{-1}\epsilon(g) \in 1
+ \varpi^{n+1} \mathcal{O}$. Since $m > n$ we see that
$\psi^{-1}\chi^{-1}\epsilon(G_K) \subset 1 + \varpi^{n+1} \mathcal{O}$
and this contradicts the selection of $n$. Now define an unramified
character $\psi'$ with trivial reduction by setting $\psi' (g) =
1 - \varpi^n \alpha (g)$. After replacing $\psi$ by $\psi \psi'$ we
see that $n$ has increased but $m$ has not changed. After finitely
many iterations of this procedure we have $m=n$, completing the
claim.
Suppose, then, that $\overline{\alpha}$ is ramified. The fact that $L$ is tr\`es
ramifi\'e implies that $H$ does not contain the unramified line in
$H^1(G_K,\mathbb{F})$. Thus there is a unique $\overline{x} \in
\mathbb{F}^\times$ such that $\overline{\alpha}+u_{\overline{x}} \in H$
where $u_{\overline{x}}: G_K\to \mathbb{F}$ is the unramified
homomorphism sending ${\operatorname{Frob}}_K$ to $\overline{x}$. Replacing $\psi$ with $\psi$ times
the unramified character sending ${\operatorname{Frob}}_K$ to $(1+\varpi^n x)^{-1}$,
for $x$ a lift of $\overline{x}$, we are done.
\medskip{\sl Case 2: $L$ is peu ramifi\'e:} Making a ramified
extension of $\mathcal{O}$ if necessary, we can and do assume that $n\geq
2$. The fact that $L$ is peu ramifi\'e implies that $H$ contains the
unramified line. It follows that if we replace $\psi$ with $\psi$
times the unramified character sending ${\operatorname{Frob}}_K$ to $1+\varpi$, then
we are done (as the new $\overline{\alpha}$ will be unramified).
\end{proof}
\section{Global consequences}\label{sec: global
consequences}\subsection{}We now deduce our main global results,
using the main theorems of \cite{blggU2} together with our local
results to precisely determine the set of Serre weights for a global
representation in the totally ramified case.
\begin{prop}
\label{prop: semisimple elimination if totally ramified}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let
$a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre
weight such that $\bar{r}$ is modular of weight $a$. Let $w$ be a
place of $F$ such that $F_w/\Qp$ is totally ramified of degree $e$. Write
$a_w=(a_1,a_2)$, and write $\omega$ for the unique fundamental
character of $I_{F_w}$ of niveau one.
Then $a_w\in W^?(\bar{r}|_{G_{F_w}})$.
\end{prop}
\begin{proof}
Suppose first that $\bar{r}|_{G_{F_w}}$ is irreducible. Then the
proof of Lemma 5.5 of \cite{geesavitttotallyramified} goes through
unchanged, and gives the required result. So we may suppose that
$\bar{r}|_{G_{F_w}}$ is reducible. In this case the proof of Lemma 5.4 of
\cite{geesavitttotallyramified} goes through unchanged, and shows
that we have \[\bar{r}|_{G_{F_w}}\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}\]where
$(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$, and either
$\chibar_1|_{I_K}=\omega^{a_1+z}$ or
$\chibar_1|_{I_K}=\omega^{a_2+e-z}$ for some $1\le z\le e$, so we
are in the situation of Section \ref{subsec: H^1_f}. Consider the
extension class in $H^1(G_{F_w},\chibar_1\chibar_2^{-1})$
corresponding to $\bar{r}|_{G_{F_w}}$. By Proposition \ref{prop:
modular of some weight implies potentially BT lifts exist}, either
$a_1-a_2=p-1$ and $\chibar_1\chibar_2^{-1}=\epsilonbar$, or this extension class is in $L_{\operatorname{flat}}$. In either case,
by Theorem \ref{thm: crystalline equals flat} and Proposition
\ref{prop: Lcrys in the exceptional case}, the extension class is in
$L_{\operatorname{crys}}$, so that $a_w\in W^?(\bar{r}|_{G_{F_w}})$, as required.
\end{proof}
Combining this with Theorem 5.1.3 of \cite{blggU2}, we obtain our
final result.
\begin{thm}
\label{thm: the main result, modular if and only if predicted}Let
$F$ be an imaginary CM field with maximal totally real subfield
$F^+$, and suppose that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
that $\zeta_p\notin F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that
$p>2$, and that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible
modular representation with split ramification such that
$\bar{r}(G_{F(\zeta_p)})$ is adequate. Assume that for each place $w|p$
of $F$, $F_w/\Qp$ is totally ramified.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre weight. Then
$a_w\in W^?(\bar{r}|_{G_{F_w}})$ for all $w$ if and only if $\bar{r}$ is modular of
weight $a$.
\end{thm}
\bibliographystyle{amsalpha}
\section{Overview}
ALICE (A Large Ion Collider Experiment)\cite{ALICEref} at the LHC\cite{LHCref} is a general purpose experiment designed to study the phase transition between ordinary nuclear matter and the quark-gluon plasma, which occurs in high energy nucleus-nucleus collisions.
To enhance its capabilities for measuring jet properties, the ALICE detector was upgraded in 2010 with a large-acceptance ($\Delta \eta \times \Delta \phi = 1.4 \times 1.86$ (107\textdegree)) ElectroMagnetic Calorimeter (EMCal)\cite{EMCALTDR}, providing a measurement of the neutral fraction of the jet energy and an unbiased jet trigger, thanks to a centrality-dependent energy threshold.
The sampling calorimeter consists of 12288 towers of layered Pb-scintillator arranged in modules of $2 \times 2$ towers, with each tower containing 77 layers for a total of 20.1 radiation lengths.
A tower is read out with an avalanche photodiode (APD) which collects, via a bundle of optical fibers, the light created by particle interactions.
A charge sensitive preamplifier (CSP) is used to instrument each APD.
A supermodule (SM) is made of 24 strips of 12 modules (1152 towers). In 2010, the EMCal comprised four SM, in 2011 there were ten SM, and in 2012 the full EMCal consists of ten complete SM plus two thirds of a SM.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{./figs/trigger_overview}
\caption{Flat view of the EMCal detector with its surrounding trigger electronics.}
\label{elec_overview}
\end{center}
\end{figure}
A schematic view of the EMCal detector with its front-end and trigger electronics is sketched in fig.~\ref{elec_overview}.
Each SM is divided into three regions, and each region is instrumented by 12 FEE cards, so each SM has 36 FEE cards\cite{FEEpaper}.
Each FEE takes 32 analog inputs and generates eight fastOR signals. These are fast shaped (100\,ns) analog sums over one module, i.e. four tower signals.
The individual tower signals are used for the energy measurement, while the 3072 module analog sums (fastOR) are used to build the trigger.
The Trigger Region Units (TRU) \cite{TRUpaper} are used to digitize, at the machine bunch crossing rate (40.08\,MHz), the fastOR signals provided by the FEE and to compute and generate the local Level 0 (L0) triggers. Finally, the Summary Trigger Unit (STU) computes the global L0 trigger by ORing the local L0 triggers.
The STU also collects and aggregates the TRU data used to compute the two Level 1 (L1) triggers: the photon trigger and the jet trigger.
The L1 thresholds are computed event-by-event using the ALICE beam-beam counter detector\cite{V0paper} (V0) according to a 2\textsuperscript{nd} order fit function $A \cdot V0_{count}^2 + B \cdot V0_{count}+C$, where $V0_{count}$ is the total charge information provided by the V0 and $A$, $B$, $C$ are the threshold parameters.
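As an illustration only, the evaluation of this centrality dependent threshold can be sketched as follows; the parameter values are arbitrary placeholders and not the configuration actually loaded through DCS.
\begin{verbatim}
def l1_threshold(v0_count, A, B, C):
    """Centrality dependent L1 energy threshold: a 2nd order
    polynomial in the total V0 charge (ADC counts)."""
    return A * v0_count**2 + B * v0_count + C

# Illustrative parameter values only (the real A, B, C are run
# configuration parameters uploaded through DCS).
threshold = l1_threshold(v0_count=5000, A=1.0e-7, B=1.0e-3, C=2.0)
\end{verbatim}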
The communication between the TRUs and the STU is performed through 12\,m point-to-point CAT7 Ethernet cables.
Additionally, the STU is included in the EMCal readout via a Detector Data Link\cite{DDLpaper} (DDL) to the ALICE DAQ\cite{ALICEDAQ}. The readout, which is primarily used to return the triggering indexes and thresholds used on an event-by-event basis, can also be used to provide the primitive triggering data in order to recheck the on-line trigger quality off-line.
Additionally, an Ethernet interface to the Detector Control System (DCS) is used for fast FPGA firmware upload and run configuration (threshold parameters, trigger delays, etc.).
\section{Trigger algorithms}
\subsection{TRU L0 algorithm}
After digitization, each fastOR is digitally integrated over a sliding time window of four samples. Then, the results of these operations are continuously fed to $2 \times 2$ spatial sum processors that compute the energy deposit in patches of $4\times4$ towers (or $2 \times 2$ fastOR) for the region managed. Each patch energy is constantly compared to a minimum bias threshold; whenever it is crossed and the maximum of the peak has been found, a local L0 trigger is fired. In preparation for the L1 algorithm, the time integrated sums are also stored in a circular buffer for later retrieval and transmission to STU.
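The following sketch illustrates the two operations just described (time integration and $2\times2$ spatial sums) for one TRU region; the array shapes and names are assumptions made for this example, and the threshold comparison and peak finding are only indicated in a comment, so this is not the firmware implementation.
\begin{verbatim}
import numpy as np

def l0_patch_sums(fastor_samples, window=4):
    """Sketch of the L0 processing: integrate each fastOR over a
    sliding window of `window` time samples, then form 2x2 spatial
    sums of fastORs (i.e. 4x4-tower patches) for one TRU region.

    fastor_samples: array of shape (n_time, n_eta, n_phi)."""
    kernel = np.ones(window)
    # sliding time integration (length n_time - window + 1 per fastOR)
    tsum = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="valid"), 0, fastor_samples)
    # 2x2 spatial sums: one patch per adjacent 2x2 group of fastORs
    patches = (tsum[:, :-1, :-1] + tsum[:, 1:, :-1]
               + tsum[:, :-1, 1:] + tsum[:, 1:, 1:])
    return patches

# A local L0 would then be issued when a patch sum crosses the minimum
# bias threshold and the maximum of the peak has been found (omitted).
\end{verbatim}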
Note that the Level 0 trigger suffers from some spatial trigger inefficiencies due to the fact that TRUs cannot compute the spatial sums for patches sitting on region boundaries.
\subsection{Global EMCal triggers computed in STU}
The STU is the access point to the Central Trigger Processor (CTP)\cite{CTPpaper} for EMCal.
Consequently, it is used to provide the global L0, which is an OR of the 32 L0s locally calculated by the TRUs, and two L1 triggers: the L1-gamma trigger and the L1-jet trigger.
The L1-gamma trigger uses the same patch size as L0, but without the inefficiencies displayed by the local L0 (i.e. $2\times2$ fastOR patches spanning several TRU regions can be computed).
The L1-jet trigger is built by summing energy over a sliding window of $4\times4$ subregions, where a subregion is defined as a $4 \times 4$ fastOR (or $8 \times 8$ towers) area, see fig.~\ref{SM_map}.
\begin{figure}[b]
\begin{center}
\includegraphics[angle=0,width=0.8\textwidth]{./figs/SM_map3}
\caption{Cartoon of different possible L0, L1-gamma and L1-jet trigger patches.}
\label{SM_map}
\end{center}
\end{figure}
With the given EMCal geometry and due to the various trigger patches sizes, there are a total of 2208 L0, 2961 L1-gamma and 117 L1-jet trigger patches that can be fired.
\subsection{L1 trigger processing}
A block diagram of the L1 trigger processing is shown in fig.~\ref{L1_trig_proc}.
The L1 processing does not run continuously (i.e. it is not pipelined); instead, it is initiated upon reception of the confirmed L0 provided by the CTP via the TTC\cite{TTCpaper} links (to the TRUs and the STU).
At this moment, 1.2\,\textmu s after the interaction, the TRUs send the appropriate time integrated data from their circular buffers to the STU via the custom serial links.
The serialization, propagation delay and deserialization take 3075\,ns.
Meanwhile, the V0 detector transfers its charge information to the STU via a direct optical link. The thresholds for photon and jet patches are immediately processed and made available before the actual trigger processing starts.
Once the TRU data reception is complete, the L1-photon trigger processing and the subregion energy calculation are done in parallel for each TRU region.
Then, when this processing is over, the L1-jet trigger processing starts, using the previously generated subregion accumulation. Finally, both triggers are adequately delayed to accommodate the L1-trigger latency expected by the CTP.
More technical details about the trigger implementation may be found in \cite{STU_twepp2010}.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.9\textwidth]{./figs/L1_trig_proc}
\caption{Block diagram of the L1 trigger processing annotated with the time required to go through each step. }
\label{L1_trig_proc}
\end{center}
\end{figure}
\section{Custom serial protocol}
\subsection{Original solution}
The main motivation for the development of this custom serial link was the desire to reuse the TRU design made for the \textbf{PHO}ton \textbf{S}pectrometer (PHOS) which was equipped with a spare RJ45 connector directly linked to its FPGA.
The original trigger timing constraints drove the design in the same direction.
This solution minimizes transmission latency and meets some functional requirements, allowing the STU to be used as a low jitter reference clock distributor for TRUs.
Additionally, the fact that the local L0s had to be forwarded to STU for feeding its global OR required a custom solution.
Thus, the choice was made to use a four-pair LVDS link transported over CAT7 Ethernet cables because they have the appropriate impedance and feature low signal attenuation and low skew between pairs (see fig.~\ref{original_serial_link}).
Pair usage is as follows: one pair is dedicated for the LHC reference clock transfer to the TRU, another is used by the TRUs to forward their local L0 candidates and the two remaining are used for synchronous serial data transfer without any encoding.
Each data pair runs at 400\,Mb/s and the clock used for the transfer is the LHC clock multiplied by 10.
With this very light protocol, the latency is only the sum of the cable delay and bit transmission time.
Each TRU simultaneously sends its 96 values of 12 bit coded time integrated fastOR data to the STU at 800\,Mb/s; the serialization latency is thus 1.44\,\textmu s. The communication protocol was simple: outside of the data payload transmission, a known inter-packet word was continuously transferred. Then, at transmission time, right after the confirmed L0 reception, a header packet was sent, followed by the time-integrated data.
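This latency follows directly from the payload size and the aggregate rate of the two data pairs:
\begin{equation*}
96 \times 12\ \mathrm{bits} = 1152\ \mathrm{bits},\qquad
\frac{1152\ \mathrm{bits}}{800\ \mathrm{Mb/s}} = 1.44\,\mu\mathrm{s}.
\end{equation*}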
The link synchronization is done before each start of run by a Finite State Machine (FSM) implemented in the FPGA.
This is done in two steps. In the first step, the data phase alignment takes place; it relies on the fine-granularity delay feature available for each data path in the Virtex 5 FPGA (up to 64 steps of 78\,ps).
All delay values are scanned in order to find the zone where data reception is stable, and then the central value is applied.
In the second step, character framing is performed to associate each incoming bit with the correct deserialized word.
This whole process is performed with the inter-packet data word used as the synchronization/training pattern.
For link quality monitoring, error counters were implemented.
They were incremented for each bad inter-packet word received outside of the expected payload transmission.
The counters were checked every minute via DCS, and an alarm was raised in case of transmission errors.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.85\textwidth]{./figs/original_serial_link}
\caption{Sketch of the original custom serial link.}
\label{original_serial_link}
\end{center}
\end{figure}
\subsection{Problem encountered and diagnosis tool}
The original custom serial link solution was successfully validated in the laboratory and also in 2010 with four installed SM by regularly performing TRU/STU data correlation checks. Unfortunately, in 2011, when the EMCal was fully installed, several random TRU-STU links were displaying communication errors during some runs, while for all links the start of run synchronization went through correctly.
As expected from the missing links, off-line validation showed missing L1-photon triggers for the corresponding regions.
But, from time to time, the on-line L1 jet trigger rate (relative to accepted L0 triggers) jumped from a nominal value of 2\% to 100\% (no rejection).
In order to understand where the problem lay, a frame reception monitor was inserted in the deployed firmware.
It is able to check, for each TRU-STU link and for each event, the good or bad reception of the packet header.
The resulting reception bit mask is inserted in the data stream along with the corresponding event.
A run diagnosis example is shown in fig.~\ref{frame_errors_vs_rate} for run 163532. It can be seen that while the error counter monitoring tool indicates that TRUs 1, 21 and 30 are communicating badly with the STU, the frame reception monitor shows that in fact TRU~1 is not communicating at all with the STU and that TRU~21 is transmitting data most of the time. Remarkably, TRU~30 has only three successful data transfers toward the STU, and the first one is actually causing the trigger rate increase.
This observation not only confirmed the suspected communication problem, but also revealed that there was a flaw in the L1-jet trigger algorithm implementation.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.95\textwidth]{./figs/frame_errors_vs_rate}
\caption{Communication failure and trigger rate diagnosis of run 163532.
The top plot shows the L1 trigger rate relative to accepted L0 triggers (L1-jet in red and L1-photon in blue).
The bottom left plot shows the error count recorded every minute for each TRU; it was obtained by dividing the error count (maximum of 65535) by 100000 and adding the corresponding TRU number. The bottom right plot shows the frame bit received for each accepted trigger and for each TRU; it was obtained by dividing the frame error (maximum of 1) by 2 and adding the corresponding TRU number.
The correlation between the first successful communication of TRU-STU link 30 and the L1-jet trigger rate increase is visible.}
\label{frame_errors_vs_rate}
\end{center}
\end{figure}
For fixing the communication problem, the first cure attempt was to decrease the transmission rate to $2 \times 240$\,Mb/s, thus relaxing the serial link timing constraints. This was possible in 2011, thanks to the increased timing budget for providing the candidate L1 trigger at the CTP input (from 6.2 to 7.3\,\textmu s).
Unfortunately, this did not solve the problem.
By recording data at a fixed latency after the confirmed L0 reception, instead of recording the payload after a packet header reception, the issue was traced to the serialization/deserialization.
While the synchronization seemed good, sometimes a cycle delay between the LSB part and MSB part of the transmitted data word appeared. Obviously, this problem could not be observed at the synchronization time with a single word training pattern.
Therefore, the second, and successful, cure applied was to use a three word training pattern, in conjunction with the possibility to delay the MSB or the LSB part of a word during the synchronization phase.
\section{Correcting fake and missing triggers: from simulation to on-line debugging}
From the early development stage of the hardware and firmware, gateway tools were developed to exchange data between ``physics'' and ``firmware'' simulations, as shown schematically in fig.~\ref{vhdl_aliroot}.
This allowed for the validation of the core STU algorithms (jet and photon) before deployment and, as a side benefit, for the quick adaptation of gateway tools --- such as the trigger index decoding routine --- to the off-line software.
While these tools were useful in the early stage of development, they were limited.
For instance, the firmware simulation is slow. Moreover, it is not easy to validate all possible external effects which could cause false and/or missing triggers.
Examples of such possible effects include communication breakdown, clock jitter, and other, not necessarily predictable, issues.
Consequently, an ``event player'' feature was added in the STU firmware.
As shown in fig.~\ref{L1_trig_proc_pattern}, this on-line tool allows the data used by the trigger processors to be selected between the TRU received data and DCS preloaded data.
The ``event player'' can play up to eight different patterns.
It offers the possibility of validating the entire L1 algorithms in-situ and of checking the compliance with the ALICE DAQ after each data packet modification.
Additionally, it may be used to accumulate statistics to check for possible timing issues or other effects, such as radiation effects.
Thanks to this debugging tool, the L1-jet trigger rate issue was pinpointed to the missing ``sub-region'' buffer clearing when serial communication links failed.
Indeed, for flaky links, the ``sub-region'' energies computed from the last correctly received data were constantly reused by the L1-jet processor.
Hence, when the last received information contained a high energy event, the subsequent confirmed L0 events were mistakenly accepted at L1.
After correcting this problem, both L1 triggers performed as expected, as detailed in the next section.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.6\textwidth]{./figs/vhdl_aliroot}
\caption{Overview of the ``physics'' and ``firmware'' co-simulation.}
\label{vhdl_aliroot}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.8\textwidth]{./figs/L1_trig_proc_pattern}
\caption{Modified STU firmware featuring the ``event-player'', which allows the data source used by the trigger processor to be selected between the TRU received data and DCS preloaded data. The buffer causing fake L1-jet triggers when not reset between events in the presence of flaky communication is colored in orange.}
\label{L1_trig_proc_pattern}
\end{center}
\end{figure}
\section{Trigger performance}
As an illustration of the trigger performance during the 2011 lead beam period (Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76\,TeV), the event selection as a function of the centrality of the collision for Minimum Bias (MB) and EMCal L1-jet trigger classes is shown in fig.~\ref{plot_efficacite}.
The upper plot shows the minimum bias\footnote{The minimum bias trigger class is composed of the coincidence of the V0 detector L0 trigger signal and the ZDC L1 trigger signal (Zero Degree Calorimeter).} and L1-jet samples\footnote{The L1-jet sample is a subsample of the MB, obtained with the coincidence with EMCal L1 jet trigger signal.} for a linear energy threshold with two threshold parameter sets.
The lower plot shows the MB to L1-jet ratios for the different threshold parameters.
The set of parameters giving the magenta distribution rejects too many central events, while the set of parameters giving the red one is more uniform for V0A + V0C signals above 5000 ADC.
The L1 trigger could provide a uniform background rejection, in a large centrality region, while disfavoring the most peripheral events.
This behavior, inherent to the order of the threshold computation, will be improved by using a second order centrality dependent energy threshold.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{./figs_ext/plot_efficacite}
\caption{Plot of the event selection for Pb-Pb in 2011. The event yields versus the centrality for minimum bias and EMCal L1-jet trigger classes are shown.
The lower plot shows the MB to L1-jet ratios for the different threshold parameters. The horizontal scale is the total amount of V0 charge expressed in ADC counts.}
\label{plot_efficacite}
\end{center}
\end{figure}
As shown on the left of fig.~\ref{spatial_uniformity}, a spatial non-uniformity of the jet triggers was observed.
While the APD inter-calibration was done in the laboratory using cosmics, an in-situ calibration was performed using $\pi^0$ data at the end of 2011.
The calibration constants obtained roughly reproduce the trigger non-uniformity.
This modified APD inter-calibration correction was used in 2012; further detailed analyses are required to assess the benefit of the correction.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.48\textwidth]{./figs_ext/L1PatchPosition}
\includegraphics[angle=0,width=0.45\textwidth]{./figs_ext/coefCalib}
\caption{Left plot shows the occurrence of a jet trigger patch, a factor of six can be observed between the most active and the least active patch. Right plot shows in-situ calibration constants using $\pi^0$ for each jet trigger patch.}
\label{spatial_uniformity}
\end{center}
\end{figure}
\section{Perspectives}
Thanks to STU flexibility (available FPGA resources and spare trigger outputs), a 2\textsuperscript{nd} set of threshold parameters has been implemented to improve the L1 data sample overlap with the MB data sample. The resource usage increased from 69\% to 92\%.
In mid-2013, ALICE is foreseen to be upgraded with the Di-jet CALorimeter (DCAL), which will increase the coverage by $\Delta \eta \times \Delta \phi = 1.4 \times 100$\textdegree (PHOS included).
It is composed of six shorter SM that will be installed on either side of the PHOS. For this operation one or two STUs will be used, depending on whether PHOS will be included in the DCAL trigger or not (one STU for DCAL, one STU for PHOS).
\section{Summary}
The STU has been installed for two years, and all the system interfaces have been validated.
The custom serial protocol, which has been modified, has been demonstrated to operate in realistic conditions with intensive readout.
The fast FPGA remote configuration proved to be an asset for regular upgrades and problem solving. Moreover, it has been noted that, from early on, it is an advantage to implement monitoring tools and to develop diagnosis tools. For instance, the ``event player'' proved to be a good tool for in-situ validation of the trigger algorithms without beam.
\section{Introduction\label{sec-introduction}}
All the assessment reports (AR) published by the Intergovernmental Panel on Climate Change (IPCC) show that there is overwhelming scientific evidence of the existence of global warming (GW). It is also well known that climate change (CC) is a non-uniform phenomenon. What is not so clear is the degree of heterogeneity across all the regions in our planet. In fact, an important part of the Sixth Assessment Report (AR6) published by the IPCC in 2021-2022 is dedicated to this issue: climate (warming) heterogeneity. This is reflected in the chapters studying regional climate change. Our paper introduces a new quantitative methodology that builds on that described in Gadea and Gonzalo 2020 (GG2020) to characterize, measure and test the existence of such climate change heterogeneity (CCH). This is done in three steps. First, we introduce a warming typology (\textit{W1}, \textit{W2} and \textit{W3}) based on the trending behavior of the quantiles of the temperature distribution of a given geographical location. Second, we define in a testable format the concepts of warming acceleration and warming amplification. These concepts help to characterize (more ordinally than cardinally) the warming process of different regions. And third, we propose the new concept of warming dominance (WD) to establish when region \textit{A} suffers a worse warming process than region \textit{B}.
We have chosen Spain as a benchmark geographical location because, as the AR6 report states “. . . Spain is fully included in the Mediterranean (MED) Reference Region, but is one of the most climatically diverse countries in the world. . . ”. This fact opens up the possibility of studying warming heterogeneity (WH) from Spain to the Globe (outer heterogeneity, OWH) and also from Spain to some of its regions represented by Madrid and Barcelona (inner heterogeneity, IWH).
The three steps rely on the results reported in GG2020, where the different distributional characteristics (moments, quantiles, inter quantile range, etc.) of the temperature distribution of a given geographical location are converted into time series objects. By doing this, we can easily implement and test all the concepts involved in the three steps.
A summary of the results is as follows. Spain and the Globe present a clear warming process; but it evolves differently. Spain goes from a warming process where lower and upper temperatures share the same trend behavior (\textit{IQR} is maintained constant over time, warming type \textit{W1}) to one characterized by a larger increase in the upper temperatures (\textit{IQR} increases over time, warming type \textit{W3}). In contrast, the Globe as a whole maintains a stable warming type process characterized by lower temperatures that increase more than the upper ones (\textit{IQR} decreases in time).\footnote{Similar results for Central England are found in GG2020 and for the US in Diebold and Rudebush, 2022.} In our typology, this constitutes a case of warming type \textit{W2}. Climate heterogeneity can go further. For instance, within Spain we find that Madrid is of type \textit{W3} while the warming process of Barcelona is of type \textit{W1}. This is in concordance with the Madrid climate being considered a Continental Mediterranean climate while Barcelona is more a pure Mediterranean one.
The proposed warming typology (\textit{W1}, \textit{W2} and \textit{W3}), although dynamic, is more ordinal than cardinal. In this paper, the strength of a warming process is captured in the second step by analyzing its acceleration and its amplification with respect to a central tendency measure of the temperature distribution. Acceleration and amplification contribute to the analysis of warming heterogeneity. The acceleration in the Globe is present in all the quantiles above \textit{q30} while in Spain it already becomes significant above the 10$^{th}$ quantile. We find an asymmetric behavior of warming amplification; in Spain (in comparison with the Globe mean temperature) this is present in the upper temperatures (above the 80$^{th}$ and 90$^{th}$ quantiles) while in the Globe the opposite occurs (below the 20$^{th}$ and 30$^{th}$ quantiles). Within Spain, Madrid and Barcelona also behave differently in terms of acceleration and amplification. Overall, warming in Spain dominates that of the Globe in all the quantiles except for the lower quantile \textit{q05}, and between Madrid and Barcelona there is a partial WD. Madrid WD Barcelona in the upper part of the distribution and Barcelona WD Madrid in the lower one.
The existence of a clear heterogeneous warming process opens the door to the need for new non-uniform causal (effect) research, one that goes beyond the standard causality-in-mean analysis (see Tol, 2021). CCH also suggests that, in order for the
mitigation-adaptation policies to be as efficient as possible, they should be designed following a type of common factor structure: a common global component plus an idiosyncratic local element. This is in line with the results found in Brock and Xepapadeas (2017), D’Autume et al. (2016) and Peng et al. (2021). Future climate agreements should clearly take this CCH into account. An important by-product of our warming heterogeneity results is the increase that this heterogeneity can generate in the public awareness of the GW process. A possible explanation for that can be found in the behavioral economics work by Malmendier (2021), in the results of the European Social Survey analyzed in Nowakowski and Oswald (2020) or in the psychology survey by Maiella et al. (2020).
The rest of the paper is organized as follows. Section 2 describes our basic climate econometrics methodology. Section 3 presents a brief description of the temperature data from Spain and the Globe. Section 4 addresses the application of our quantitative methodology in the cross-sectional version (temperatures measured monthly by stations in an annual interval) to Spain and (versus) the Globe. It also reports the results of applying the methodology using a purely temporal dimension (local daily temperature on an annual basis) for two representative stations in Spain (Madrid and Barcelona, empirical details in the Appendix). Section 5 offers a comparison and interpretation of the results. Finally, Section 6 concludes the paper.
\section{Climate Econometrics Methodology\label{sec-method}}
In this section, we briefly summarize the novel econometric methodology introduced in GG2020 to analyze Global and Local Warming processes.
Following GG2020, Warming is defined as an increasing trend in certain characteristics of the temperature distribution. More precisely:
\begin{defn} \label{def1} \textit{(\underline{Warming})}:
\textit{ Warming is defined as the existence of an increasing trend in some of the characteristics measuring the central tendency or position (quantiles) of the temperature distribution.}
\end{defn}
An example is a deterministic trend with a polynomial function for certain values of the $\beta$ parameters $C_{t}=\beta _{0}+\beta _{1}t+\beta _{2}t^{2}+...+\beta _{k}t^{k}$. \\
In GG2020 temperature is viewed as a functional stochastic process $X=(X_{t}(\omega), t \in T)$, where $T$ is an interval in $\mathbb{R}$, defined in a probability space $(\Omega, \Im, P)$. A convenient example of an infinite-dimensional discrete-time process consists of associating $\xi=(\xi_n, n \in \mathbb{R}_{+})$ with a sequence of random variables whose values are in an appropriate function space. This may be obtained by setting
\begin{equation}
X_{t}(n)=\xi_{tN+n}, \text{ } 0\leq n \leq N, \text{ } t=0,1,2, ..., T \label{example}
\end{equation}
so $X=(X_{t}, t=0,1,2,...,T)$. If the sample paths of $\xi$ are continuous, then we have a sequence $X_{0}, X_{1}, ....$ of random variables in the space $C[0, N]$. The choice of the period or segment $t$ will depend on the situation in hand. In our case, $t$ will be the period of a year, and $N$ represents cross-sectional units or higher-frequency time series.
We may be interested in modeling the whole sequence of $\mathbf{G}$ functions, for instance the sequence of state densities ($f_{1}(\omega), f_{2}(\omega), ..., f_{T}(\omega) $ ) as in Chang et al. (2015, 2016) or only certain characteristics ($C_{t}(w)$) of these $\mathbf{G}$ functions, for instance, the state mean, the state variance, the state quantile, etc. These characteristics can be considered time series objects and, therefore, all the econometric tools already developed in the time series literature can be applied to $C_{t}(w)$. With this characteristic approach we go from $\Omega$ to $\mathbb{R}^{T}$, as in a standard stochastic process, passing through a $\mathbf{G}$ functional space:
\begin{center}
$\underset{(w)}{\Omega} \xrightarrow{X} \underset{X_{t}(w)}{\mathbf{G}} \xrightarrow{C} \underset{C_{t}(w)}{\mathbb{R}}$ \\
\end{center}
Going back to the convenient example and abusing notation, the stochastic structure can be summarized in the following array:
\begin{equation}
\begin{array}{|c|c|c|c|c|}
\hline
X_{10}(w)=\xi _{0}(w) & X_{11}(w)=\xi _{1}(w) & \cdots & X_{1N}(w)=\xi _{N}(w) & C_{1}(w) \\ \hline
X_{20}(w)=\xi _{N+1}(w) & X_{21}(w)=\xi _{N+2}(w) & \cdots & X_{2N}(w)=\xi _{2N}(w) & C_{2}(w) \\ \hline
\vdots & \vdots & \ddots & \vdots & \vdots \\ \hline
X_{T0}(w)=\xi _{(T-1)N+1}(w) & X_{T1}(w)=\xi _{(T-1)N+2}(w) & \cdots & X_{TN}(w)=\xi _{TN}(w) & C_{T}(w) \\ \hline
\end{array}
\label{eq-scheme}
\end{equation}
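As a simple illustration of this characteristic approach (and not the code used to produce the results of this paper), the following sketch segments a long temperature record into years and computes a few characteristics $C_{t}(w)$; the variable names and the synthetic data are assumptions made for the example.
\begin{verbatim}
import numpy as np

def annual_characteristics(temps, obs_per_year):
    """Segment a long temperature record into years (X_t(n) = xi_{tN+n})
    and compute, for each year t, some distributional characteristics
    C_t: mean, selected quantiles and the interquartile range."""
    n_years = len(temps) // obs_per_year
    X = np.asarray(temps[: n_years * obs_per_year], dtype=float)
    X = X.reshape(n_years, obs_per_year)
    return {
        "mean": X.mean(axis=1),
        "q05": np.quantile(X, 0.05, axis=1),
        "q50": np.quantile(X, 0.50, axis=1),
        "q95": np.quantile(X, 0.95, axis=1),
        "iqr": np.quantile(X, 0.75, axis=1) - np.quantile(X, 0.25, axis=1),
    }

# Synthetic example: 70 "years" of 360 station-month observations each,
# with a small warming drift added across the years.
rng = np.random.default_rng(0)
temps = (rng.normal(15.0, 8.0, size=70 * 360)
         + 0.02 * np.repeat(np.arange(70), 360))
C = annual_characteristics(temps, obs_per_year=360)
\end{verbatim}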
The objective of this section is to provide a simple test to detect the existence of a general unknown trend component in a given characteristic $C_t$ of the temperature process $X_t$. To do this, we need to convert Definition \ref{def1} into a more practical definition.
\begin{defn} \label{def2} \textit{(\underline{Trend test})}: \textit{Let $h(t)$ be an increasing function of $t$. A characteristic $C_{t}$ of a functional stochastic process $X_{t}$ contains a trend if $\beta \neq 0$ in the regression}
\begin{equation}
C_{t}=\alpha +\beta h(t)+u_{t}, \text{ } t=1,...,T. \label{tbeta}
\end{equation}
\end{defn}
The main problem with this definition is that the trend component in $C_t$ as well as the function $h(t)$ are unknown. Therefore this definition cannot be easily implemented. If we assume that $C_t$ does not have a trend component (it is $I(0)$)\footnote{Our definition of an I(0) process follows Johansen (1995). A stochastic process $Y_{t}$ that satisfies $Y_{t}-E(Y_{t})=\sum \limits_{i=1}^{\infty }\Psi_{i}\varepsilon _{t-i}$ is called I(0) if $\sum \limits_{i=1}^{\infty }\Psi_{i}z^{i}$ converges for $\left \vert z\right \vert <1+\delta$, for some $\delta>0$, and $\sum \limits_{i=1}^{\infty }\Psi_{i}\neq 0$, where the condition $\varepsilon_{t}\thicksim $ iid(0,$\sigma ^{2})$ with $\sigma ^{2}>0$ is understood.} and $h(t)$ is linear, then we have the following well known result.
\begin{prop}\label{prop1}
Let $C_{t}=I(0)$. In the regression
\begin{equation}
C_{t}=\alpha +\beta t + u_{t}
\label{eq-reg}
\end{equation}
the OLS estimator
\begin{equation}
\widehat{\beta}=\frac{\sum \limits_{t=1}^{T}(C_{t}-\overline{C})(t-\overline{t})}{\sum \limits_{t=1}^{T}(t-\overline{t})^{2}}
\end{equation}
satisfies
\begin{equation}
T^{3/2}\widehat{\beta }=O_{p}(1)
\end{equation}
and asymptotically ($T \rightarrow \infty$)
\begin{equation*}
t_{\beta =0} \text{ is } N(0,1).
\end{equation*}
\end{prop}
In order to analyze the behavior of the t-statistic $t_{\beta =0}$ for a general trend component in $C_t$, it is very convenient to use the concept of \textit{Summability} (Berenguer-Rico and Gonzalo, 2014).
\begin{defn} \label{def3} \textit{(\underline{Order of Summability})}: \textit{ A trend $h(t)$ is said to be summable of order ``$\delta$'' $(S(\delta ))$ if there exists a slowly varying function $L(T)$,\footnote{A positive Lebesgue measurable function, L, on $(0,\infty)$ is slowly varying (in Karamata's sense) at $\infty$ if
\begin{equation}
\frac{L(\lambda n)}{L(n)}\rightarrow 1\text{ }(n\rightarrow \infty )\text{ }%
\forall \lambda >0.
\end{equation}
(See Embrechts et al., 1999, p. 564).} such that}
\begin{equation}
S_{T}=\frac{1}{T^{1+\delta }}L(T)\sum_{t=1}^{T}h(t) \label{eq_sum}
\end{equation}
\textit{is $O(1)$, but not $o(1)$.}
\end{defn}
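As a simple illustration (not taken from GG2020), a linear trend $h(t)=t$ is summable of order $\delta =1$ with $L(T)=1$, since
\begin{equation*}
S_{T}=\frac{1}{T^{2}}\sum_{t=1}^{T}t=\frac{T(T+1)}{2T^{2}}\longrightarrow \frac{1}{2},
\end{equation*}
which is $O(1)$ but not $o(1)$.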
\begin{prop}\label{prop2}
Let $C_{t}=h(t)+I(0)$ such that $h(t)$ is $ S(\delta )$ with $\delta \geq 0$, and such that the function $g(t)=h(t)t $ is $ S(\delta +1)$.
In the regression
\begin{equation}
C_{t}=\alpha +\beta t + u_{t} \label{tbeta2}
\end{equation}
the OLS $\widehat{\beta}$ estimator satisfies
\begin{equation}
T^{(1-\delta )}\widehat{\beta }=O_{p}(1).
\end{equation}
Assuming that the function $h(t)^{2}$ is $ S(1+2 \delta-\gamma)$ with $0\leq \gamma \leq1+\delta $, then
\begin{equation}
t_{\beta =0} = \left \{
\begin{array}{ll}
O_{p}(T^{\gamma/2}) & \text{for } 0\leq \gamma \leq 1 \\
O_{p}(T^{1/2}) & \text{for } 1\leq \gamma \leq 1+\delta
\end{array} \right.
\end{equation}
\end{prop}
Examples of how this proposition applies to different particular Data Generating Processes (DGP) can be found in GG2020.\\
A question of great empirical importance is how our trend test ($TT$) of Proposition \ref{prop2} behaves when $C_t=I(1)$ (accumulation of an I(0) process). Following Durlauf and Phillips (1988), $T^{1/2}\widehat{\beta}=O_{p}(1)$; however, $t_{\beta =0}$ diverges as $ T {\rightarrow } \infty$. Therefore, our $TT$ can detect the stochastic trend generated by an I(1) process. In fact, our test will detect trends generated by any of the three standard persistent processes considered in the literature (see Muller and Watson, 2008): (i) fractional or long-memory models; (ii) near-unit-root AR models; and (iii) local-level models. Let
\begin{equation}
C_{t}=\mu+z_{t},\text{ } t=1,...,T. \label{eq-sto_trend}
\end{equation}
In the first model, $z_{t}$ is a fractional process with $1/2<d<3/2$. In the second model, $z_{t}$ follows an AR, with its largest root close to unity, $\rho _{T}=1-c/T$. In the third model, $z_{t}$ is decomposed into an I(1) and an I(0) component. Its simplest format is $z_{t}=\upsilon _{t}+\epsilon _{t}$ with $\upsilon _{t}=\upsilon _{t-1}+\eta _{t}$, where $\epsilon _{t}$ is $ID(0,q\ast \sigma ^{2})$, $\eta _{t}$ is $ID(0,\sigma ^{2})$, $\sigma^{2} >0$ and both disturbances are serially and mutually independent. Note that the pure unit-root process is nested in all three models: $d=1$, $c=0$, and $q=0$.
The long-run properties implied by each of these models can be characterized using the stochastic properties of the partial sum process for $z_{t}$. The standard assumptions considered in the macroeconomics or finance literature assume the existence of a ``$\delta$,'' such that $T^{-1/2+\delta }\sum_{t=1}^{T}z_{t}\longrightarrow \sigma $ $H(.)$, where ``$\delta$'' is a model-specific constant and $H$ is a model-specific zero-mean Gaussian process with a given covariance kernel $k(r,s).$ Then, it is clear that the process $C_{t}=\mu+z_{t}$ is summable (see Berenguer-Rico and Gonzalo, 2014). This is the main reason why Proposition \ref{prop3} holds for these three persistent processes.
\begin{prop}\label{prop3}
Let $C_{t}=\mu+z_{t},t=1,...,T$, with $z_{t}$ any of the following three processes: (i) a fractional or long-memory model, with $1/2<d<3/2$; (ii) a near-unit-root AR model; or (iii) a local-level model. Furthermore, $T^{-1/2+\delta }\sum_{t=1}^{T}z_{t}\longrightarrow \sigma $ $H(.)$,
where ``$\delta$'' is a model-specific constant and $H$ is a model-specific zero-mean Gaussian process with a given covariance kernel $k(r,s).$
Then, in the LS regression
\begin{equation*}
C_{t}=\alpha+\beta t+u_{t},
\end{equation*}
the t-statistic diverges,
\begin{equation*}
t_{\beta =0}=O_{p}(T^{1/2}).
\end{equation*}
\end{prop}
After the development of the theoretical core, we are in a position to design tools to approach the empirical strategy. The following subsection describes each of them.
\subsection{Empirical tools: definitions and tests}
From Propositions \ref{prop2} and \ref{prop3}, Definition \ref{def2} can be simplified into the following testable and practical definition.
\begin{defn} \label{def4} \textit{(\underline{Practical definition 2})}: \textit{ A characteristic $C_{t}$ of a functional stochastic process $X_{t}$ contains a trend if in the LS regression,}
\begin{equation}
C_{t}=\alpha +\beta t+u_{t}, \text{ } t=1,...,T, \label{tbeta3}
\end{equation}
\textit{$\beta=0$ is rejected.}
\end{defn}
Several remarks are relevant with respect to this definition: (i) regression (\ref{tbeta3}) has to be understood as the linear LS approximation of an unknown trend function $h(t)$ (see White, 1980); (ii) the parameter $\beta$ is the plim of $\widehat{\beta}_{ols}$; (iii) if the regression (\ref{tbeta3}) is the true data-generating process, with $u_t\sim I(0)$, then the OLS $\widehat{\beta }$ estimator is asymptotically equivalent to the GLS estimator (see Grenander and Rosenblatt, 1957); (iv) in practice, in order to test $\beta=0$, it is recommended to use a robust HAC version of $t_{\beta =0}$ (see Busetti and Harvey, 2008); and (v) this test only detects the existence of a trend but not the type of trend.
For all these reasons, in the empirical applications we implement Definition \ref{def4} by estimating regression (\ref{tbeta3}) using OLS and constructing a HAC version of $t_{\beta =0}$ (Newey and West, 1987).
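As an illustration, a minimal sketch of this trend test in Python with the statsmodels package is given below; the synthetic series, variable names and HAC lag choice are assumptions of the example, not those used in the paper.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def trend_test(C, maxlags=4):
    """Trend test: regress a characteristic C_t on a constant and t and
    return the OLS slope with its HAC (Newey-West) t-statistic and
    (two-sided) p-value."""
    C = np.asarray(C, dtype=float)
    t = np.arange(1, len(C) + 1)
    X = sm.add_constant(t)
    res = sm.OLS(C, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.params[1], res.tvalues[1], res.pvalues[1]

# Example on a synthetic series: a positive drift plus I(0) noise.
rng = np.random.default_rng(1)
C_sim = 14.0 + 0.02 * np.arange(70) + rng.normal(0.0, 0.3, 70)
beta, t_stat, p_value = trend_test(C_sim)
\end{verbatim}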
These linear trends can be common across characteristics, indicating similar patterns in the time evolution of these characteristics.
\begin{defn} \label{def5} \textit{(\underline{Co-trending})}: \textit{A set of $m $ distributional characteristics ($C_{1t}$,$C_{2t}$,...,$C_{mt}$) do linearly co-trend if in the multivariate regression \\}
\begin{equation}
\begin{pmatrix}
C_{1t} \\ \vdots \\ C_{mt}
\end{pmatrix}
=
\begin{pmatrix}
\alpha _{1} \\ \vdots \\ \alpha _{m}
\end{pmatrix}
+
\begin{pmatrix}
\beta _{1} \\ \vdots \\ \beta _{m}
\end{pmatrix}
t+
\begin{pmatrix}
u_{1t} \\ \vdots \\ u_{mt}
\end{pmatrix}
\label{cotrend}
\end{equation}
\textit{ all the slopes are equal, $\beta _{1}=\beta _{2}=...=\beta _{m}.$} \footnote{This definition is slightly different from the one in Carrion-i-Silvestre and Kim (2019).}
\end{defn}
This co-trending hypothesis can be tested by a standard Wald test.
When $m=2$, an alternative linear co-trending test can be obtained from
the regression
\begin{equation*}
C_{it}-C_{jt}=\alpha +\beta t+u_{t}, \qquad i\neq j, \quad i,j=1,...,m,
\end{equation*}
by testing the null hypothesis of $\beta =0$ vs $\beta \neq 0$ using
a simple $t_{\beta =0}$ test.
Climate classification is a tool used to recognize, clarify and simplify the existent climate heterogeneity in the Globe. It also helps us to better understand the Globe’s climate and therefore to design more efficient global warming mitigation policies. The prevalent climate typology is that proposed by K\"oppen (1900) and later on modified in K\"oppen and Geiger (1930). It is an empirical classification that divides the climate into five major types, which are represented by the capital letters A (tropical zone), B (dry zone), C (temperate zone), D (continental zone), and E (polar zone). Each of these climate types except for B is defined by temperature criteria. More recent classifications can be found in the AR6 of the IPCC (2021, 2022) but all of them share the spirit of the original one of K\"oppen (1900).
The climate classification we propose in this section is also based on temperature data and it has three simple distinctive characteristics:
\begin{itemize}
\item It considers the whole temperature distribution and not only the average.
\item It has a dynamic nature: it is based on the evolution of the trend of the temperature quantiles (lower and upper).
\item It can be easily tested.
\end{itemize}
\begin{defn} \label{def6} \textit{(\underline{Warming Typology})}:
\textit{We define four types of warming processes:}
\begin{itemize}
\item \textbf{W0}: \textit{There is no trend in any of the quantiles (No warming).}
\item \textbf{W1}: \textit{All the location distributional characteristics have the same positive trend (dispersion does not contain a trend).}
\item \textbf{W2}: \textit{The Lower quantiles have a larger positive trend than the Upper quantiles (dispersion has a negative trend).}
\item \textbf{W3}: \textit{The Upper quantiles have a larger positive trend than the Lower quantiles (dispersion has a positive trend).}
\end{itemize}
\end{defn}
Climate is understood, unlike weather, as a medium and long-term phenomenon and, therefore, it is crucial to take trends into account. Notice that this typology can be used to describe macroclimate as well as microclimate locations.
Most of the literature on Global or Local warming only considers the trend behavior of the central part of the distribution (mean or median). By doing this, we are losing very useful information that can be used to describe the whole warming process. This information is considered in the other elements of the typology \textit{W1}, \textit{W2} and \textit{W3}. This typology does not say anything about the intensity of the warming process and its dynamics. Part of this intensity is captured in the following definitions of warming acceleration and warming amplification.
\begin{defn} \label{def7} \textit{(\underline{Warming Acceleration})}:
\textit{We say that there is warming acceleration in a distributional temperature characteristic $C_{t}$ between the time periods $t_1=(1,..., s)$ and $t_2=(s+1,..., T)$ if in the following two regressions:
}
\begin{equation}
C_{t}=\alpha_{1} +\beta_{1} t+u_{t}, \text{ } t=1, ...,s ,..., T,
\end{equation}
\begin{equation}
C_{t}=\alpha_{2} +\beta_{2} t+u_{t}, \text{ } t=s+1, ..., T, \label{acc}
\end{equation}
\textit{the second trend slope is larger than the first one: $\beta_{2} > \beta_{1}$.}\\
\end{defn}
In practice, we implement this definition by testing in the previous system the null hypothesis $\beta_{2}=\beta_{1}$ against the alternative $\beta_{2}>\beta_{1}$. An alternative warming acceleration test can be formed by testing for a structural break at $t=s$. Nevertheless, we prefer the approach of Definition \ref{def7} because it matches closely the existent narrative on warming acceleration in the climate literature.
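A minimal sketch of how this acceleration test can be implemented is given below; it fits the two regressions separately and compares the slopes using the HAC standard errors of each fit, treating the two fits as independent (which ignores the covariance induced by the overlapping samples), so it is only an approximation to the system-based test described above, and all names are illustrative.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from scipy import stats

def acceleration_test(C, s, maxlags=4):
    """Warming acceleration: compare the trend slope over the full
    period t = 1..T with the slope over the later period t = s+1..T.
    The two fits are treated as independent (an approximation)."""
    C = np.asarray(C, dtype=float)
    t = np.arange(1, len(C) + 1)

    def hac_slope(y, x):
        res = sm.OLS(y, sm.add_constant(x)).fit(
            cov_type="HAC", cov_kwds={"maxlags": maxlags})
        return res.params[1], res.bse[1]

    b1, se1 = hac_slope(C, t)          # full period
    b2, se2 = hac_slope(C[s:], t[s:])  # later sub-period
    t_stat = (b2 - b1) / np.sqrt(se1**2 + se2**2)
    return b1, b2, t_stat, 1.0 - stats.norm.cdf(t_stat)  # one-sided p
\end{verbatim}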
\begin{defn} \label{def8} \textit{(\underline{Warming Amplification with respect to the mean})}:
\textit{ We say that there is warming amplification in a distributional characteristic $C_{t}$ with respect to the $mean$ if in the following regression:}
\begin{equation}
C_{t}=\beta _{0}+\beta _{1} mean_{t}+\epsilon_{t} \label{ampl}
\end{equation}
\textit{the mean slope is greater than one: $\beta_{1} >1$. }
\end{defn}
When the mean, $mean_{t}$, and $C_{t}$ come from the same distribution, we name this ``inner'' warming amplification. Otherwise, the mean may come from an external environment and, in that case, we call it ``outer'' warming amplification.
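A corresponding sketch for the amplification test, where the only changes are the regressor and the null value of the slope ($\beta_{1}=1$ rather than $0$), is given below; again, the names are illustrative.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from scipy import stats

def amplification_test(C, mean_series, maxlags=4):
    """Warming amplification: regress a characteristic C_t on a mean
    temperature series and test H0: slope = 1 against H1: slope > 1,
    using the HAC standard error of the slope."""
    y = np.asarray(C, dtype=float)
    X = sm.add_constant(np.asarray(mean_series, dtype=float))
    res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    beta1, se1 = res.params[1], res.bse[1]
    t_stat = (beta1 - 1.0) / se1
    return beta1, t_stat, 1.0 - stats.norm.cdf(t_stat)  # one-sided p
\end{verbatim}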
Both concepts, acceleration and amplification, introduce a quantitative dimension to the ordinally defined classification. For example, acceleration, which has a dynamic character, allows us to observe the transition from one type of climate to another. Amplification, on the other hand, makes it possible to compare the magnitude of the trends that define each type of climate. It should be noted that, although static in nature, it can be computed recursively at different points in time.
In the previous definitions, we classify the warming process of different regions, which is crucial in the design of local mitigation and adaptation policies. But we also need to compare the different climate change processes of two regions in order to characterize climate heterogeneity independently of the type of warming they are experiencing. For this purpose, we propose the following definition, which shares the spirit of the stochastic dominance concept used in the economics-finance literature.
\begin{defn} \label{def9} \textit{(\underline{Warming Dominance (WD)})}:
\textit{We say that the temperature distributions of \textbf{Region $A$} warming dominates (\textbf{$WD$}) the temperature distributions of \textbf{Region $B$} if in the following regression
}
\begin{equation}
q_{\tau t}(A)- q_{\tau t}(B)=\alpha_{\tau} +\beta_{\tau} t +u_{\tau t} \label{wd},
\end{equation}
\textit{$\beta_{\tau}\geq 0$ for all $0<\tau<1$ and there is at least one value $\tau^{*}$ for which a strict inequality holds.}
\end{defn}
It is also possible to have only \emph{partial} \textbf{$WD$}, for instance in the lower or upper quantiles.
\section{The data\label{sec-data}}
\subsection{Spain}
The measurement of meteorological information in Spain started in the eighteenth century. However, it was not until the mid-nineteenth century that reliable and regular data became available. In Spain, there are four main sources of meteorological information: the Resumen Anual, Bolet\'{\i}n Diario, Bolet\'{\i}n Mensual de Climatolog\'{\i}a and Calendario Meteorol\'ogico. These were first published in 1866, 1893, 1940 and 1943, respectively. A detailed explanation of the different sources can be found in Carreras and Tafunell (2006).
Currently, AEMET (Agencia Estatal de Meterolog\'{\i}a) is the agency responsible for storing, managing and providing meteorological data to the public. Some of the historical publications, such as the Bolet\'{\i}n Diario and Calendario Meteorol\'ogico can be found in digital format in their respective archives for whose use it is necessary to use some kind of Optical Character Recognition (OCR) software.\footnote{$http://www.aemet.es/es/conocermas/recursos_en_linea/calendarios?n=todos$ and $https://repositorio.aemet.es/handle/20.500.11765/6290$.}
In 2015, AEMET developed AEMET OpenData, an Application Programming Interface (API REST) that allows the dissemination and reuse of Spanish meteorological and climatological information. To use it, the user needs to obtain an API key to allow access to the application. Then, either through the GUI or through a programming language such as Java or Python, the user can request data. More information about the use of the API can be found on their webpage.\footnote{$https://opendata.aemet.es/centrodedescargas/inicio$. The use of AEMET data is regulated in the following resolution $https://www.boe.es/boe/dias/2016/01/05/pdfs/BOE-A-2016-111.pdf$.}
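As an illustration, a minimal sketch of such a programmatic request with Python and the requests library is given below; the endpoint route, the station identifier and the two-step reply handling are assumptions based on the AEMET OpenData documentation and should be checked against it, and a valid API key is required.
\begin{verbatim}
import requests

API_KEY = "YOUR_API_KEY"   # obtained from AEMET OpenData
BASE = "https://opendata.aemet.es/opendata/api"
# Illustrative endpoint: daily climatological values for one station
# over a date range (route and parameters are assumptions to be
# checked against the AEMET OpenData documentation).
endpoint = (BASE + "/valores/climatologicos/diarios/datos/"
            "fechaini/1950-01-01T00:00:00UTC/"
            "fechafin/1950-12-31T23:59:59UTC/estacion/3195")

meta = requests.get(endpoint, params={"api_key": API_KEY}).json()
# AEMET replies with a short JSON envelope whose "datos" field points
# to the actual data set, which is fetched in a second request.
data = requests.get(meta["datos"]).json()
\end{verbatim}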
In this paper, we are concerned with Spanish daily station data, specifically temperature data. Each station records the minimum, maximum and average temperature as well as the amount of precipitation, measured as liters per square meter. The data period ranges from 1920 to 2019. However, in 1920 there were only 13 provinces (out of 52) that had stations available. It was not until 1965 that all the 52 provinces had at least one working station. Moreover, it is important to keep in mind that the number of stations has increased substantially, from only 14 stations in 1920 to more than 250 in 2019.
With this information in mind, we select the longest span of time that guarantees a wide sample of stations so that all the geographical areas of peninsular Spain are represented. For this reason, we decided to work with station data from 1950 to 2019. There are 30 stations whose geographical distribution is displayed in the map in Figure \ref{fig-data}. The original daily data are converted into monthly data, so that we finally work with a total of 30x12 station-month units corresponding to peninsular Spain and, consequently, we have 360 observations each year with which to construct the annual distributional characteristics.
\subsection{The Globe}
In the case of the Globe, we use the database of the Climatic Research Unit (CRU), which offers monthly and yearly data of land and sea temperatures in both hemispheres from 1850 to the present, collected from different stations around the world.\footnote{We use CRUTEM version 5.0.1.0, which can be downloaded from (https://crudata.uea.ac.uk/cru/data/temperature/). A recent revision of the methodology can be found in Jones et al. (2012).} Each station temperature is converted to an anomaly, taking 1961-1990 as the base period, and each grid-box value, on a five-degree grid, is the mean of all the station anomalies within that grid box. This database (in particular, the annual temperature of the Northern Hemisphere) has become one of the most widely used to illustrate GW from records of thermometer readings. These records form the blade of the well-known ``hockey stick'' graph, frequently used by academics and other institutions, such as the IPCC. In this paper, we prefer to base our analysis on raw station data, as in GG2020.
The database provides data from 1850 to nowadays, although due to the high variability at the beginning of the period it is customary in the literature to begin in 1880. In this work, we have selected the stations that are permanently present in the period 1950-2019 according to the concept of the station-month unit. In this way, the results are comparable with those obtained for Spain. Although there are 10,633 stations on record, the effective number fluctuates each year and there are only 2,192 stations with data for all the years in the sample period, which yields 19,284 station-month units each year (see this geographical distribution in the map in Figure \ref{fig-data}).\footnote{In the CRU data there are 115 Spanish stations. However, after removing stations not present for the whole 1880 to 2019 period, only Madrid-Retiro, Valladolid and Soria remain. Since 1950, applying the same criteria, only 30 remain.} In summary, we analyze raw global data (stations instead of grids) for the period 1950 to 2019, compute station-month units that remain all the time and with these build the annual distributional characteristics.
\begin{figure}[h!]
\begin{center}
\caption{Geographical distribution of stations}
\label{fig-data}
\subfloat[{\small Spain. Selected stations, AEMET data 1950-2019}]{
\includegraphics[scale=0.9]{Figures/stations_1950e}}\\
\subfloat[{\small The Globe. Selected stations, CRU data 1950-2019}]{
\includegraphics[scale=0.7]{Figures/Map_Globe2}}
\end{center}
\end{figure}
\section{Empirical strategy\label{sec-emp}}
In this section we apply our three-step quantitative methodology to show the existent climate heterogeneity between Spain and the Globe as well as within Spain, between Madrid and Barcelona. Because all our definitions are written in a testing format, it is straightforward to empirically apply them. First, we test for the existence of warming by testing for the existence of a trend in a given distributional characteristic. How common the trends of the different characteristics are (revealed by a co-trending test) determines the warming typology. Second, the strength of the warming process is tested by testing the hypothesis of warming acceleration and warming amplification. And third, independently of the warming typology, we determine how the warming process of Spain compares with that of the Globe as a whole (we do the same for Madrid and Barcelona). This is done by testing for warming dominance.
The results are presented according to the following steps: first, we apply our trend test (see Definition \ref{def4}) to determine the existence of local or global warming and test for any possible warming acceleration; second, we test different co-trending hypotheses to determine the type of warming of each area; thirdly, we test the warming amplification hypothesis for different quantiles with respect to the mean (of Spain as well as of the Globe): $H_{0}: \beta_{1}=1$ versus $H_{a}: \beta_{1}>1$ in (\ref{ampl}); and finally, we compare the \textit{CC} of different regions, for Spain and the Globe, and within Spain, between Madrid and Barcelona, with our warming dominance test (see \ref{wd}).\footnote{Before testing for the presence of trends in the distributional characteristics of the data, we test for the existence of unit roots. To do so, we use the well-known Augmented Dickey-Fuller test (ADF; Dickey and Fuller, 1979), where the number of lags is selected in accordance with the SBIC criterion. The results, available from the authors on request, show that the null hypothesis of a unit root is rejected for all the characteristics considered.}
\subsection{Local warming: Spain \label{sec-cross-Spain}}
The cross-sectional analysis is approached under two premises. First, we choose a period, 1950-2019, that is sufficiently long and representative of the geographical diversity of the Spanish Iberian Peninsula. Second, we work with month-station units built from daily observations to construct the annual observations of the time series object from the data supplied by the stations, following a methodology similar to that carried out for the whole planet in GG2020.\footnote{The results with daily averages are very similar. The decision to work with monthly instead of daily data in the cross-sectional approach is based on its compatibility with the data available for the Globe.} The study comprises the steps described in the previous section. The density of the data and the evolution of the characteristics are displayed, respectively, in Figures \ref{fig-density-Spain} and \ref{fig-char-1950-monthly}.
We find positive and significant trends in the \textit{mean}, \textit{max}, \textit{min} and all the quantiles. Therefore, from Definition \ref{def1}, we conclude that there exists a clear local warming (see Table \ref{tab-1950-Spain-monthly-rec-acc}).
The recursive evolution for the periods 1950-2019 and 1970-2019 shows a clear increase in the trends of the \textit{mean}, some dispersion measures and higher quantiles (see the last column of Table \ref{tab-1950-Spain-monthly-rec-acc}). More precisely, there is a significant trend acceleration in most of the distributional characteristics except the lower quantiles (below \textit{q20}). These quantiles, \textit{q05} and \textit{q10}, remain stable.
The co-trending tests for the full sample 1950-2019 show a similar evolution of the trend for all the quantiles with a constant \textit{iqr} (see Table \ref{Tab-cotrend-since1950-monthly-1950}). This indicates that in this period the warming process of Spain can be considered a \textit{W1} type. More recently, 1970-2019, the co-trending tests (see Table \ref{Tab-cotrend-since1950-monthly-1970}) indicate that the upper quantiles grow faster than the lower ones. This, together with a positive trend in the dispersion measured by the \textit{iqr}, shows that Spain has evolved from a \textit{W1} to a \textit{W3} warming type process.
Finally, no evidence of ``inner'' amplification during the period 1950-2019 is found in the lower quantiles. Regarding the upper quantiles, we found both ``inner'' and ``outer'' amplification in the second period, which supports the previous finding of a transition from type \textit{W1} to type \textit{W3} (see Table \ref{tab-amplif-Spain}).
Summing up, with our proposed tests for the evolution of the trend of the whole temperature distribution, we conclude that Spain has evolved from a \textit{W1} type to a much more dangerous \textit{W3} type. The results of acceleration and dynamic amplification reinforce the finding of this transition to type \textit{W3}.
\begin{figure}[h!]
\begin{center}
\caption{Spain annual temperature density calculated with monthly data across stations} \label{fig-density-Spain}
\includegraphics[scale=0.5]{Figures/Figure_density_Spain_1950}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Characteristics of temperature data in Spain with stations selected since 1950 (monthly data across stations, AEMET, 1950-2019)} \label{fig-char-1950-monthly}
\includegraphics[scale=0.5]{Figures/fig_quantiles_monthly_selected_stations_since_1950.png}
\end{center}
\end{figure}
\begin{table}[h!]\caption{Trend acceleration hypothesis (Spain monthly data across stations, AEMET, 1950-2019)}\label{tab-1950-Spain-monthly-rec-acc}\begin{center}\scalebox{0.5}{\begin{tabular}{l*{5}{c}} \hline \hline
& \multicolumn{2}{c}{Trend test by periods}& \multicolumn{1}{c}{Acceleration test}\\
names/periods&1950-2019& 1970-2019& 1950-2019, 1970-2019\\ \hline
mean &0.0242&0.0389&3.0294\\
& (0.0000)& (0.0000)& (0.0015) \\
max &0.0312&0.0526&2.7871\\
& (0.0000)& (0.0000)& (0.0030) \\
min &0.0289&0.0251&-0.2557\\
& (0.0000)& (0.0654)& (0.6007) \\
std &0.0036&0.0098&1.7952\\
& (0.0518)& (0.0021)& (0.0374) \\
iqr &0.0051&0.0158&1.8197\\
& (0.1793)& (0.0028)& (0.0355) \\
rank &0.0023&0.0276&1.2705\\
& (0.8249)& (0.1127)& (0.1030) \\
kur &-0.0010&-0.0018&-0.9191\\
& (0.0203)& (0.0198)& (0.8202) \\
skw &0.0011&-0.0002&-1.5989\\
& (0.0271)& (0.7423)& (0.9439) \\
q5 &0.0227&0.0206&-0.2559\\
& (0.0000)& (0.0059)& (0.6008) \\
q10 &0.0200&0.0203&0.0406\\
& (0.0000)& (0.0077)& (0.4838) \\
q20 &0.0209&0.0300&1.4158\\
& (0.0000)& (0.0000)& (0.0796) \\
q30 &0.0221&0.0333&2.0100\\
& (0.0000)& (0.0000)& (0.0232) \\
q40 &0.0213&0.0366&2.4867\\
& (0.0000)& (0.0000)& (0.0071) \\
q50 &0.0211&0.0404&3.2496\\
& (0.0000)& (0.0000)& (0.0007) \\
q60 &0.0246&0.0446&3.1147\\
& (0.0000)& (0.0000)& (0.0011) \\
q70 &0.0273&0.0478&3.3143\\
& (0.0000)& (0.0000)& (0.0006) \\
q80 &0.0275&0.0471&2.6949\\
& (0.0000)& (0.0000)& (0.0040) \\
q90 &0.0321&0.0548&3.2441\\
& (0.0000)& (0.0000)& (0.0007) \\
q95 &0.0335&0.0526&3.3568\\
& (0.0000)& (0.0000)& (0.0005) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: OLS estimates and HAC p-values (in parentheses) of the $t_{\beta=0}$ test from the regression $C_{t}=\alpha+\beta t+u_{t}$, for two different time periods. For the acceleration hypothesis we estimate the system $C_{t}=\alpha_{1}+\beta_{1} t+u_{t}$, $t=1,\dots,T$, and $C_{t}=\alpha_{2}+\beta_{2} t+u_{t}$, $t=s+1,\dots,T$, and test the null hypothesis $\beta_{2}=\beta_{1}$ against the alternative $\beta_{2}>\beta_{1}$. We show the value of the t-statistic and its HAC p-value.}
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (Spain monthly data across stations, AEMET, 1950-2019)}\label{Tab-cotrend-since1950-monthly-1950}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&13.235&0.211 \\
Lower quantiles (q05, q10, q20, q30) &0.310&0.958 \\
Medium quantiles (q40, q50, q60) &0.438&0.803 \\
Upper quantiles (q70, q80, q90, q95) &1.515&0.679 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &0.771&0.993 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &8.331&0.215 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &11.705&0.111 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.002&0.786 \\
q95-q50&0.012&0.000 \\
q95-q05 &0.011&0.096 \\
q75-q25 (iqr) &0.005&0.179 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
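As an illustration of the joint co-trending (Wald) tests reported in the top panel of the table above, the following sketch stacks several quantile series, gives each its own intercept and trend, and tests the equality of the trend coefficients. The data file and column names are assumptions, and plain OLS covariances are used here, whereas the paper relies on HAC-type covariances.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("spain_annual_characteristics.csv")      # hypothetical file, as before
sub = df[(df.year >= 1950) & (df.year <= 2019)]
quants = ["q05", "q10", "q20", "q30"]                      # the "lower quantiles" block
t = np.asarray(sub["year"]) - np.asarray(sub["year"])[0]
T, k = len(t), len(quants)

# Stack the k quantile series; each gets its own intercept and its own trend slope.
y = np.concatenate([np.asarray(sub[q]) for q in quants])
X = np.zeros((T * k, 2 * k))
for i in range(k):
    X[i * T:(i + 1) * T, i] = 1.0          # intercept of quantile i
    X[i * T:(i + 1) * T, k + i] = t        # trend slope of quantile i
fit = sm.OLS(y, X).fit()

# Wald test of the null of co-trending: all trend slopes are equal.
R = np.zeros((k - 1, 2 * k))
for i in range(k - 1):
    R[i, k] = 1.0                          # slope of the first quantile
    R[i, k + 1 + i] = -1.0                 # minus the slope of quantile i+1
print(fit.wald_test(R))
\end{verbatim}
The bottom panel (``spacing hypothesis'') simply applies the trend test of the previous sketch to the difference between two quantile series.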
\begin{table}[h!]\caption{Co-trending analysis (Spain monthly data across stations, AEMET, 1970-2019)}\label{Tab-cotrend-since1950-monthly-1970}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&38.879&0.000 \\
Lower quantiles (q05, q10, q20, q30) &3.121&0.373 \\
Medium quantiles (q40, q50, q60) &1.314&0.518 \\
Upper quantiles (q70, q80, q90, q95) &1.719&0.633 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &12.771&0.047 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &10.675&0.099 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &37.892&0.000 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &0.020&0.029 \\
q95-q50&0.012&0.050 \\
q95-q05 &0.032&0.002 \\
q75-q25 (iqr) &0.016&0.003 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Amplification hypothesis (Spain monthly data, AEMET, 1950-2019)}\label{tab-amplif-Spain}\begin{center}\scalebox{0.8}{\begin{tabular}{l*{5}{c}} \hline \hline
periods/variables&1950-2019&1970-2019&1950-2019&1970-2019\\ \hline
& \multicolumn{2}{c}{Inner}& \multicolumn{2}{c}{Outer}\\ \hline
q05&0.80&0.56&0.55&0.39\\
& (0.866)& (0.998)& (0.990)& (0.996) \\
q10&0.83&0.65&0.62&0.52\\
& (0.899)& (0.994)& (0.992)& (0.986) \\
q20&0.94&0.90&0.76&0.81\\
& (0.816)& (0.890)& (0.993)& (0.899) \\
q30&0.93&0.91&0.77&0.87\\
& (0.935)& (0.929)& (0.997)& (0.834) \\
q40&0.97&1.03&0.80&0.97\\
& (0.744)& (0.318)& (0.978)& (0.566) \\
q50&0.98&1.10&0.83&1.12\\
& (0.612)& (0.067)& (0.944)& (0.212) \\
q60&1.09&1.15&0.96&1.23\\
& (0.103)& (0.051)& (0.619)& (0.056) \\
q70&1.11&1.16&1.05&1.30\\
& (0.040)& (0.006)& (0.350)& (0.028) \\
q80&1.11&1.14&1.06&1.29\\
& (0.083)& (0.071)& (0.325)& (0.060) \\
q90&1.14&1.16&1.19&1.45\\
& (0.101)& (0.118)& (0.078)& (0.007) \\
q95&1.10&1.09&1.18&1.36\\
& (0.089)& (0.191)& (0.051)& (0.008) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: OLS estimates and HAC p-values of the t-statistic of testing $H_{0}: \beta_{i}=1$ versus $H_{a}: \beta_{i}>1$ in the regression: $C_{it}=\beta _{i0}+\beta _{i1} mean_{t}+\epsilon_{it}$. $mean$ refers to the average of the Spanish and of the Global temperature distribution for the ``inner'' and ``outer'' cases, respectively.
\end{tablenotes}\end{table}
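The amplification regressions described in the note above can be sketched as follows; this is an illustration under assumed data (hypothetical file and column names), with a normal approximation in place of the exact HAC t-test, and it is not the code behind Table \ref{tab-amplif-Spain}.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("spain_annual_characteristics.csv")      # hypothetical file, as before
sub = df[(df.year >= 1950) & (df.year <= 2019)]

def amplification_test(quantile, reference_mean, maxlags=4):
    """Regress C_it on the reference mean; one-sided test of H0: slope = 1 vs H1: slope > 1."""
    X = sm.add_constant(np.asarray(reference_mean))
    fit = sm.OLS(np.asarray(quantile), X).fit(cov_type="HAC",
                                              cov_kwds={"maxlags": maxlags})
    beta, se = fit.params[1], fit.bse[1]
    return beta, 1.0 - norm.cdf((beta - 1.0) / se)         # normal approx. to the HAC t-test

# "Inner" amplification uses the Spanish mean as reference; for the "outer" case
# one would pass the Global mean series instead.
beta, pval = amplification_test(sub["q90"], sub["mean"])
print(f"q90 inner amplification slope: {beta:.2f} (one-sided p = {pval:.4f})")
\end{verbatim}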
\clearpage
\subsection{Global warming: the Globe}
In this section, we carry out a similar analysis to that described in the previous subsection for Spain. Figures \ref{fig-density-Globe} and \ref{fig-quantiles-Globe-monthly} show the time evolution of the Global temperature densities and their different distributional characteristics from 1950 to 2019. The data in both figures are obtained from stations that report data throughout the sample period.
Table \ref{Tab-1950-Globe-monthly-acc} shows a positive trend in the mean as well as in all the quantiles. This indicates the clear existence of Global warming, which is more pronounced (larger trends) in the lower part of the distribution, as reflected in the negative trends of the dispersion measures. The warming process undergoes an acceleration in all the quantiles above \textit{q30}.
From the co-trending analysis (see Tables \ref{Tab-cotrend-Globe-monthly-1950-2019} and \ref{Tab-cotrend-Globe-monthly-1970-2019}) we can determine the type of warming process characterizing the whole Globe. Table \ref{Tab-cotrend-Globe-monthly-1950-2019} indicates that in the period 1950-2019 the Globe experienced a \textit{W2} warming type (the lower part of the temperature distribution grows faster than the middle and upper parts, implying that \textit{iqr} and \textit{std} have negative trends). Similar results hold for the period 1970-2019 (in this case only the dispersion measure \textit{std} has a negative trend).
The asymmetric amplification results shown in Table \ref{tab-amplif-Globe} reinforce the \textit{W2} typology for the whole Globe: an increase of one degree in the global mean temperature increases the lower quantiles by more than one degree. This does not occur in the upper part of the distribution. Notice that this amplification goes beyond the standard Arctic amplification (\textit{q05}), affecting also \textit{q10}, \textit{q20} and \textit{q30}.
Summing up, the results from our different proposed tests for the evolution of the trend of the whole temperature distribution indicate that the Globe can be cataloged as undergoing a type \textit{W2} warming process. This warming type may have more serious consequences for ice melting, sea level increases, permafrost, $CO_{2}$ migration, etc. than the other types.
\begin{figure}[h!]
\begin{center}
\caption{Global annual temperature density calculated with monthly data across stations} \label{fig-density-Globe}
\includegraphics[scale=0.5]{Figures/Figure_density_Globe_1950}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Characteristics of temperature data in the Globe (monthly data across stations, CRU, 1950-2019)} \label{fig-quantiles-Globe-monthly}
\includegraphics[scale=0.5]{Figures/Fig_quantiles_Globe_1950_2019}
\end{center}
\end{figure}
\begin{table}[h!]\caption{Trend acceleration hypothesis (CRU monthly data across stations, 1950-2019)}\label{Tab-1950-Globe-monthly-acc}\begin{center}\scalebox{0.5}{\begin{tabular}{l*{5}{c}} \hline \hline
& \multicolumn{2}{c}{Trend test by periods}& \multicolumn{1}{c}{Acceleration test}\\
names/periods&1950-2019& 1970-2019& 1950-2019, 1970-2019\\ \hline
mean &0.0213&0.0300&2.2023\\
& (0.0000)& (0.0000)& (0.0147) \\
max &0.0361&0.0523&1.1217\\
& (0.0000)& (0.0001)& (0.1320) \\
min &0.0423&-0.0109&0.5016\\
& (0.0000)& (0.5867)& (0.3084) \\
std &-0.0070&-0.0057&0.1776\\
& (0.0000)& (0.0570)& (0.4296) \\
iqr &-0.0067&-0.0043&0.2454\\
& (0.0435)& (0.4183)& (0.4033) \\
rank &-0.0062&0.0632&0.2181\\
& (0.5876)& (0.0005)& (0.4138) \\
kur &-0.0010&0.0001&0.0445\\
& (0.5205)& (0.9566)& (0.4823) \\
skw &0.0006&0.0003&0.0301\\
& (0.0577)& (0.5726)& (0.4880) \\
q5 &0.0404&0.0468&0.7035\\
& (0.0000)& (0.0000)& (0.2415) \\
q10 &0.0305&0.0406&0.9273\\
& (0.0000)& (0.0001)& (0.1777) \\
q20 &0.0253&0.0342&1.0156\\
& (0.0000)& (0.0000)& (0.1558) \\
q30 &0.0215&0.0280&1.2056\\
& (0.0000)& (0.0000)& (0.1150) \\
q40 &0.0192&0.0293&1.9873\\
& (0.0000)& (0.0000)& (0.0245) \\
q50 &0.0179&0.0268&1.8614\\
& (0.0000)& (0.0000)& (0.0324) \\
q60 &0.0185&0.0291&2.1971\\
& (0.0000)& (0.0000)& (0.0149) \\
q70 &0.0185&0.0288&2.5770\\
& (0.0000)& (0.0000)& (0.0055) \\
q80 &0.0160&0.0257&2.2460\\
& (0.0000)& (0.0000)& (0.0132) \\
q90 &0.0146&0.0243&2.0848\\
& (0.0005)& (0.0000)& (0.0195) \\
q95&0.0143&0.0239&1.7520\\
& (0.0001)& (0.0000)& (0.0410) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: OLS estimates and HAC p-values (in parentheses) of the $t_{\beta=0}$ test from the regression $C_{t}=\alpha+\beta t+u_{t}$, for two different time periods. For the acceleration hypothesis we estimate the system $C_{t}=\alpha_{1}+\beta_{1} t+u_{t}$, $t=1,\dots,T$, and $C_{t}=\alpha_{2}+\beta_{2} t+u_{t}$, $t=s+1,\dots,T$, and test the null hypothesis $\beta_{2}=\beta_{1}$ against the alternative $\beta_{2}>\beta_{1}$. We show the value of the t-statistic and its HAC p-value.}
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (CRU monthly data, 1950-2019)}\label{Tab-cotrend-Globe-monthly-1950-2019}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&25.143&0.005 \\
Lower quantiles (q05, q10, q20, q30) &9.545&0.023 \\
Medium quantiles (q40, q50, q60) &0.078&0.962 \\
Upper quantiles (q70, q80, q90, q95) &1.099&0.777 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &17.691&0.007 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &2.041&0.916 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &24.683&0.001 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.022&0.000 \\
q95-q50&-0.004&0.193 \\
q95-q05 &-0.026&0.000 \\
q75-q25 (iqr) &-0.007&0.043 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (CRU monthly data, 1970-2019)}\label{Tab-cotrend-Globe-monthly-1970-2019}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&18.478&0.047 \\
Lower quantiles (q05, q10, q20, q30) &5.523&0.137 \\
Medium quantiles (q40, q50, q60) &0.569&0.752 \\
Upper quantiles (q70, q80, q90, q95) &2.667&0.446 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &7.606&0.268 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &6.714&0.348 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &14.520&0.043 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.020&0.047 \\
q95-q50&-0.003&0.462 \\
q95-q05 &-0.023&0.048 \\
q75-q25 (iqr) &-0.004&0.418 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Amplification hypotheses (CRU monthly data across stations, 1950-2019)}\label{tab-amplif-Globe}\begin{center}\scalebox{0.8}{\begin{tabular}{l*{3}{c}} \hline \hline
periods/variables&1950-2019&1970-2019\\ \hline
q05&2.00&1.83\\
& (0.000)& (0.000) \\
q10&1.79&1.73\\
& (0.000)& (0.001) \\
q20&1.41&1.37\\
& (0.000)& (0.000) \\
q30&1.07&1.00\\
& (0.089)& (0.502) \\
q40&0.88&0.91\\
& (0.999)& (0.973) \\
q50&0.74&0.81\\
& (1.000)& (0.997) \\
q60&0.74&0.85\\
& (0.999)& (0.973) \\
q70&0.77&0.85\\
& (1.000)& (0.988) \\
q80&0.72&0.78\\
& (1.000)& (1.000) \\
q90&0.69&0.70\\
& (1.000)& (1.000) \\
q95&0.60&0.64\\
& (1.000)& (1.000) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: OLS estimates and HAC p-values of the t-statistic of testing $H_{0}: \beta_{i}=1$ versus $H_{a}: \beta_{i}>1$ in the regression: $C_{it}=\beta _{i0}+\beta _{i1} mean_{t}+\epsilon_{it}$. $mean$ refers to the average of the Global temperature distribution.
\end{tablenotes}\end{table}
\clearpage
\subsection{Micro-local warming: Madrid and Barcelona}
The existence of warming heterogeneity implies that, in order to be more efficient, mitigation policies have to be designed at different levels: global, country, region, etc. How local we need to go depends on the degree of micro-warming heterogeneity. In this subsection we go to the smallest level, the climate-station level, and analyze, within Spain, the warming process at two weather stations corresponding to two cities: Madrid (Retiro station) and Barcelona (Fabra station).\footnote{For Madrid and Barcelona data are available since the 1920s; nevertheless, we begin the study in 1950 for consistency with the previous analysis of Spain and the Globe.} The data provided by these stations are not cross-sectional but pure time-series data. Our methodology can easily be applied to higher-frequency time series, in this case daily data, to compute the distributional characteristics (see Figures \ref{fig-char-daily-Madrid-1950} and \ref{fig-char-daily-Barcelona-1950}).\footnote{See the application to Central England in GG2020 and, in Gadea and Gonzalo (2022), to Madrid, Zaragoza and Oxford.}
The results are shown in the Appendix. These two stations, Madrid-Retiro and Barcelona-Fabra, clearly experience two different types of warming. First, there is evidence of micro-local warming, understood as the presence of significant and positive trends, in all the important temperature distributional characteristics of both stations. The acceleration phenomenon is also clearly detected; in other words, the warming increases as time passes (see Tables \ref{Tab-1950-Madrid-daily-rec-acc} and \ref{Tab-1950-Barcelona-daily-rec-acc}). Second, from the co-trending tests (Tables \ref{Tab-cotrend-Madrid-daily-1950}-\ref{Tab-cotrend-Madrid-daily-1970} and \ref{Tab-cotrend-Barcelona-daily-1950}-\ref{Tab-cotrend-Barcelona-daily-1970}), it can be concluded that the warming process of Madrid-Retiro is of type \textit{W3} while that of Barcelona-Fabra is of type \textit{W1}. In both cases the warming typology is stable across both sample periods (1950-2019 and 1970-2019). Third, as expected, Madrid-Retiro presents ``inner'' and ``outer'' amplification for the upper quantiles, while Barcelona-Fabra does so only for the central part of its temperature distribution (see Tables \ref{Tab-amplif-Madrid-1950} and \ref{Tab-amplif-Barcelona-1950}).
Summing up, even within Spain we find evidence of warming heterogeneity. While Madrid (continental Mediterranean climate) shows a pattern similar to that of peninsular Spain in 1970-2019, type \textit{W3}, Barcelona (Mediterranean coastal climate) maintains a \textit{W1} typology. Thus, there are two different warming processes, which require mitigation policies at the country level as well as at the very local level.
\section{Comparing results}
The goal of this section is to show the existence of climate heterogeneity by comparing the results obtained from applying our three-step methodology to different regions. These results are summarized in Table \ref{Tab-summary}. It is clear that there is distributional warming in all the analyzed areas, but this warming follows different patterns, and sometimes the warming type is not even stable; in the case of Spain, it depends on the period under consideration. Figure \ref{fig-comp-Globe-Spain-Madrid-Barcelona} captures graphically the different trend behavior and intensity of the distributional characteristics by region (Spain, the Globe, Madrid and Barcelona).\footnote{The analysis of other characteristics, such as the third and fourth order moments, can also contribute to the characterization of the temperature distributions. In the case of Spain, the kurtosis is always negative, with a mean value of -0.8 and a significant negative trend, which means that we are dealing with a platykurtic distribution with thinner tails than the Normal, a shape that becomes more pronounced over time. However, it is not possible to draw conclusions about symmetry given its high variability over time. Conversely, the temperature distribution of the Globe is clearly leptokurtic, with an average kurtosis of 0.9 and a negative but not significant trend. The global temperature observations are therefore more concentrated around the mean and their tails are thicker than in a Normal distribution. The skewness is clearly negative, although a significant trend points to a reduction of the negative skewness. } The graphical results in this figure coincide with the results of the warming typology tests shown in Table \ref{Tab-summary}.
The central part of Table \ref{Tab-summary} shows that warming acceleration is detected in all the locations. This acceleration is more general in Spain than in the Globe (see also the heatmaps in Figure \ref{fig-comp-Globe-Spain-heatmap}) and in Barcelona than in Madrid. Apart from these differences, the acceleration shares certain similarities across regions. This is not the case for the warming amplification, which is clearly asymmetric. Spain suffers an amplification in the upper quantiles while the Globe does so in the lower ones. Notice that the latter amplification goes beyond the standard results found in the literature for the Arctic region (\textit{q05}): we also detect amplification for the quantiles \textit{q10}-\textit{q30}. In the case of Madrid and Barcelona, Madrid suffers a wider warming amplification than Barcelona.
The results of the first two steps of our methodology are obtained region by region (Spain, the Globe, Madrid and Barcelona). It is in the last step, via the warming dominance test (see the numerical results in Table \ref{tab-WD}), that we compare one region directly with another. Warming in Spain dominates that of the Globe in all the quantiles except the lower \textit{q05}.\footnote{A more detailed analysis of the warming process suffered in the Arctic region can be found in Gadea and Gonzalo (2021).} This supports the idea, held by European institutions and gathered in international reports, that climate change is more intense in the Iberian Peninsula. Warming in Madrid dominates that of Barcelona in the upper quantiles, while the reverse is the case in the lower quantiles. This latter result coincides with the idea that regions close to the sea have milder upper temperatures.
Further research (beyond the scope of this paper) will go in the direction of finding the possible causes behind the warming types \textit{W1}, \textit{W2} and \textit{W3}. Following the literature on diurnal temperature asymmetry (Diurnal Temperature Range, $DTR= T_{max}-T_{min}$), we can suggest cloud coverage (Karl et al. 1993) and the planetary boundary layer (see Davy et al. 2017) as possible causes for \textit{W2}, and the process of desertification (see Karl et al. 1993) for \textit{W3}.
Summarizing, in this section we describe, measure and test the existence of warming heterogeneity in different regions of the planet. It is important to note that these extensive results cannot be obtained by the standard analysis of the average temperature.
\begin{table}[h!]\caption{Warming dominance}\label{tab-WD}\begin{center}\scalebox{1}{\begin{tabular}{lcccc} \hline \hline
& \multicolumn{2}{c}{Spain-Globe}& \multicolumn{2}{c}{Madrid-Barcelona}\\
Quantile&$\beta$&t-ratio&$\beta$&t-ratio\\ \hline
q05 &-0.018&(-2.770)&-0.013&(-3.730)\\
q10 &-0.010&(-1.504)&-0.013&(-4.215)\\
q20 &-0.004&(-0.950)&-0.012&(-2.988)\\
q30 &0.001&(0.180)&-0.013&(-4.164)\\
q40 &0.002&(0.788)&-0.009&(-2.909)\\
q50 &0.003&(1.025)&-0.003&(-0.701)\\
q60 &0.006&(1.933)&-0.001&(-0.219)\\
q70 &0.009&(3.266)&0.006&(1.252)\\
q80 &0.012&(3.203)&0.016&(3.331)\\
q90 &0.017&(3.862)&0.010&(1.869)\\
q95 &0.019&(4.930)&0.014&(1.993)\\
\hline \hline
\end{tabular}
}\end{center}
\begin{tablenotes}
\textit{Note}: The slopes $\beta_{\tau}$ (with t-statistics in parentheses) of the regression \begin{equation*}
q_{\tau t}(A)- q_{\tau t}(B)=\alpha_{\tau} +\beta_{\tau} t +u_{\tau t}.
\end{equation*}
In the first pair of columns \textit{A}=Spain and \textit{B}=Globe; in the second pair \textit{A}=Madrid and \textit{B}=Barcelona.
\end{tablenotes}
\end{table}
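A minimal sketch of the warming dominance regression in the note above is given next; the two annual quantile files and their column names are assumptions, and the series are simply aligned on the year column before regressing the quantile difference on a linear trend.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.api as sm

spain = pd.read_csv("spain_annual_characteristics.csv")    # hypothetical files
globe = pd.read_csv("globe_annual_characteristics.csv")
merged = spain.merge(globe, on="year", suffixes=("_A", "_B"))

def dominance_test(qA, qB, years, maxlags=4):
    """Slope and t-ratio of q_A - q_B regressed on a linear trend (HAC standard errors)."""
    diff = np.asarray(qA) - np.asarray(qB)
    t = np.asarray(years) - np.asarray(years)[0]
    fit = sm.OLS(diff, sm.add_constant(t)).fit(cov_type="HAC",
                                               cov_kwds={"maxlags": maxlags})
    return fit.params[1], fit.tvalues[1]

beta, tstat = dominance_test(merged["q95_A"], merged["q95_B"], merged["year"])
print(f"q95 Spain-Globe dominance: beta = {beta:.3f}, t = {tstat:.2f}")
\end{verbatim}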
\begin{table}[h!]
\caption{Summary of results}\label{Tab-summary}\begin{center}\scalebox{0.65}{
\begin{tabular}{c|c|c|c|c|c|c} \\ \hline \hline
\multicolumn{7}{c}{Cross analysis} \\ \hline
Sample & Period & Type & Acceleration & \multicolumn{2}{c}{Amplification} & Dominance \\ \hline
& & & & Inner & Outer & \\
Spain & & & & & & \\
& 1950-2019 & \textit{W1} & [\textit{mean, std, iqr, rank, } & [\textit{q70, q80, q95]} & [\textit{q90, q95]} & [q60,..., q95] \\
& & & \textit{q20,..., q95]} & & \\
& 1970-2019 & \textit{W3} & & [\textit{q50,..., q80]} & [\textit{q60,..., q95]} & \\
The Globe & & & & & & \\
& 1950-2019 & \textit{W2} & [\textit{mean,} & [\textit{q05,..., q30]} & & [\textit{q05]} \\
& & & \textit{q40,..., q95]} & & & \\
& 1970-2019 & \textit{W2} & & [\textit{q05,..., q20]} & & \\
& & & & & & \\ \hline
\multicolumn{7}{c}{Time analysis} \\ \hline
Sample & Period & Type & Acceleration & \multicolumn{2}{c}{Amplification} & Dominance \\ \hline
Madrid, Retiro Station & & & & & & \\
& 1950-2019 & \textit{W3} & [\textit{mean, std, rank, } & [\textit{q50,..., q95]} & [\textit{ q40,..., q95]} & [q80,..., q95] \\
& & & \textit{q40, ..., q95]} & & & \\
& 1970-2019 & \textit{W3} & & [\textit{q50,..., q95]} & [\textit{q40,..., q95]} & \\
Barcelona, Fabra Station & & & & & & \\
& 1950-2019 & \textit{W1} & [\textit{mean, } & \textit{-} & [\textit{q30,..., q90]} & \textit{[q05,..., q40]} \\
& & & \textit{q20,..., q95]} & & & \\
& 1970-2019 & \textit{W1} & & [\textit{q60, q70]} & [\textit{q30,..., q70]} & \\
& & & & & & \\ \hline \hline
\end{tabular}
}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: For Spain and the Globe we build characteristics from station-months units. For Madrid and Barcelona we use daily frequency time series. A significance level of 10\% is considered for all tests and characteristics.}
\end{tablenotes}
\end{table}
\begin{figure}[h!]
\begin{center}
\caption{Trend evolution of different temperature distributional characteristics} \label{fig-comp-Globe-Spain-Madrid-Barcelona}
\includegraphics[scale=0.5]{Figures/Figure_comp_Globe_Spain_Madrid_Barcelona}
\end{center}
\begin{figurenotes}
\textit{Note}: The bars represent the intensity of the trends found in each characteristic measured through the value of the $\beta$-coefficient estimated in the regression $C_{t}=\alpha+\beta t+u_{t}$.
\end{figurenotes}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Comparing heatmaps}
\label{fig-comp-Globe-Spain-heatmap}
\subfloat[{\small Globe}]{
\includegraphics[scale=0.4]{Figures/Heatmap_comp_Globe_1950}}\\
\subfloat[{\small Spain}]{
\includegraphics[scale=0.4]{Figures/Heatmap_comp_Spain_1950}}\\
\end{center}
\begin{figurenotes}
\textit{Note}: The color scale on the right side of the figure shows the intensity of the trend, based on the value of the $\beta$-coefficient estimated in the regression $C_{t}=\alpha+\beta t+u_{t}$.
\end{figurenotes}
\end{figure}
\clearpage
\section{Conclusions}
The existence of Global Warming is very well documented in all the scientific reports published by the IPCC. In the latest one, the AR6 report (2022), special attention is dedicated to climate change heterogeneity (regional climate). Our paper presents a new quantitative methodology, based on the evolution of the trend of the whole temperature distribution and not only of the average, to characterize, measure and test the existence of such warming heterogeneity.
It is found that the local warming experienced by Spain (one of the most climatically diverse areas) is very different from that of the Globe as a whole. In Spain, the upper temperature quantiles tend to increase more than the lower ones, while in the Globe just the opposite occurs. In both cases the warming process is accelerating over time. Both regions suffer an amplification effect of an asymmetric nature: there is warming amplification in the lower quantiles of the Globe temperature (beyond the standard well-known results of the Arctic zone) and in the upper ones of Spain. Overall, warming in Spain dominates that of the Globe in all the quantiles except the lower \textit{q05}. This places Spain in a very difficult warming situation compared to the Globe. Such a situation requires stronger mitigation-adaptation policies. For this reason, future climate agreements should take into consideration the whole temperature distribution and not only the average.
Any time a novel methodology is proposed, new research issues emerge for future investigation. Among those which have been left out of this paper (some are part of our current research agenda), three points stand out as important:
\begin{itemize}
\item There is a clear need for a new non-uniform causal-effect climate change analysis beyond the standard causality in mean.
\item In order to improve efficiency, mitigation-adaptation policies should be designed containing a common global component and an idiosyncratic regional element.
\item The relation between warming heterogeneity and public awareness of climate change deserves to be analyzed.
\end{itemize}
\section{Introduction}
Throughout this paper, we consider simple and connected graphs. A simple connected graph $G=(V,E)$ consists of the vertex set $V(G)=\{v_{1},v_{2},\ldots,v_{n}\}$ and the edge set $E(G)$. The \textit{order} and \textit{size} of $G$ are $|V(G)|=n$ and $|E(G)|=m$, respectively. The \textit{degree} of a vertex $v$, denoted by $d_{G}(v)$ (we simply write $d_v$), is the number of edges incident on the vertex $v$. Further, $N_G (v)$ denotes the set of all vertices that are adjacent to $v$ in $G$ and $\overline{G}$ denotes the complement of the graph $G$. A vertex $u\in V(G)$ is called a pendant vertex if $d_{G}(u) =1$. For other standard definitions, we refer to \cite{5R8,5R9}.\\
\indent If $A$ is the adjacency matrix and $D(G)=diag(d_1 ,d_2 ,\dots,d_n)$ is the diagonal matrix of vertex degrees of $G$, the \textit{Laplacian matrix} of $G$ is defined as $ L(G)=D(G)-A$. By the spectrum of $G$, we mean the spectrum of its adjacency matrix, and it consists of the eigenvalues $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$. The Laplacian spectrum of $G$ is the spectrum of its Laplacian matrix, and is denoted by $\mu_1 (G) \geq \mu_2 (G) \geq \dots \geq \mu_n (G) =0$. For any interval $I$, let $m_{L(G)}I$ be the number of Laplacian eigenvalues of $G$ that lie in the interval $I$. Also, let $m_{L(G)}(\mu_i (G) )$ denote the multiplicity of the Laplacian eigenvalue $\mu_i (G)$.\\
\indent In $G$, the \textit{distance} between the two vertices $u,v\in V(G),$ denoted by $d_{uv}$, is defined as the length of a shortest path between $u$ and $v$. The \textit{diameter} of $G$, denoted by $d$, is the maximum distance between any two vertices of $G.$ The \textit{distance matrix} of $G$, denoted by $D(G)$, is defined as $D(G)=(d_{uv})_{u,v\in V(G)}$.
The \textit{transmission} $Tr_{G}(v)$
(we will write $Tr(v)$ if the graph $G$ is understood) of a vertex $v$ is defined as the sum of the distances from $v$ to all other vertices in $G$, that is, $Tr_{G}(v)=\sum\limits_{u\in V(G)}d_{uv}.$\\
\indent Let $Tr(G)=diag (Tr(v_1),Tr(v_2),\ldots,Tr(v_n)) $ be the diagonal matrix of vertex transmissions of $G$. Aouchiche and Hansen \cite{5R1} defined the \textit{distance Laplacian matrix} of a connected graph as $D^L(G)=Tr(G)-D(G)$ (briefly written as $D^{L}$). The eigenvalues of $D^{L}(G)$ are called the distance Laplacian eigenvalues of $G$. Since $ D^L(G) $ is a real symmetric positive semi-definite matrix, we denote its eigenvalues by $\partial_{i}^{L}(G)$ and order them as $0=\partial_{n}^{L}(G)\leq \partial_{n-1}^{L}(G)\leq \dots\leq \partial_{1}^{L}(G)$. The distance Laplacian eigenvalues are referred to as the $D^L$-eigenvalues of $G$ whenever the graph $G$ is understood. Some recent work can be seen in \cite{pk1,pk2}. For any interval $I$, $m_{D^L (G)}I$ represents the number of distance Laplacian eigenvalues of $G$ that lie in the interval $I$. Also, $m_{D^L (G)}(\partial_{i}^{L}(G) )$ denotes the multiplicity of the distance Laplacian eigenvalue $ \partial_{i}^{L}(G) $. The multiset of eigenvalues of $ D^L(G)$ is called the \textit{distance Laplacian spectrum} of $G$. If there are only $k$ distinct distance Laplacian eigenvalues of $G$, say $\partial_{1}^{L}(G),\partial_{2}^{L}(G),\dots,\partial_{k}^{L}(G)$, with corresponding multiplicities $n_1 ,n_2 ,\dots, n_k$, then we convey this information in matrix form as\\
$$\begin{pmatrix}
\partial_{1}^{L}(G) & \partial_{2}^{L}(G) & \dots & \partial_{k}^{L}(G)\\
n_1 & n_2 & \dots & n_k\\
\end{pmatrix}.$$
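For readers who wish to experiment numerically, the following small sketch (an illustration only, relying on the \texttt{networkx} and \texttt{numpy} libraries) builds $D^{L}(G)=Tr(G)-D(G)$ for a small connected graph and computes its spectrum.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.path_graph(4)                           # any connected graph
D = nx.floyd_warshall_numpy(G)                 # distance matrix
Tr = np.diag(D.sum(axis=1))                    # diagonal matrix of transmissions
DL = Tr - D                                    # distance Laplacian matrix
eigs = np.sort(np.linalg.eigvalsh(DL))[::-1]   # eigenvalues in non-increasing order
print(np.round(eigs, 4))                       # the smallest eigenvalue is 0
\end{verbatim}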
\indent We denote by $K_n$ the complete graph of order $n$ and by $K_{t_1 ,\dots, t_k}$ the complete multipartite graph with order of parts $t_1 ,\dots, t_k$. The star graph of order $n$ is denoted by $S_n$. Further, $SK_{n,\alpha}$ denotes the complete split graph, that is, the complement of the disjoint union of a clique $K_\alpha$ and $n-\alpha$ isolated vertices.
For two disjoint graphs $G$ and $H$ of order $n_1$ and $n_2$, respectively, the \textit{corona graph} $GoH$ is the graph obtained by taking one copy of $G$ and $n_1$ copies of $H$, and then joining the \textit{i}th vertex of $G$ to every vertex in the \textit{i}th copy of $H$, for all $ 1\leq i\leq n_1$.\\
\indent In a graph $G$, the subset $M\subseteq V(G)$ is called an \textit{independent set} if no two vertices of $M$ are adjacent. The \textit{independence number} of $G$, denoted by $\alpha(G)$, is the cardinality of a largest independent set of $G$. A set $M\subseteq V(G)$ is \textit{dominating} if every $v\in V(G) \setminus M$ is adjacent to some member of $M$. The \textit{domination number} $\gamma(G)$ is the minimum size of a dominating set.\\
\indent The \textit{chromatic number} of a graph $G$ is the minimum number of colors required to color the vertices of $G$ such that no two adjacent vertices get the same color. It is denoted by $\chi(G)$. The set of all vertices with the same color is called a \textit{color class}. \\
\indent The distribution of Laplacian eigenvalues of a graph $G$ in relation to various graph parameters of $G$ has been studied extensively. Grone and Merris \cite{5R10} and Merris \cite{5R11} obtained bounds for $m_{L(G)}[0,1) $ and $m_{L(G)}[0,2) $. Guo and Wang \cite{5R12} showed that if $G$ is a connected graph with matching number $\nu(G)$, then $m_{L(G)}(2,n]>\nu(G)$, where $n>2\nu(G)$. Some work in this direction can be seen in \cite{cjt}. Recently, Ahanjideh et al. \cite{5R0} obtained bounds for $m_{L(G)}I $ in terms of structural parameters of $G$. In particular, they showed that $m_{L(G)}(n -\alpha(G), n] \leq n -\alpha(G)$ and $m_{L(G)}(n-d(G)+3, n]\leq n -d(G) -1$, where $\alpha(G)$ and $d(G)$ denote the independence number and the diameter of $G$, respectively. The distribution of the distance Laplacian eigenvalues of a graph $G$ with respect to its structural parameters has not received the same attention, and our investigation in this manuscript is an attempt in that direction. \\
\indent The rest of the paper is organized as follows. In Section 2, we study the distribution of the distance Laplacian eigenvalues of $G$ in relation to the chromatic number $\chi$ and the number of pendant vertices. We show that $m_{D^{L}(G) }[n,n+2)\leq \chi-1$ and that the inequality is sharp. We also prove that $m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1 $, where $C_{\overline{G}}$ is the number of components of $\overline{G}$, and discuss some cases where the bound is best possible. In addition, we prove that $m_{D^{L} (G )}[n,n+p)\leq n-p$, where $p\geq 1$ is the number of pendant vertices. In Section 3, we determine the distribution of the distance Laplacian eigenvalues of $G$ in terms of the independence number $\alpha(G)$ and the diameter $d$. In particular, we show that $m_{D^{L} (G)}[n,n+\alpha(G))\leq n-\alpha(G)$ and that the inequality is sharp. We show that $m_{D^{L}(G)}[0,dn]\geq d+1$. We characterize the graphs having diameter $d\leq 2$ satisfying $m_{D^{L}(G) } (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$. In Section 4, we propose some research problems.
\section{Distribution of distance Laplacian eigenvalues, chromatic number and pendant vertices }
For a graph $G$ with $n$ vertices, let $Tr_{max}(G)=\max\{Tr(v):v\in V(G)\}$. Whenever the graph $G$ is understood, we will write $Tr_{max}$ in place of $Tr_{max}(G)$. We have the following important result from matrix theory.
\begin{lemma}\label{L2}\emph {\cite{5R3}} Let $M=(m_{ij})$ be an $n\times n$ complex matrix having $l_1 ,l_2 ,\dots,l_p$ as its distinct eigenvalues. Then
$$\{l_1 ,l_2 ,\dots,l_p\}\subset \bigcup\limits_{i=1}^{n}\Big \{z:|z-m_{ii}|\leq \sum\limits_{j\neq i}|m_{ij}|\Big\}.$$
\end{lemma}
\indent By applying Lemma \ref{L2} to the distance Laplacian matrix of a graph $G$ with $n$ vertices, and noting that the $i$-th Gershgorin disc of $D^L(G)$ is centred at $Tr(v_i)$ and has radius $\sum_{j\neq i}d_{v_i v_j}=Tr(v_i)$, we get
\begin{equation}
\partial^L_{1}(G)\leq 2Tr_{max}.
\end{equation}
The following fact about distance Laplacian eigenvalues will be used in the sequel.\\
\textbf{Fact 1.} Let $G$ be a connected graph of order $n$ and having distance Laplacian eigenvalues in the order $\partial^L_{1}(G)\geq \partial^L_{2}(G)\geq \dots \geq \partial^L_{n}(G)$. Then,\\
\hspace*{25mm} $\partial^L_{n}(G)=0$ and $\partial^L_{i}(G)\geq n$ for all $i=1,2,\dots,n-1.$\\\\
We recall the following important results.
\begin{theorem}\label{T7}
(Cauchy Interlacing Theorem). Let $M$ be a real symmetric matrix of order $n$, and let $A$ be a principal submatrix of $M$ with order $s\leq n$. Then $$\lambda_i (M)\geq \lambda_i (A) \geq \lambda_{i+n-s} (M)\hspace{1cm}(1\leq i\leq s).$$
\end{theorem}
\begin{lemma} \label{L1}\emph {\cite{5R1}} Let $G$ be a connected graph with $n$ vertices and $m$ edges, where $m\geq n$. Let $G^*$ be the connected graph obtained from $G$ by deleting an edge. Let $\partial^L_1 \geq \partial^L_2 \geq \dots\geq \partial^L_n$ and ${\partial^*_1}^L \geq {\partial^*_2}^L \geq \dots\geq {\partial^*_n}^L$ be the distance Laplacian spectra of $G$ and $G^*$, respectively. Then ${\partial^*_i}^L \geq \partial^L_i $ for all $i=1,\dots,n$.
\end{lemma}
\begin{lemma}\label{L8} \emph{\cite{5R7}} Let $t_{1},t_{2},\dots,t_{k}$ and $n$ be integers such that $t_{1}+t_{2}+\dots+t_{k}=n$ and $t_{i}\geq 1$ for $i=1,2,\dots,k$. Let $p=|\{i:t_{i}\geq 2\}|$. The distance Laplacian spectrum of the complete $k$-partite graph $K_{t_{1},t_{2},\dots,t_{k}}$ is $\Big((n+t_{1})^{(t_{1}-1)},\dots,(n+t_{p})^{(t_{p}-1)},n^{(k-1)},0\Big)$.
\end{lemma}
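As a quick numerical sanity check of Lemma \ref{L8} (an illustration, not part of the proofs), the distance Laplacian spectrum of $K_{3,2,1}$, where $n=6$, $t_1=3$, $t_2=2$ and $t_3=1$, should be $\big((6+3)^{(2)},(6+2)^{(1)},6^{(2)},0\big)$, that is, $\{9,9,8,6,6,0\}$.
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(3, 2, 1)
D = nx.floyd_warshall_numpy(G)
DL = np.diag(D.sum(axis=1)) - D
print(np.round(np.sort(np.linalg.eigvalsh(DL))[::-1], 4))   # [9. 9. 8. 6. 6. 0.]
\end{verbatim}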
\begin{lemma}\label{L3} \emph {\cite{5R1}} Let $G$ be a connected graph with $n$ vertices. Then $\partial^L_{n-1}\geq n$ with equality if and only if $\overline{G}$ is disconnected. Furthermore, the multiplicity of $n$ as an eigenvalue of $D^L (G)$ is one less than the number of components of $\overline{G}$.
\end{lemma}
First we obtain an upper bound for $m_{D^{L} (G)} I$, where $I$ is the interval $[n,n+2)$, in terms of the chromatic number $\chi$ of $G$.
\begin{theorem} \label{T8} Let $G$ be a connected graph of order $n$ having chromatic number $\chi$. Then $$m_{D^{L} (G)} [n,n+2 ) \leq \chi-1.$$ The inequality is sharp, as shown by all complete multipartite graphs.
\end{theorem}
\noindent {\bf Proof.} Let $t_1 ,t_2 ,\dots,t_\chi $ be $\chi$ positive integers such that $t_1 +t_2 +\dots+t_{\chi} =n$ and let these numbers be the cardinalities of the $\chi$ chromatic classes of $G$. We order these numbers as $t_1 \geq t_2 \geq \dots\geq t_{\chi} $. Thus $G$ can be considered as a spanning subgraph of the complete multipartite graph $H=K_{t_1 ,t_2 ,\dots,t_{\chi}}$ with $t_1 \geq t_2 \geq \dots\geq t_{\chi} $ as the cardinalities of its partite classes. Using Lemma \ref{L8}, we see that $m_{D^{L} (H )} [n,n+2 ) = \chi-1$. By Lemma \ref{L1} and Fact 1, we have $ m_{D^{L} (G )} [n,n+2 ) \leq m_{D^{L} (H )} [n,n+2 ) = \chi-1$, proving the inequality. Using Lemma \ref{L8}, we see that the equality holds for all complete multipartite graphs. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
As a consequence of Theorem \ref{T8}, we have the following observation.
\begin{corollary} \label{C2} Let $G$ be a connected graph of order $n$ having chromatic number $\chi$. Then $$ m_{D^{L} (G )} [n+2,2Tr_{max} ]\geq n- \chi.$$ The inequality is sharp, as shown by all complete multipartite graphs.
\end{corollary}
\noindent {\bf Proof.} By using Fact 1 and Inequality (2.1), we get
\begin{align*}
&m_{D^{L} (G )} [n,n+2 )+ m_{D^{L} (G )}[n+2,2Tr_{max} ] =n-1, \\&
or ~~~~~ \chi-1+ m_{D^{L} (G )}[n+2,2Tr_{max} ] \geq n-1, \\&
or ~~~~~~ m_{D^{L} (G )}[n+2,2Tr_{max} ] \geq n- \chi.
\end{align*}
Therefore, the inequality is established. The remaining part of the proof follows from Theorem \ref{T8}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
In the following theorem, we characterize the unique graph with chromatic classes of the same cardinality having $n-1$ eigenvalues in the interval $\big[n,n+\frac{n}{\chi}\big]$.
\begin{theorem} \label{T9} Let $G$ be a connected graph of order $n$ and having the chromatic number $\chi$. If the chromatic classes are of the same cardinality, then
$$ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]\leq n-1$$ with equality if and only if $G\cong K_{\frac{n}{\chi},\dots,\frac{n}{\chi}}$.
\end{theorem}
\noindent {\bf Proof.} Using Fact 1, we get the required inequality. Now, we will show that the equality holds for the graph $H= K_{\frac{n}{\chi},\dots,\frac{n}{\chi}}$. Using Lemma \ref{L8}, we have the distance Laplacian spectrum of $H$ as
$$\begin{pmatrix}
0 & n & n+\frac{n}{\chi} \\
1 & \chi-1 & n-\chi \\
\end{pmatrix},$$
which clearly shows that the equality holds for the graph $H$. To complete the proof, we will show that if $G\ncong H$, then $ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]< n-1$. Since the chromatic classes are of the same cardinality, we see that $G$ has to be a spanning subgraph of $H$ and $n=s\chi$ for some integer $s$, so that $s=\frac{n}{\chi}$. In $H$, let $e=\{u,v\}$ be an edge between the vertices $u$ and $v$. Using Lemma \ref{L1}, it is sufficient to take $G=H-e$. In $G$, we see that $Tr(u)=Tr(v)=n+s-1$. Let $A$ be the principal submatrix of $D^L (G)$ corresponding to the vertices $u$ and $v$. Then $A$ is given by
\begin{equation*}
A=
\begin{bmatrix}
n+s-1 & -2 \\
-2 & n+s-1
\end{bmatrix}.
\end{equation*}
Let $c(x)$ be the characteristic polynomial of $A$. Then $c(x)=x^2 -2(n+s-1)x+{(n+s-1)}^2-4$. Let $x_1 $ and $x_2$ be the roots of $c(x)$ with $x_1 \geq x_2$. It can be easily seen that $x_1=n+s+1$. Using Theorem \ref{T7}, we have $\partial^L _1 (G)\geq x_1 =n+s+1>n+s=n+\frac{n}{\chi}$. Thus, $ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]< n-1$ and the proof is complete. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
Now, we obtain an upper bound for the number of distance Laplacian eigenvalues which fall in the interval $\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)$.
\begin{theorem}\label{TN1}Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. Then,
\begin{equation}
m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1
\end{equation}
where $C_{\overline{G}}$ is the number of components in $\overline{G}$. The bound is best possible for $\chi=2$ (when $n$ is odd) and $\chi=n-1$ as shown by $K_{m+1, m}$, where $n=2m+1$, and $K_{2,\underbrace{1,1,\dots,1}_{n-2}} $, respectively.
\end{theorem}
\noindent {\bf Proof.} Let $n_1 \geq n_2 \geq \dots\geq n_{\chi} $ be $\chi$ positive integers in that order such that $n_1 +n_2 +\dots+n_{\chi} =n$ and let these numbers be the cardinalities of $\chi$ partite classes of $G$. Clearly, $G$ can be considered as a spanning subgraph of the complete multipartite graph $H=K_{n_1 ,n_2 ,\dots,n_{\chi}}$. Using Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (H)=n+n_1, ~~~~~~ \text{for all} ~ 1\leq i\leq n_1 -1.$$
As $n_1$ is the largest among the cardinalities of the chromatic classes, it is at least equal to the average, that is,
$n_1 \geq \frac{n}{\chi}$. Also, $n_1$ is an integer, therefore $n_1 \geq \left\lceil\frac{n}{\chi}\right\rceil$. Using this fact in the above inequality, we get
$$
\partial^L _i (G)\geq n+\left\lceil\frac{n}{\chi}\right\rceil ~~~~~~ \text{ for all} ~ 1\leq i\leq n_1 -1.
$$
Thus, there are at least $n_1 -1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $n+\left\lceil\frac{n}{\chi}\right\rceil$.
Also, from Lemma \ref{L3}, we see that $n$ is a distance Laplacian eigenvalue of $G$ with multiplicity exactly $C_{\overline{G}}-1$. Using these observations with Fact 1, we get
\begin{align*}
m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)& \leq n- (n_1 -1)-(C_{\overline{G}}-1)-1\\
& = n-n_1 -C_{\overline{G}}+1\\
& \leq n-\left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1,
\end{align*}
proving the required inequality. \\
Let $G^*=K_{2,\underbrace{1,1,\dots,1}_{n-2}} $. It is easy to see that $\left\lceil\frac{n}{n-1}\right\rceil=2$. Also, the complement of $G^*$ has exactly $n-1$ components. By Lemma \ref{L8}, the distance Laplacian spectrum of $G^*$ is given as follows
$$\begin{pmatrix}
0 & n & n+2 \\
1 & n-2 & 1 \\
\end{pmatrix}.$$
Putting all these observations in Inequality (2.2), we see that the equality holds for $G^*$ which shows that the bound is best possible when $\chi=n-1$.
Let $G^{**}=K_{m+1, m}$, where $n=2m+1$. In this case, we see that $\left\lceil\frac{n}{2}\right\rceil=m+1=\frac{n+1}{2}$ and the complement of $G^{**}$ has exactly $2$ components. By Lemma \ref{L8}, we observe that the distance Laplacian spectrum of $G^{**}$ is given as follows
$$\begin{pmatrix}
0 & n & \frac{3n+1}{2} & \frac{3n-1}{2} \\
1 & 1 & \frac{n-1}{2} & \frac{n-3}{2}\\
\end{pmatrix}.$$
Using all the above observations in Inequality (2.2), we see that the equality holds for $G^{**}=K_{m+1, m}$ which shows that the bound is best possible when $\chi=2$ and $n$ is odd. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The following are some immediate consequences of Theorem \ref{TN1}.
\begin{corollary}\label{CN1} Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. Then,
$$
m_{D^{L} (G )}\bigg[ n+\left\lceil\frac{n}{\chi}\right\rceil,\partial^L _1 (G)\bigg]\geq \left\lceil\frac{n}{\chi}\right\rceil-1.
$$
The bound is best possible for $\chi=2$ (when $n$ is odd) and $\chi=n-1$ as shown by $K_{m+1, m}$, where $n=2m+1$, and $K_{2,\underbrace{1,1,\dots,1}_{n-2}} $, respectively.
\end{corollary}
\begin{corollary}\label{CN2}Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. If $\overline{G}$ is connected, then
$$m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil.$$
\end{corollary}
\noindent{\bf Proof.} Since $\overline{G}$ is connected, $C_{\overline{G}}=1$. Putting $C_{\overline{G}}=1$ in Inequality (2.2) proves the desired result. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The next theorem shows that there are at most $n-p$ distance Laplacian eigenvalues of $G$ in the interval $[n,n+p)$, where $p\geq 1$ is the number of pendant vertices in $G$.
\begin{theorem}\label{TN2} Let $G\ncong K_n$ be a connected graph on $n$
vertices having $p\geq 1$ pendant vertices. Then
$$m_{D^{L} (G )}[n,n+p)\leq n-p.$$
For $p=n-1$, equality holds if and only if $G\cong S_n$.
\end{theorem}
\noindent{\bf Proof.} Let $S$ be the set of pendant vertices, so that $|S|=p$. Clearly, $S$ is an independent set of $G$. Obviously, the induced subgraph, say $H$, on the vertex set $M=V(G)\setminus S$ is connected. Let the chromatic number of $H$ be $q$ and let $n_1 \geq n_2 \geq \dots \geq n_q$ be the cardinalities of its chromatic classes in that order, where $1\leq q \leq n-p$ and $n_1 +n_2 +\dots+n_q =n-p$. Let $n_k \geq p \geq n_{k+1}$, where $0\leq k \leq q$, with the conventions $n_0 =p$ if $k=0$ and $n_{q+1}=p$ if $k=q$. With this partition of the vertex set $V(G)$ into $q+1$ independent sets, we easily see that $G$ can be considered as a spanning subgraph of the complete $(q+1)$-partite graph $L=K_{n_1 ,n_2,\dots, n_k ,p,n_{k+1} ,\dots,n_q} $. Consider the following two cases.\\
\noindent{\bf Case 1.} Let $1\leq k \leq q$ so that $n_1 \geq p$. Then, from Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (L)=n+n_1\geq n+p, ~~~ \text{ for all} ~ 1\leq i \leq n_1 -1. $$
\noindent{\bf Case 2.} Let $k=0$ so that $p\geq n_1$. Again, using Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (L)=n+p, ~~~ \text{ for all} ~ 1\leq i \leq p -1.$$
Thus, in both cases, we see that there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $n+p$. Since $p\geq 1$, $\overline{G}$ has at most two components, which, after using Lemma \ref{L3}, shows that $n$ is a distance Laplacian eigenvalue of $G$ with multiplicity at most one. From the above observations and Fact 1, we get
$$m_{D^{L} (G )}[n,n+p)\leq n-p,$$
which proves the required inequality.
For the second part of the theorem, we see that $S_n$ is the only connected graph having $n-1$ pendant vertices. The distance Laplacian spectrum of $S_n$ by Lemma \ref{L8} is given as
$$\begin{pmatrix}
0 & n & 2n-1 \\
1 & 1 & n-2\\
\end{pmatrix}$$
and the proof is complete. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
An immediate consequence is as follows.
\begin{corollary}\label{CN3}
Let $G\ncong K_n$ be a connected graph on $n$
vertices having $p\geq 1$ pendant vertices. Then
$$m_{D^{L} (G )}[n+p,\partial^L _1 (G)]\geq p-1.$$
For $p=n-1$, equality holds if and only if $G\cong S_n$.
\end{corollary}
The following lemma will be used in the proof of Theorem \ref{T11}.
\begin{lemma}\label{L9} \emph{\cite{5R2}} Let $G$ be a graph with $n$ vertices. If $K=\{v_1 ,v_2 ,\dots,v_p\}$ is an independent set of $G$ such that $N(v_i)=N(v_j)$ for all $i,j\in \{1,2,\dots,p\}$, then $\partial=Tr(v_i)=Tr(v_j)$ for all $i,j\in \{1,2,\dots,p\}$ and $\partial +2$ is an eigenvalue of $D^L (G)$ with multiplicity at least $p-1$.
\end{lemma}
\begin{theorem} \label{T11} Let $G$ be a connected graph of order $n\geq 4$ having chromatic number $\chi$. If $S=\{v_1 ,v_2 ,\dots,v_p\} \subseteq V(G)$, where $|S|=p\geq \frac{n}{2}$, is the set of pendant vertices such that every vertex in $S$ has the same neighbour in $V(G)\setminus S$, then
$$ m_{D^{L} (G )} [n,2n-1)\leq n-\chi.$$
\end{theorem}
\noindent {\bf Proof.} Clearly, all the vertices in $S$ form an independent set. Since all the vertices in $S$ are adjacent to the same vertex, they all have the same transmission. Now, any $v_i$ $(i=1,2,\dots,p)$ of $S$ is at distance $2$ from each of the other $p-1$ pendant vertices, at distance $1$ from its neighbour and at distance at least $2$ from each of the remaining $n-p-1$ vertices, so that
\begin{align*}
T=Tr(v_i ) \geq 2(p-1)+1+2(n-p-1) =2n-3.
\end{align*}
From Lemma \ref{L9}, there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $T+2$. From above, we have $T+2\geq 2n-3+2=2n-1$. Thus, there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $2n-1$, that is, $ m_{D^{L} (G )} [2n-1,2Tr_{max}]\geq p-1$. Using Fact 1, we have
\begin{equation}
m_{D^{L} (G )} [n,2n-1)\leq n-p.
\end{equation}
We claim that $\chi(G)\leq \frac{n}{2}$. If possible, let $\chi(G)> \frac{n}{2}$. We have the following two cases to consider.\\
$\bf {Case ~ 1.}$ Let $p=n-1$. Clearly, the star is the only connected graph having $n-1$ pendant vertices. Thus, $G\cong S_n$ and $\chi(S_n)=2\leq\frac{n}{2}$ for $n\geq 4$, a contradiction.\\
$\bf {Case ~ 2.}$ Let $\frac{n}{2}\leq p \leq n-2$. Since $p\leq n-2$, there is at least one vertex, say $u$, which is not adjacent to any vertex in $S$. Thus, in a proper coloring of $G$, the $p+1$ vertices $u,v_1 ,\dots,v_p$ can be colored using only one color. The remaining $n-p-1$ vertices can be colored with at most $n-p-1$ colors. Thus, $\chi\leq 1+n-p-1=n-p\leq n-\frac{n}{2}=\frac{n}{2}$, a contradiction. Therefore, $\chi \leq \frac{n}{2}\leq p$. Using this in Inequality (2.3), we get
$$ m_{D^{L} (G )} [n,2n-1)\leq n-\chi,$$
completing the proof. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
To have a bound only in terms of order $n$ and the number of pendant vertices $p$, we can relax the conditions $p\geq \frac{n}{2}$ and $n\geq 4$ in Theorem \ref{T11}. This is given in the following corollary.
\begin{corollary} \label{C3} Let $G$ be a connected graph of order $n$. If $S=\{v_1 ,v_2 ,\dots,v_p\} \subseteq V(G)$ is the set of pendant vertices such that every vertex in $S$ has the same neighbour in $V(G)\setminus S$, then
$$ m_{D^{L} (G )} [n,2n-1)\leq n-p.$$
\end{corollary}
\section{Distribution of distance Laplacian eigenvalues, independence number and diameter}
Now, we obtain an upper bound for $m_{D^{L} (G)}I$, where $I$ is the interval $[n,n+\alpha(G))$, in terms of the order $n$ and the independence number $\alpha(G)$.
\begin{theorem} \label{T1} Let $G$ be a connected graph of order $n$ having independence number $\alpha (G)$. Then $m_{D^{L} (G)} [n,n+\alpha(G))\leq n-\alpha(G)$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, the equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Moreover, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as it is attained by $SK_{n,\alpha}$.
\end{theorem}
\noindent {\bf Proof.} We have the following three cases to consider.\\
{\bf Case 1.} $\alpha(G)=1$. Clearly, in this case $G\cong K_n$ and the distance Laplacian spectrum of a complete graph is
$$\begin{pmatrix}
0 & n \\
1 & n-1 \\
\end{pmatrix}.$$
Therefore, we have $m_{D^{L} (K_n)} [n,n+1)= n-1$ which proves the result in this case. \\
{\bf Case 2.} $\alpha(G)= n-1$. Since the star $S_n$ is the only connected graph having independence number $n-1$, therefore, $G\cong S_n$ in this case. Now, $n-\alpha(S_n)=n-n+1=1$. From Lemma \ref{L8}, the distance Laplacian spectrum of $S_n $ is given as \\
$$\begin{pmatrix}
0 & n & 2n-1 \\
1 & 1 & n-2 \\
\end{pmatrix}.$$
Therefore, $m_{D^{L} (S_n)} [n,2n-1)= 1$, proving the result in this case.\\
{\bf Case 3.} $2\leq \alpha(G)\leq n-2$. Without loss of generality, assume that $N=\{v_1 ,v_2 ,\dots ,v_{\alpha(G)}\} \subseteq V(G)$ is an independent set of maximum cardinality. Let $H$ be the graph obtained from $G$ by adding edges between all non-adjacent vertices in $V(G)\setminus N$ and joining each vertex of $N$ to every vertex of $V(G)\setminus N$. With this construction, we see that $H\cong SK_{n,\alpha}$. Using Fact 1 and Lemma \ref{L1}, we see that $m_{D^{L} (G)} [n,n+\alpha(G))\leq m_{D^{L} (H)} [n,n+\alpha(G))$. So to complete the proof in this case, it is sufficient to prove that $ m_{D^{L} (H)} [n,n+\alpha(G))\leq n-\alpha(G)$. By Corollary 2.4 in \cite{5R2}, the distance Laplacian spectrum of $H$ is given by
$$\begin{pmatrix}
0 & n & n+\alpha(G) \\
1 & n-\alpha(G) & \alpha(G)-1 \\
\end{pmatrix}.$$
This shows that $ m_{D^{L} (H)} [n,n+\alpha(G))= n-\alpha(G)$. Thus the bound is established. Also, it is clear that $SK_{n,\alpha}$ attains the bound for $2\leq \alpha(G)\leq n-2$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
From Theorem \ref{T1}, we have the following observation.
\begin{corollary} \label{c1} If $G$ is a connected graph of order $n$ having independence number $\alpha (G)$, then $\alpha(G) \leq 1+m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, the equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Moreover, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as it is attained by $SK_{n,\alpha}$.
\end{corollary}
\noindent {\bf Proof.} Using Inequality (2.1) and Theorem \ref{T1}, we have
\begin{align*}
& m_{D^{L} (G)} [n,n+\alpha(G))+ m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]=n-1\\
or ~~~~~~~~~~~~~~ & ~ n-\alpha(G)+ m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]\geq n-1\\
or ~~~~~~~~~~~~~~ & ~ \alpha(G) \leq 1+m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}],
\end{align*}
which proves the inequality. The proof of the remaining part is similar to the proof of Theorem \ref{T1}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The next result is an upper bound for $ m_{D^{L} (G)} (n,n+\alpha(G))$ in terms of the independence number $\alpha(G)$, order $n$ and number of components of the complement $\overline{G}$ of $G$.
\begin{theorem} \label{T2} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$. Then
$$ m_{D^{L} (G)} (n,n+\alpha(G))\leq n-\alpha(G) +1-k,$$
where $k$ is the number of components of $\overline{G}$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Furthermore, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as it is attained by $SK_{n,\alpha}$.
\end{theorem}
\noindent {\bf Proof.} Since $\overline{G}$ has $k$ components, therefore by Lemma \ref{L3}, $n$ is a distance Laplacian eigenvalue of multiplicity exactly $k-1$. Using Theorem \ref{T1}, we have
\begin{align*}
m_{D^{L} (G)} (n,n+\alpha(G)) & =m_{D^{L} (G)} [n,n+\alpha(G))-m_{D^{L} (G)} (n)\\
& =m_{D^{L} (G)} [n,n+\alpha(G))-k+1\\
& \leq n-\alpha(G) +1-k.
\end{align*}
Thus the inequality is established. The remaining part of the proof follows by observing the distance Laplacian spectrum of the graphs $ K_n$, $ S_n$ and $SK_{n,\alpha}$ given in Theorem \ref{T1}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
We will use the following lemmas in the proof of Theorem \ref{T3}.
\begin{lemma} \label{L4} \emph {\cite{5R4}} If $G$ is a graph with domination number $\gamma (G)$, then $ m_{L(G)} [0,1)\leq \gamma (G) $.
\end{lemma}
\begin{lemma}\label{L5}\emph{\cite{5R1}} Let $G$ be a connected graph with $n$ vertices and diameter $d(G)\leq 2$. Let $\mu_1 (G) \geq \mu_2 (G)\geq \dots \geq \mu_n (G)=0$ be the Laplacian spectrum of $G$. Then the distance Laplacian spectrum of $G$ is $2n-\mu_{n-1} (G) \geq 2n- \mu_{n-2} (G)\geq \dots \geq 2n-\mu_1 (G)>\partial^L_n (G)=0$. Moreover, for every $i\in \{1,2,\dots,n-1\}$, the eigenspaces corresponding to $\mu_i (G)$ and $2n-\mu_i (G)$ are the same.
\end{lemma}
Now, we obtain an upper bound for $m_{D^{L}(G) }I$, where $I$ is the interval $(2n-1,2n)$, in terms of the independence number $\alpha(G)$. This upper bound is for graphs with diameter $d(G)\leq 2$.
\begin{theorem} \label{T3} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then
$$m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -1$$ and the inequality is sharp, as shown by $K_n$.
\end{theorem}
\noindent {\bf Proof.} We know that every maximal independent set of a graph $G$ is a minimal dominating set of $G$. Therefore, $\gamma(G)\leq \alpha (G)$. Using Lemma \ref{L4}, we get $m_{L(G)} [0,1)\leq \gamma(G)\leq \alpha(G)$. As $G$ is connected, the multiplicity of $0$ as a Laplacian eigenvalue of $G$ is one. Thus, $m_{L(G)} (0,1)\leq \alpha(G)-1$, that is, there are at most $\alpha(G)-1$ Laplacian eigenvalues of $G$ which are greater than zero and less than one. Using this fact in Lemma \ref{L5}, we observe that there are at most $\alpha(G)-1$ distance Laplacian eigenvalues of $G$ which are greater than $2n-1$ and less than $2n$. Thus,
$$m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -1.$$
Clearly, $ m_{D^{L} (K_n)} (2n-1,2n )=0$ and $\alpha(K_n)=1$, which shows that equality holds for $K_n$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
Our next result shows that the upper bound in Theorem \ref{T3} can be improved for the graphs having independence number greater than $\frac{n}{2}$.
\begin{theorem} \label{L7} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)>\frac{n}{2}$ and diameter $d(G)\leq 2$. Then $m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -2.$
\end{theorem}
\noindent {\bf Proof.} If possible, let $m_{D^{L} (G)} (2n-1,2n )\geq \alpha(G) -1$. Using Lemma \ref{L5}, we see that there are at least $\alpha(G) -1$ Laplacian eigenvalues of $G$ which are greater than zero and less than one. As $G$ is connected, 0 is a Laplacian eigenvalue of multiplicity one. Using these facts and Lemma \ref{L4}, we have $\alpha(G) \leq m_{L(G)} [0,1)\leq \gamma(G) \leq \alpha(G).$ Thus, $ \gamma(G) =\alpha(G) >\frac{n}{2}$. This contradicts the well-known fact that $\gamma(G) \leq \frac{n}{2}$. Thus the result is established.\nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
We also use the following lemma in our next result.
\begin{lemma} \label{L6} \emph{\cite{5R5}} Let $G$ and $G^*$ be graphs with $n_1$ and $n_2$ vertices, respectively. Assume that $\mu_1 \leq \dots \leq \mu_{n_1 }$ and $\lambda_1 \leq \dots \leq \lambda_{n_2 }$ are the Laplacian eigenvalues of $G$ and $G^*$ , respectively. Then the Laplacian spectrum of $GoG^*$ is given as follows.\\
(i) The eigenvalue $\lambda_j +1$ with multiplicity $n_1$ for every eigenvalue $\lambda_j (j=2,\dots,n_2)$ of $G^*$;\\
(ii) Two multiplicity-one eigenvalues $\frac{\mu_i +n_2 +1\pm \sqrt{{(\mu_i +n_2 +1)}^2-4\mu_i}}{2}$, for each eigenvalue $\mu_i (i=1,\dots,n_1)$ of $G$.
\end{lemma}
The following result characterizes the graphs with diameter $d(G)\leq 2$ and independence number $\alpha(G)$ which satisfy $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$.
\begin{theorem} \label{T5} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$ if and only if $G=HoK_1$ for some connected graph $H$.
\end{theorem}
\noindent {\bf Proof.} Assume that $G=HoK_1$ for some connected graph $H$. Then $|H|=\frac{n}{2}$. Let the Laplacian eigenvalues of $H$ be $\mu_1 \geq \dots \geq \mu_{\frac{n}{2}}$. By Lemma \ref{L6}, the Laplacian eigenvalues of $G$ are equal to $\frac{\mu_i +2\pm \sqrt{{\mu_i}^2 +4}}{2} $, $i=1,\dots,\frac{n}{2}$. We observe that half of these eigenvalues are greater than 1 and the other half are less than 1. As $G$ is connected, 0 is a Laplacian eigenvalue of multiplicity one. So $m_{{L} (G)} (0,1 )=\frac{n}{2}-1$. Using Lemma \ref{L5}, we see that there are $\frac{n}{2}-1$ distance Laplacian eigenvalues which are greater than $2n-1$ and less than $2n$. Thus, $m_{D^{L} (G)} (2n-1,2n )= \frac{n}{2}-1$. Now, we will show that $\alpha(G)=\frac{n}{2} $. Assume that $V(G)=\{v_1, \dots,v_{\frac{n}{2}}, v'_1 ,\dots,v'_{\frac{n}{2}}\},$ where $V(H)=\{v_1, \dots,v_{\frac{n}{2}}\}$ and $N_G (v'_i)=\{v_i \}$. If $A$ is a maximal independent set, then $|A|\leq \frac{n}{2}$. For if $|A|> \frac{n}{2}$, then from the structure of $G$, we have at least one pair of vertices in $A$, say $v_i ,v'_i$, which are adjacent, a contradiction. As $\{ v'_1 ,\dots,v'_{\frac{n}{2}}\}$ is an independent set, therefore $\alpha(G)=\frac{n}{2}$. Thus, we have $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$.\\
\indent Conversely, assume that $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$. Using Lemmas \ref{L4} and \ref{L5}, we see that $ \alpha(G)=m_{L (G)} [0,1)\leq \gamma(G) \leq \alpha(G)$ which shows that $\gamma(G)=\alpha(G)=\frac{n}{2}$. Therefore, by Theorem 3 of {\cite{5R6}}, $G=HoK_1$ for some connected graph $H$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
In the following theorem, we show that we can relax the condition $\alpha(G)=\frac{n}{2}$ in Theorem \ref{T5} for the class of bipartite graphs.
\begin{theorem} \label{T6} Let $G$ be a connected bipartite graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then, $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$ if and only if $G=HoK_1$ for some connected graph $H$.
\end{theorem}
\noindent {\bf Proof.} Assume that $G=HoK_1$, for some connected graph $H$. Then the proof follows by Theorem \ref{T5}. So let $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$. Using Theorem \ref{T5}, it is sufficient to show that $\alpha(G)=\frac{n}{2}$. If possible, let the two parts of $G$ have different orders. Then, using Lemmas \ref{L4} and \ref{L5}, we have
$$ \gamma(G)<\frac{n}{2}<\alpha(G)=m_{D^{L} (G)} (2n-1,2n )+1= m_{L (G)} [0,1)\leq \gamma(G),$$
which is a contradiction. Therefore, the two parts of $G$ have the same order. Now, if $ \alpha(G)> \frac{n}{2}$, then by Theorem \ref{L7}, $m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G)-2$, a contradiction. Hence $\alpha(G)\leq \frac{n}{2}$. Since the partite sets have the same order, we get $\alpha(G)=\frac{n}{2}$.\nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
\noindent {\bf {Remark.}} From the above theorem, we see that if $G$ is a connected bipartite graph with $n$ vertices, having independence number $\alpha(G)$ and diameter $d\leq 2$ satisfying either of the conditions (i) $G=HoK_1$ for some connected graph $H$, or (ii) $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$, then $\alpha(G)=\frac{n}{2}$ and $n$ is even.\\
The following theorem shows that the number of distance Laplacian eigenvalues of the graph $G$ in the interval $[0,dn]$ is at least $d+1$.
\begin{theorem} \label{T10} If $G$ is a connected graph of order $n$ having diameter $d$, then $$ m_{D^{L} (G )} \big[0,dn]\geq d+1.$$
\end{theorem}
\noindent {\bf Proof.} We consider the principal submatrix of the distance Laplacian matrix of $G$, say $M$, corresponding to the vertices $v_1 ,v_2 ,\dots, v_{d+1}$ of an induced path $P_{d+1}$ in $G$. Clearly, the transmission of any vertex in the path $P_{d+1}$ is at most $\frac{d(2n-d-1)}{2}$, that is, $Tr(v_i )\leq \frac{d(2n-d-1)}{2}$, for all $i=1,2,\dots,d+1$. Also, the sum of the absolute values of the off-diagonal elements of any row of $M$ is less than or equal to $\frac{d(d+1)}{2}$. Using Lemma \ref{L2}, we conclude that the maximum eigenvalue of $M$ is at most $dn$. Using Fact 1 and Theorem \ref{T7}, there are at least $d+1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $0$ and less than or equal to $dn$, that is, $ m_{D^{L} (G )} \big[0,dn]\geq d+1.$ \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
From Theorem \ref{T10} and Inequality (2.1), we get the following observation.
\begin{corollary} \label{C4} Let $G$ be a connected graph of order $n$ having diameter $d$. If $dn<2Tr_{max}$, then $$ m_{D^{L} (G )} \big(dn,2Tr_{max}]\leq n- d-1.$$
\end{corollary}
\section{Concluding Remarks}
In full generality, we believe it is hard to characterize all the graphs attaining the bounds given in Theorem \ref{T1} and Theorem \ref{T8}. Also, in Theorem \ref{T5} we characterized the graphs with diameter $d\leq 2$ satisfying $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$, leaving open the case $d\geq 3$. So, the following problems will be interesting for future research.\\
{\bf Problem 1.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} [n,n+\alpha(G))= n-\alpha(G)$, for any $G\in \vartheta$. } \\
{\bf Problem 2.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} [n,n+2)= \chi-1$, for any $G\in \vartheta$. }\\
{\bf Problem 3.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$, for any $G\in \vartheta$ with $d\geq 3$. }\\
\noindent{\bf Data availability} Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
\section{Introduction}
In what follows, $k\ge1$ and $a\ge2$ are fixed integers, and we refer to elements in the set $\mathbb{V}:=\{0,\ldots,a-1\}^k$ as $\hbox{$k$-mers}$, which we represent either as strings or row vectors depending on the context.
The Hamming distance between two $\hbox{$k$-mers}$ $u$ and $v$, from now on denoted as $d(u,v)$, is the number of coordinates where the $\hbox{$k$-mers}$ differ, and is a valid metric. The Hamming graph $\mathbb{H}_{k,a}$ has $\mathbb{V}$ as its vertex set, and two $\hbox{$k$-mers}$ $u$ and $v$ are adjacent (i.e. connected by an undirected edge) if and only if $d(u,v)=1$, i.e. $u$ and $v$ differ at exactly one coordinate. As a result, the (geodesic) distance between two vertices in $\mathbb{H}_{k,a}$ is precisely their Hamming distance (see Figure~\ref{fig:HamResEx}). The literature refers to the Hamming graph with $a=2$ as the ($k$-dimensional) hypercube.
A non-empty set $R\subseteq\mathbb{V}$ is called resolving when for all $u,v\in\mathbb{V}$, with $u\ne v$, there exists $r\in R$ such that $d(u,r)\ne d(v,r)$. In other words, $R$ multilaterates $\mathbb{V}$. For instance, $\mathbb{V}$ resolves $\mathbb{H}_{k,a}$ because $d(u,v)=0$ if and only if $u=v$. Equivalently, $R\subseteq\mathbb{V}$ is resolving if and only if the transformation $\Phi:\mathbb{V}\to\mathbb{R}^{|R|}$ defined as $\Phi(v):=(d(v,r))_{r\in R}$ is one-to-one. In particular, the smaller a resolving set of $\mathbb{H}_{k,a}$, the lower the dimension needed to represent $\hbox{$k$-mers}$ as points in a Euclidean space, which may be handy e.g. to represent symbolic data numerically for machine learning tasks~\cite{TilLla19}.
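To make the definition concrete, the following Python sketch checks resolvability by brute force, i.e.\ by testing whether the map $\Phi$ is injective over all of $\mathbb{V}$. The function names are ours and purely illustrative (this is not the implementation released with the paper), and the approach is only practical when $a^k$ is small.

\begin{verbatim}
from itertools import product

def hamming(u, v):
    # Hamming distance between two k-mers given as tuples
    return sum(ui != vi for ui, vi in zip(u, v))

def is_resolving_bruteforce(R, k, a):
    # R resolves H_{k,a} iff the signature map Phi is injective
    seen = set()
    for v in product(range(a), repeat=k):       # all a^k vertices
        signature = tuple(hamming(v, r) for r in R)
        if signature in seen:                   # two vertices collide
            return False
        seen.add(signature)
    return True

# {02,11} does not resolve H_{2,3}; {02,11,22} does (see the illustrative example later)
print(is_resolving_bruteforce([(0, 2), (1, 1)], k=2, a=3))          # False
print(is_resolving_bruteforce([(0, 2), (1, 1), (2, 2)], k=2, a=3))  # True
\end{verbatim}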
\begin{figure}[h!]
\centering
\includegraphics[width = 0.33\textwidth]{H13.pdf}\includegraphics[width = 0.33\textwidth]{H23.pdf}\includegraphics[width = 0.33\textwidth]{H33.pdf}
\caption{Visual representation of $\mathbb{H}_{1,3}$, $\mathbb{H}_{2,3}$, and $\mathbb{H}_{3,3}$. Blue-colored vertices form minimal resolving sets in their corresponding Hamming graph.}
\label{fig:HamResEx}
\end{figure}
The metric dimension of $\mathbb{H}_{k,a}$, which we denote $\beta(\mathbb{H}_{k,a})$, is defined as the size of a minimal resolving set in this graph~\cite{HarMel76,Sla75}. For instance, $\beta(\mathbb{H}_{1,a})=(a-1)$ because $\mathbb{H}_{1,a}$ is isomorphic to $K_{a}$, the complete graph on $a$ vertices~\cite{chartrand2000resolvability}. Unfortunately, computing the metric dimension of an arbitrary graph is a well-known NP-complete problem~\cite{Coo71,GarJoh79,KhuRagRos96}, and it remains unknown whether this complexity persists when restricted to Hamming graphs. In fact, the metric dimension of hypercubes is only known up to dimension $k=10$~\cite{HarMel76}, and values have been conjectured only up to dimension $k=17$~\cite{MlaKraKovEtAl12}---see OEIS sequence A303735 for further details~\cite{OEIS19}.
\newpage
Integer linear programming (ILP) formulations have been used to search for minimal resolving sets~\cite{chartrand2000resolvability,currie2001metric}. In the context of Hamming graphs, a potential resolving set $R$ is encoded by a binary vector $y$ of dimension $a^k$ such that $y_j=1$ if $j\in R$ and $y_j=0$ if $j\in\mathbb{V}\setminus R$. One can then search for a minimal resolving set for $\mathbb{H}_{k,a}$ by solving the ILP~\cite{chartrand2000resolvability}:
\begin{alignat}{2}
&\min\limits_y \; & &\sum_{j\in\mathbb{V}} y_j \notag \\
&\text{subject to} \; & &\sum_{j\in\mathbb{V}} |d(u,j)-d(v,j)| \cdot y_j \ge 1, \; \forall u\ne v\in\mathbb{V} \\
& & & y_j \in \{0,1\}, \; \forall j\in\mathbb{V}. \notag
\label{eq:oldILP}
\end{alignat}
The first constraint ensures that for all pairs of different vertices $u$ and $v$, there is some $j\in R$ such that $|d(u,j)-d(v,j)| > 0$, hence $R$ resolves $\mathbb{H}_{k,a}$. The objective penalizes the size of the resolving set. (A variant due to~\cite{currie2001metric} is similar but stores $a^k$ copies of a binary version of the distance matrix of the graph.) One downside of this formulation is that forming the distance matrix of $\mathbb{H}_{k,a}$ requires $\mathcal{O}(a^{2k})$ storage, as well as significant computation. Moreover, standard approaches to reduce the computation below $\mathcal{O}(a^{2k})$, such as fast multipole methods~\cite{greengard1987fast} and kd-trees~\cite{bentley1975multidimensional}, do not obviously apply. Even if one could compute all pairwise distances between nodes, simply storing the distance matrix is impractical. To fix ideas, the graph $\mathbb{H}_{8,20}$---which is associated with octapeptides (see \cref{sec:protein_representation})---has $20^8$ nodes, so storing the distance matrix with $\log_2(8)=3$ bits per entry and taking advantage of symmetry would require $3{20^8\choose 2}$ bits, or approximately a prohibitive 123 exabytes.
Due to the above difficulties, other efforts have focused on finding small resolving sets rather than minimal ones. When $a^k$ is small, resolving sets for $\mathbb{H}_{k,a}$ may be determined using the so-called Information Content Heuristic (ICH) algorithm~\cite{HauSchVie12}, or a variable neighborhood search algorithm~\cite{MlaKraKovEtAl12}. Both approaches quickly become intractable with increasing $k$. However, the highly symmetric nature of Hamming graphs can be taken advantage of to overcome this problem. Indeed, recent work~\cite{TilLla19} has shown that $\beta(\mathbb{H}_{k,a})\le\beta(\mathbb{H}_{k-1,a})+\lfloor a/2\rfloor$; in particular, $\beta(\mathbb{H}_{k,a})\le(k-1)\lfloor a/2\rfloor+(a-1)$ i.e., just $\mathcal{O}(k)$ nodes are enough to resolve all the $a^k$ nodes in $\mathbb{H}_{k,a}$. Moreover, one can find a resolving set of size $\mathcal{O}(k)$ in only $\mathcal{O}(ak^2)$ time~\cite{TilLla19}.
This manuscript is based on the recent Bachelor's thesis~\cite{Lai19}, and has two overarching goals. First, it aims to develop practical methods for certifying the resolvability, or lack thereof, of subsets of nodes in arbitrary Hamming graphs. So far, this has been addressed for hypercubes in the literature~\cite{Beardon:2013} but remains unexamined for arbitrary values of the parameter $a$. While our work does not directly address the problem of searching for minimal resolving sets, verifying resolvability is a key component of any such search and may shed new light on the precise metric dimension of $\mathbb{H}_{k,a}$ in future investigations. Second, this paper also aims to exploit said characterization to remove unnecessary nodes---if any---in known resolving sets. This problem, which is infeasible by brute force when $a^k$ is large, has not received any attention in the literature despite being crucial for the embedding of $\hbox{$k$-mers}$ into a Euclidean space of the lowest possible dimension.
The paper is organized as follows. Our main theoretical results are presented first in~\cref{sec:main}. \cref{thm:Az=0} provides the foundation from which we address the problem of verifying resolvability in Hamming graphs and implies a new characterization of resolvability of hypercubes (\cref{cor:symp_system}). An illustrative example shows the utility of \cref{thm:Az=0} but raises several practical challenges in its implementation on large Hamming graphs. \Cref{sec:grobner} describes a computationally demanding verification method based on Gr\"obner bases that is nevertheless more efficient than the brute force approach and determines with certainty whether or not a set of nodes in $\mathbb{H}_{k,a}$ is resolving. Computational issues are addressed in~\cref{sec:ILP} with a novel ILP formulation of the problem. This approach is fast but stochastic and hence has the potential to produce false positives or false negatives. \Cref{sec:complexity_experiments} compares the run time of these methods against a brute force approach across small Hamming graphs. Combining the techniques from sections~\ref{sec:grobner} and~\ref{sec:ILP}, \cref{sec:protein_representation} presents a simple approach to discovering and removing redundant nodes in a given resolving set. This approach allows us to improve on previous bounds on the metric dimension of the Hamming graph $\mathbb{H}_{8,20}$. Finally, two appendices provide background information about Gr\"obner bases and linear programming.
All code used in this manuscript is available on GitHub (\url{https://github.com/hamming-graph-resolvability/Hamming_Resolvability}).
\section{Main results}
\label{sec:main}
In what follows ${\hbox{Tr}}(A)$ denotes the trace of a square matrix $A$, $B'$ the transpose of a matrix or vector $B$, and ${\hbox{vec}}(C)$ the column-major ordering of a matrix $C$ i.e. the row vector obtained by appending from left to right the entries in each column of $C$. For instance:
\[{\hbox{vec}}\left(\left[\begin{array}{cc} a & b \\ c & d\end{array}\right]\right)=(a,c,b,d).\]
In addition, $\bar D$ denotes the flip of the entries in a binary matrix (or vector) $D$, that is 0 is mapped to 1, and vice versa.
The one-hot encoding of a $\hbox{$k$-mer}$ $v$ is defined as the binary matrix $V$ of dimension $(a\times k)$ such that $V[i,j]=1$ if and only if $(i-1)=v[j]$ (the offset in $i$ is needed since the reference alphabet is $\{0,...,a-1\}$ instead of $\{1,\ldots,a\}$). Here, $V[i,j]$ denotes the entry in row-$i$ and column-$j$ of the matrix $V$, and similarly $v[j]$ denotes the $j$-th coordinate of the vector $v$. We also follow the convention of capitalizing $\hbox{$k$-mer}$ names to denote their one-hot encodings.
Our first result links one-hot encodings of $\hbox{$k$-mer}$s with their Hamming distance. Note this result applies to any alphabet size, not just binary.
\begin{lemma}\label{lem:UtV}
If $u,v$ are $\hbox{$k$-mers}$ with one-hot encodings $U,V$, respectively, then $d(u,v)=k-{\hbox{Tr}}(U'V)$; in particular, $d(u,v)={\hbox{Tr}}(U'\bar V)$.
\end{lemma}
\begin{proof}
Let $U_i$ and $V_i$ be the $i$-th column of $U$ and $V$, respectively. Clearly, if $u[i]=v[i]$ then $\langle U_i,V_i\rangle = 1$, and if $u[i]\ne v[i]$ then $\langle U_i,V_i\rangle = 0$, because all but one of the entries in $U_i$ and $V_i$ vanish and the non-vanishing entries are equal to 1. As a result, ${\hbox{Tr}}(U'V)=\sum_{i=1}^k\langle U_i,V_i\rangle$ counts the number of positions where $u$ and $v$ are equal; in particular, $d(u,v)=k-{\hbox{Tr}}(U'V)$. Finally, observe that if $1^{a\times k}$ denotes the $(a\times k)$ matrix with all entries equal to 1 then ${\hbox{Tr}}(U'1^{a\times k})=k$ because every row of $U'$ has exactly one 1 and all other entries vanish. As a result, $d(u,v)={\hbox{Tr}}(U'(1^{a\times k}-V))={\hbox{Tr}}(U'\bar V)$, as claimed.
\end{proof}
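As a quick numerical sanity check of \cref{lem:UtV}, the following sketch (the helper \texttt{one\_hot} is our own naming, not part of the released code) compares $k-{\hbox{Tr}}(U'V)$ and ${\hbox{Tr}}(U'\bar V)$ with the Hamming distance on a small example:

\begin{verbatim}
import numpy as np

def one_hot(v, a):
    # (a x k) one-hot encoding of the k-mer v over {0,...,a-1}
    V = np.zeros((a, len(v)), dtype=int)
    for j, s in enumerate(v):
        V[s, j] = 1
    return V

u, v, a = (0, 2), (1, 1), 3
U, V = one_hot(u, a), one_hot(v, a)
d = sum(ui != vi for ui, vi in zip(u, v))   # Hamming distance (= 2 here)
assert len(u) - np.trace(U.T @ V) == d      # d(u,v) = k - Tr(U'V)
assert np.trace(U.T @ (1 - V)) == d         # d(u,v) = Tr(U' Vbar)
\end{verbatim}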
We can now give a necessary and sufficient condition for a subset of nodes in an arbitrary Hamming graph to be resolving.
\begin{theorem}\label{thm:Az=0}
Let $v_1,\ldots,v_n$ be $n\ge1$ $\hbox{$k$-mers}$ and $V_1,\ldots,V_n$ their one-hot encodings, respectively, and define the $(n\times ak)$ matrix with rows
\begin{equation}
A := \left(\begin{array}{c}
{\hbox{vec}}(V_1) \\
\vdots\\
{\hbox{vec}}(V_n)
\end{array}\right).
\label{def:A}
\end{equation}
Then $R:=\{v_1,\ldots,v_n\}$ resolves $\mathbb{H}_{k,a}$ if and only if $0$ is the only solution to the linear system $Az=0$, with $z$ a column vector of dimension $ak$, satisfying the following constraints: if $z$ is parsed into $k$ consecutive but non-overlapping subvectors of dimension $a$, namely $z=((z_1,\ldots, z_a), (z_{a+1},\ldots, z_{2a}), ... , (z_{(k-1)a+1},\ldots,z_{ka}))'$, then each subvector is the difference of two canonical vectors.
\end{theorem}
\begin{proof}
Before showing the theorem observe that, for any pair of matrices $A$ and $B$ of the same dimension, ${\hbox{Tr}}(A'B)=\langle{\hbox{vec}}(A),{\hbox{vec}}(B)\rangle$, where $\langle\cdot,\cdot\rangle$ is the usual inner product of real vectors.
Consider $\hbox{$k$-mers}$ $x$ and $y$, and let $X$ and $Y$ be their one-hot encodings, respectively. Due to \cref{lem:UtV}, $d(v_i,x)=d(v_i,y)$ if and only if ${\hbox{Tr}}(V_i'(X-Y))=0$ i.e. $\langle{\hbox{vec}}(V_i),{\hbox{vec}}(X-Y)\rangle=0$. As a result, the set $R$ does not resolve $\mathbb{H}_{k,a}$ if and only if there are distinct $\hbox{$k$-mers}$ $x$ and $y$ such that $Az=0$, where $z:={\hbox{vec}}(X)-{\hbox{vec}}(Y)\neq0$. Note however that each column of $X$ and of $Y$ equals a canonical vector in $\mathbb{R}^a$; in particular, if we parse ${\hbox{vec}}(X)$ and ${\hbox{vec}}(Y)$ into $k$ subvectors of dimension $a$ as follows: ${\hbox{vec}}(X)=(x_1,\ldots,x_k)$ and ${\hbox{vec}}(Y)=(y_1,\ldots,y_k)$, then $z=(x_1-y_1,\ldots,x_k-y_k)$ with $x_i'$ and $y_i'$ canonical vectors in $\mathbb{R}^a$. This shows the theorem.
\end{proof}
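The matrix $A$ of \cref{thm:Az=0} is straightforward to assemble from the one-hot encodings; a minimal sketch (again with our own helper names) is:

\begin{verbatim}
import numpy as np

def one_hot(v, a):
    V = np.zeros((a, len(v)), dtype=int)
    for j, s in enumerate(v):
        V[s, j] = 1
    return V

def build_A(R, a):
    # rows are the column-major orderings vec(V_i) of the one-hot encodings
    return np.array([one_hot(v, a).flatten(order="F") for v in R])

print(build_A([(0, 2), (1, 1)], a=3))
# [[1 0 0 0 0 1]
#  [0 1 0 0 1 0]]   <- the matrix A_0 of the illustrative example below
\end{verbatim}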
\subsection{Illustrative Example}\label{subsec:Illustrative} In $H_{2,3}$ consider the set of nodes $R_0=\{02,11\}$. From~\cref{thm:Az=0}, $R_0$ resolves $H_{2,3}$ if and only if $A_0z=0$, with
\begin{equation}
A_0 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0
\end{bmatrix},
\label{def:A0}
\end{equation}
has no non-trivial solution $z$ which satisfies the other constraints in the theorem when writing $z = \big((z_1,z_2,z_3),(z_4,z_5,z_6)\big)'$. Something useful to note about this decomposition is that if a subvector of $z$ has two identical entries, then all the entries in that subvector must vanish.
Note that $A_0$ is already in its reduced row echelon form~\cite{Olver:2018}, with pivot variables $z_1$ and $z_2$, so that $z_1 = -z_6$ and $z_2=-z_5$. Seeking non-trivial solutions to the constrained linear system, we examine permissible values for $z_5$ and $z_6$:
\begin{itemize}
\item[(a)] If $z_5=-1$ then we must have $(z_4,z_6)\in\{(0,1),(1,0)\}$. Furthermore, if $z_6=1$ then $(z_1,z_2,z_3)=(-1,1,0)$, but if $z_6=0$ then $(z_1,z_2,z_3)=(0,1,-1)$. Consequently, $z=(-1,1,0,0,-1,1)$ and $z=(0,1,-1,1,-1,0)$ solve the constrained system.
\item[(b)] Similarly, we find that $z=(-1,0,1,-1,0,1)$ and $z=(1,0,-1,1,0,-1)$ solve the constrained system when we assume that $z_5=0$.
\item[(c)] Finally, $z=(1,-1,0,0,1,-1)$ and $z=(0,-1,1,-1,1,0)$ are also found to solve the constrained system when we impose that $z_5=1$.
\end{itemize}
Having found at least one non-trivial solution to the constrained linear system, we conclude that $R_0$ does not resolve $H_{2,3}$. (The found $z$'s are in fact the only non-trivial solutions.)
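These solutions can also be recovered by exhaustively enumerating the admissible vectors $z$, which is feasible here because $ak=6$; the following sketch (illustrative only) confirms that there are exactly six non-trivial solutions:

\begin{verbatim}
import numpy as np
from itertools import product

a, k = 3, 2
A0 = np.array([[1, 0, 0, 0, 0, 1],
               [0, 1, 0, 0, 1, 0]])

# admissible subvectors: differences of two canonical vectors of R^3 (zero included)
E = np.eye(a, dtype=int)
diffs = [E[i] - E[j] for i in range(a) for j in range(a) if i != j]
diffs.append(np.zeros(a, dtype=int))

nontrivial = []
for parts in product(diffs, repeat=k):
    z = np.concatenate(parts)
    if z.any() and not (A0 @ z).any():   # nonzero and in ker(A0)
        nontrivial.append(z)
print(len(nontrivial))                   # 6, matching (a)-(c) above
\end{verbatim}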
From the proof of \cref{thm:Az=0}, we can also determine pairs of vertices in $H_{2,3}$ which are not resolved by $R_0$. Indeed, using the non-trivial solutions found above we find that
$12$ and $01$, $21$ and $10$, and $00$ and $22$ are the only pairs of nodes in $H_{2,3}$ which are unresolved by $R_0$. In particular, because the two nodes in each of these pairs are at different distances from $22$, $R_1:=R_0\cup\{22\}$ resolves $H_{2,3}$.
We can double-check this last assertion by noticing that the reduced row echelon form of the matrix $A_1$ associated with $R_1$ is
\begin{equation}
\hbox{rref}(A_1) = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}.
\label{def:A1}
\end{equation}
In particular, $z_1 = -z_6$, $z_2 = -z_5$, and $z_3 = -z_6$. The first and third identities imply that $z_1=z_3$, hence $(z_1,z_2,z_3)=(0,0,0)$. This together with the first and second identities now implies that $(z_4,z_5,z_6)=(0,0,0)$. So, as anticipated, $z=0$ is the only solution to the constrained linear system $A_1z=0$.
In general, if the reduced row echelon form of the matrix given by \cref{thm:Az=0} has $j$ free variables, then there could be up to $3^j$ possible solutions to the associated linear system, each of which would have to be checked for the additional constraints. This exhaustive search could be very time consuming if not impossible. Handling the linear system constraints more systematically and efficiently is the motivation for sections~\ref{sec:grobner} and~\ref{sec:ILP}.
\subsection{Specializations to Hypercubes} In~\cite{Beardon:2013} a necessary and sufficient condition for the resolvability of hypercubes is provided, exploiting the fact that $d(u,v)=\|u-v\|_2^2$ when $u$ and $v$ are binary $\hbox{$k$-mers}$. Next, we reproduce this result using our framework of one-hot encodings instead.
\begin{corollary} \cite[Theorem 2.2]{Beardon:2013}
Let $R=\{v_1,\ldots,v_n\}$ be a set of $n\ge1$ binary $\hbox{$k$-mers}$, and define the $(n\times k)$ matrix with rows
\[B := \left[\begin{array}{c}
v_1-\bar{v_1} \\
\vdots \\
v_n-\bar{v_n}
\end{array}\right].\]
Then, $R$ resolves $H_{k,2}$ if and only if $\hbox{ker}(B)\cap\{0,\pm1\}^k=\{0\}$.
\label{cor:mat_system}
\end{corollary}
\begin{proof}
Let
\[A= \left[\begin{array}{ccccc}
\vline & \vline & & \vline & \vline \\
A_1 & A_2 & \ldots & A_{2k-1} & A_{2k} \\
\vline & \vline & & \vline & \vline
\end{array}\right]\]
be the $(n\times 2k)$ matrix with columns $A_1,\ldots,A_{2k}$ given by \cref{thm:Az=0} for $R$. It follows that $R$ resolves $H_{k,2}$ if and only if $Az=0$, with $z=((x_1,y_1),\ldots,(x_k,y_k))'\in\{0,\pm1\}^{2k}$ and $(x_i+y_i)=0$ for each $i=1,\ldots,k$, has only a trivial solution. Note however that $Az=By$, where
\begin{eqnarray*}
B &:=&
\left[\begin{array}{ccc}
\vline & & \vline\\
(A_2-A_1) & \ldots & (A_{2k}-A_{2k-1}) \\
\vline & & \vline
\end{array}\right];\\
y &:=& (y_1,\ldots,y_k)'.
\end{eqnarray*}
Therefore $R$ is resolving if and only if $By=0$, with $y\in\{0,\pm1\}^k$, has only a trivial solution. But recall from~\cref{thm:Az=0} that the rows of $A$ are the column-major orderings of the one-hot encodings of the binary $k$-mers in $R$. In particular, using $\llbracket\cdot\rrbracket$ to denote Iverson brackets, we find that the row in $B$ associated with $v\in R$ is:
\[\Big(\llbracket v[1]=1\rrbracket-\llbracket v[1]=0\rrbracket,\ldots,\llbracket v[k]=1\rrbracket-\llbracket v[k]=0\rrbracket\Big)=v-\bar v,\]
from which the corollary follows.
\end{proof}
We can provide an even simpler characterization of sets of $\hbox{$k$-mers}$ that resolve the hypercube, provided that $1^k:=(1,\ldots,1)$ is one of them. This seemingly major assumption is only superficial. Indeed, hypercubes are vertex-transitive; that is, given any two binary $\hbox{$k$-mers}$ there is an automorphism (i.e., a distance-preserving bijection $\sigma:\{0,1\}^k\to\{0,1\}^k$) that maps one into the other~\cite[\S3.1]{TilLla19}. Hence, given any set $R$ of binary $\hbox{$k$-mers}$ there is an automorphism $\sigma$ such that $1^k\in\sigma(R)$. In particular, because $R$ is resolving if and only if $\sigma(R)$ is resolving, one can assume without any loss of generality that $1^k$ is an element of $R$.
\begin{corollary}
Let $R=\{v_1,\ldots,v_n\}$ be a set of $n$ binary $\hbox{$k$-mers}$ such that $1^k\in R$, and define the $(n\times k)$ matrix with rows
\[C := \left[\begin{array}{c}
v_1 \\
\vdots \\
v_n
\end{array}\right].\]
Then, $R$ resolves $H_{k,2}$ if and only if $\hbox{ker}(C)\cap\{0,\pm1\}^k=\{0\}$.
\label{cor:symp_system}
\end{corollary}
\begin{proof}
Note that for all binary $\hbox{$k$-mer}$s $v$: $(v+\bar v)=1^k$; in particular, $(v-\bar v)=(2v-1^k)$. Hence, if $B$ is as given in~\cref{cor:mat_system} and $C$ as defined above then
\[Bz=0\hbox{ if and only if }Cz=\langle 1^k,z\rangle\left[\begin{array}{c}1/2\\\vdots\\1/2\end{array}\right].\]
But, because $1^k\in R$ and $2\cdot1^k-1^k=1^k$, one of the entries of $Bz$ equals $\langle 1^k,z\rangle$; likewise, one of the entries of $Cz$ equals $\langle 1^k,z\rangle$. Hence, if $Bz=0$ then the displayed equivalence forces $\langle 1^k,z\rangle=\tfrac{1}{2}\langle 1^k,z\rangle$, so $\langle 1^k,z\rangle=0$ and $Cz=0$; conversely, if $Cz=0$ then $\langle 1^k,z\rangle=0$, and the equivalence gives $Bz=0$. Thus $Bz=0$ if and only if $Cz=0$, from which the corollary follows.
\end{proof}
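When $k$ is small, the kernel condition of \cref{cor:symp_system} can be checked directly by enumerating $\{0,\pm1\}^k$; a minimal sketch (function name ours, illustrative only) is:

\begin{verbatim}
import numpy as np
from itertools import product

def resolves_hypercube(R):
    # R: binary k-mers (tuples) with 1^k among them;
    # R resolves H_{k,2} iff ker(C) meets {0,+1,-1}^k only at 0
    C = np.array(R)
    k = C.shape[1]
    for y in product((-1, 0, 1), repeat=k):
        y = np.array(y)
        if y.any() and not (C @ y).any():
            return False
    return True

print(resolves_hypercube([(1, 1, 1), (1, 1, 0), (1, 0, 1)]))  # True for H_{3,2}
\end{verbatim}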
\section{Polynomial Roots Formulation}
\label{sec:grobner}
In this section, we express the constraints of the linear system in \cref{thm:Az=0} as roots of a multi-variable polynomial system, and we reveal various properties of this system which can drastically reduce the complexity of determining whether or not a subset of nodes resolves $\mathbb{H}_{k,a}$.
In what follows, for any given non-empty set $P$ of polynomials in a possibly multi-variable $z$, $\{P=0\}$ denotes the set of $z$'s such that $p(z)=0$, for each $p\in P$. Unless otherwise stated, we assume that $z$ has dimension $ka$, i.e. $z=(z_1,\ldots,z_{ka})$.
Consider the polynomial sets
\begin{eqnarray}
P_1 &:=& \Big\{z_i^3-z_i,\hbox{ for $i=1,\ldots,ka$}\Big\};\notag \\
P_2 &:=& \left\{\sum_{j=(i-1)a+1}^{ia}z_j,\hbox{ for $i=1,\ldots,k$}\right\};\\
P_3 &:=& \left\{\Big(2-\sum_{j=(i-1)a+1}^{ia}z_j^2\Big)\cdot\sum_{j=(i-1)a+1}^{ia}z_j^2,\hbox{ for $i=1,\ldots,k$}\right\}. \notag
\label{eq:P}
\end{eqnarray}
Our first result characterizes the constraints of the linear system in \cref{thm:Az=0} in terms of the roots of the above polynomials. Ahead, unless otherwise stated:
\begin{equation}
P := (P_1\cup P_2\cup P_3).
\label{def:P}
\end{equation}
\begin{lemma}
$z\in\{P=0\}$ if and only if when parsing $z$ into $k$ consecutive but non-overlapping subvectors of dimension $a$, each subvector is the difference of two canonical vectors.
\label{lem:polsys}
\end{lemma}
\begin{proof}
The polynomials in $P_1$ enforce that each entry in $z$ must be a ${-1}$, $0$, or $1$, while the polynomials in $P_2$ enforce that there is a $(-1)$ for every $1$ in each subvector of $z$. Finally, the polynomials in $P_3$ enforce that each subvector of $z$ has exactly two non-zero entries or no non-zero entries. Altogether, $z\in\{P=0\}$ if and only if each subvector is identically zero, or it has exactly one 1 and one $(-1)$ entry and all other entries vanish, i.e. each subvector of $z$ is the difference of two canonical vectors in $\mathbb{R}^a$.
\end{proof}
The following is now an immediate consequence of this lemma and~\cref{thm:Az=0}.
\begin{corollary}
Let $R$ be a set of nodes in $\mathbb{H}_{k,a}$ and $A$ the matrix given by equation~\cref{def:A}. Then, $R$ resolves $\mathbb{H}_{k,a}$ if and only if $\hbox{ker}(A)\cap\{P=0\}=\{0\}$.
\label{cor:KerCapP=0}
\end{corollary}
Our focus in what remains of this section is to better characterize the non-trivial roots of the polynomial system $\{P=0\}$. To do so, we rely on the concepts of polynomial ideals and (reduced) Gr\"obner bases, and the following fundamental result from algebraic geometry. For a primer to these and other concepts on which our results rely see~\cref{app:1}.
\begin{theorem} (Hilbert's Weak Nullstellensatz~\cite[\S4.1]{Cox_Little_OShea:2015}.)
\label{thm:weak_null}
For any non-empty finite set of polynomials $P$, $\{P=0\} = \emptyset$ if and only if $\{1\}$ is the reduced Gr{\"o}bner basis of $I(P)$, the ideal generated by $P$.
\end{theorem}
Define for each $i=1,\ldots,k$:
\begin{eqnarray}
\label{def:Bi}
B_i &:=& \Big\{z_j^3-z_j,\hbox{ for }j=(i-1)a+1,\ldots,ia\Big\}\\
&&\qquad\bigcup\left\{\sum_{j=(i-1)a+1}^{ia}z_j,\Big(2-\sum_{j=(i-1)a+1}^{ia}z_j^2\Big)\cdot\sum_{j=(i-1)a+1}^{ia}z_j^2\right\}.
\notag
\end{eqnarray}
Observe that $B_i$ is a set of polynomials in $(z_{(i-1)a+1},\ldots,z_{ia})$, i.e. the $i$-th subvector of $z$; in particular, each of these polynomials may be regarded as a function of $z$, and $B_1,\ldots,B_k$ partition $P$, i.e. $P=\sqcup_{i=1}^kB_i$. Accordingly, we call $B_i$ the $i$-th \underline{b}lock of $P$, and denote the reduced Gr\"obner basis of $B_i$ as $G_i$. The computational advantage of these observations is revealed by the following results.
\begin{lemma}
$G=\cup_{i=1}^kG_i$ is the reduced Gr\"obner basis of $P$ in equation~\cref{def:P}. Furthermore, $G_i$ may be obtained from $G_1$ using the change of variables:
\begin{equation}
(z_1,\ldots,z_a)\longrightarrow(z_{(i-1)a+1},\ldots,z_{ia}).
\label{ide:varchange}
\end{equation}
\label{lem:groeb_block}
\end{lemma}
\begin{proof}
The case with $k=2$ follows from~\cite[Proposition 2]{Cox_Little_OShea:2015} due to the fact that no variable and hence no polynomial is shared between the blocks of $P$. A straightforward inductive argument in $k\ge2$ then shows that $\cup_{i=1}^kG_i$ is the reduced Gr\"obner basis of $P$. Finally, note that $B_1$ is, up to the change of variables in equation~(\ref{ide:varchange}), identical to $B_i$; in particular, since Buchberger's algorithm (\cref{algo:1}) and the Gr\"obner basis reduction algorithm (\cref{algo:2}) build upon polynomial division, the reduced Gr\"obner basis of $B_i$ may be obtained from that of $B_1$ using the same change of variables.
\end{proof}
\begin{lemma}
The reduced Gr\"obner bases of $B_1$ under the lexicographic ordering is
\begin{equation}
G_1=\left\{\sum\limits_{i=1}^a z_i\right\}\bigcup_{2\le i\le a}\{z_i^3-z_i\}\bigcup_{2\le i<j\le a}\{z_i^2z_j+z_iz_j^2\}\bigcup_{2\le i<j<\ell\le a}\{z_iz_jz_\ell\}.
\end{equation}
\label{lem:G1explicit}
\end{lemma}
\begin{proof}
Let $G$ be the set of polynomials on the right-hand side above. Since $G$ depends on $a$ but not on the parameter $k$ of $\mathbb{H}_{k,a}$, and the identity for $a\in\{2,3,4\}$ can be checked using algorithms~\ref{algo:1} and~\ref{algo:2}, without loss of generality we assume in what follows that $a\geq5$.
Since reduced Gr\"obner basis are unique, it suffices to show that (i) $I(G)=I(B_1)$; and that for all $f,g\in G$: (ii) the reduction of $\hbox{Spoly}(f,g)$ by $G$ is 0; (iii) $LC(f)=1$; and (iv) if $f\in G\setminus\{g\}$ then no monomial of $f$ is divisible by $LM(g)$. We omit the tedious but otherwise straightforward verification of properties (ii) and (iv). Since property (iii) is trivially satisfied, it only remains to verify property (i).
To prove $I(G) = I(B_1)$, it suffices to show that $\{G=0\}=\{B_1=0\}$. Indeed, the polynomials of the form $z_iz_jz_\ell$ imply that if $z\in\{G=0\}$ then $(z_2,\ldots,z_{a})$ has at most two non-zero coordinates. In the case two of these coordinates are non-zero, say $z_i$ and $z_j$, the polynomials $z_i^3-z_i=z_i(z_i-1)(z_i+1)$, $z_j^3-z_j=z_j(z_j-1)(z_j+1)$, and $z_i^2z_j+z_iz_j^2=z_iz_j(z_i+z_j)$ imply that $(z_i,z_j)=(1,-1)$ or $(z_i,z_j)=(-1,1)$; in particular, because we must have $\sum_{\ell=1}^az_\ell=0$, $z_1=0$. Instead, if exactly one of the coordinates in $(z_2,\ldots,z_{a})$ is non-zero, say $z_j$, then the polynomial $\sum_{\ell=1}^az_\ell$ together with $z_j^3-z_j$ imply that $(z_1,z_j)=(1,-1)$ or $(z_1,z_j)=(-1,1)$. Finally, if $(z_2,\ldots,z_{a})=0$ then the polynomial $\sum_{\ell=1}^az_\ell$ implies that $z_1=0$. In all of these three exhaustive cases, it follows that $(z_1,\ldots,z_a)$ is identically zero, or it has exactly one 1 and one (-1) coordinate and all other coordinates vanish; in other words, $(z_1,\ldots,z_a)$ is a difference of two canonical vectors in $\mathbb{R}^a$. Since this is precisely the constraint imposed on this subvector of $z$ by the polynomials in $B_1$, we obtain that $\{G=0\}=\{B_1=0\}$ i.e. $I(G)=I(B_1)$.
\end{proof}
A minor issue for using the Weak Nullstellensatz in our setting is that the polynomials in $P$ have no constant terms; in particular, $0\in\{P=0\}$. To exclude this trivial root, observe that if $z\in\{P=0\}$ then $\sum_{j=(i-1)a+1}^{ia}z_j^2\in\{0,2\}$, for each $i=1,\ldots,k$. As a result, if $z$ is a non-trivial root of $\{P=0\}$ then $\sum_{j=1}^{ka}z_j^2=2i$ for some $i\in\{1,\ldots,k\}$. This motivates introducing the auxiliary polynomial:
\begin{equation}
f(z):=\Big(\sum_{j=1}^{ka}z_j^2\Big),
\label{def:f(z)}
\end{equation}
so that $R$ resolves $\mathbb{H}_{k,a}$ if and only if $\hbox{ker}(A)\cap\{P=0\}\cap\{f-2i=0\}=\emptyset$ for all $i=1,\ldots,k$.
\begin{lemma}
Consider a (finite) reduced Gr\"obner basis $G \neq \{1\}$ and a polynomial $f$. If $f \xrightarrow{G} r$ then, for each $c\in\mathbb{R}$, $(f+c) \xrightarrow{G} (r+c)$.
\end{lemma}
\begin{proof}
Let $G=\{g_1,\ldots,g_n\}$. Without loss of generality fix a constant $c\ne0$. Note that $G$ contains no constant polynomial (except for $0$) because $G \neq \{1\}$ hence $1\notin G$. As a result, the leading monomial of each $g_i$ does not divide $c$, hence $c\xrightarrow{G}c$. Since $f \xrightarrow{G} r$, and reductions by a Gr\"obner basis are unique, $(f+c) \xrightarrow{G} (r+c)$ as claimed.
\end{proof}
The following is now a direct consequence of the lemma.
\begin{corollary}
Let $G$ be the reduced Gr\"obner basis of $P$ in equation~(\ref{def:P}). If $f$ is as defined in~\cref{def:f(z)} and $f \xrightarrow{G} r$ then, for each $i = 1,2,\ldots,k$, $(f-2i)\xrightarrow{G} (r-2i)$.
\label{cor:rem}
\end{corollary}
The results from this section allow for a computational method for checking resolvability on $\mathbb{H}_{k,a}$. Lemmas~\ref{lem:groeb_block} and~\ref{lem:G1explicit} are used to construct the reduced Gr\"obner basis $G$ directly, and~\cref{cor:rem} efficiently removes the trivial solution from consideration in the criteria provided by~\cref{thm:weak_null}. Altogether these results significantly reduce the number of polynomial reductions required to assess the resolvability of a set of nodes on $\mathbb{H}_{k,a}$.
\subsection{Illustrative Example (Continuation)} We saw in~\Cref{subsec:Illustrative} that $R_0=\{02,11\}$ does not resolve $H_{2,3}$ whereas $R_1=R_0\cup\{22\}=\{02,11,22\}$ does. We can double-check these assertions using~\cref{cor:KerCapP=0} as follows.
First, recall that for $H_{2,3}$ the variable $z$ is 6-dimensional and should be decomposed in the form $z=\big((z_1,z_2,z_3),(z_4,z_5,z_6)\big)$. Next, the kernel of the matrix given by the corollary for $R_0$ (denoted as $A_0$, see Eq.~\cref{def:A0}) is described by the linear system:
\[\left\{\begin{array}{rcl}
z_1+z_6 &=& 0;\\
z_2+z_5 &=& 0.
\end{array}\right.\]
On the other hand, the roots in $\{P=0\}$ given by~\cref{cor:KerCapP=0} correspond to the polynomial system:
\[\left\{\begin{array}{ccl}
0 &=& z_1^3-z_1;\\
0 &=& z_2^3-z_2;\\
0 &=& z_3^3-z_3;\\
0 &=& z_1 + z_2 + z_3;\\
0 &=& (2-z_1^2-z_2^2-z_3^2)\cdot(z_1^2+z_2^2+z_3^2);\\
\hline
0 &=& z_4^3-z_4;\\
0 &=& z_5^3-z_5;\\
0 &=& z_6^3-z_6;\\
0 &=& z_4 + z_5 + z_6;\\
0 &=& (2-z_4^2-z_5^2-z_6^2)\cdot(z_4^2+z_5^2+z_6^2);
\end{array}\right.\]
where the horizontal line distinguishes between the first and second block of $P$ (see Eq.~(\ref{def:Bi})). Finally, recall the auxiliary polynomial given by equation~\cref{def:f(z)}:
\[f(z)=z_1^2+z_2^2+z_3^2+z_4^2+z_5^2+z_6^2.\]
Assuming the lexicographic order over the monomials, one can determine that the reduced Gr\"obner basis of $\{A_0z\}\cup P\cup\{f-2\}$ is $\{1\}$; in particular, $\hbox{ker}(A_0)\cap\{P=0\}\cap\{f=2\}=\emptyset$. On the other hand, because the reduced Gr\"obner basis of $\{A_0z\}\cup P\cup\{f-4\}$ is $\{z_1+z_6, z_2+z_5, z_3-z_5-z_6, z_4+z_5+z_6, z_5^2+z_5z_6+z_6^2-1, z_6^3 - z_6\}$, it follows that $\hbox{ker}(A_0)\cap\{P=0\}\cap\{f=4\}\ne\emptyset$ i.e. $\hbox{ker}(A_0)\cap\{P=0\}$ has a non-trivial solution. Consequently, $R_0$ does not resolve $H_{2,3}$.
To confirm that $R_1=R_0\cup\{22\}$ does resolve $H_{2,3}$, note that we only need to add the equation $z_3+z_6=0$ to the previous linear system (the full linear system is now described by the matrix $A_1$, see Eq.~\cref{def:A1}). Using our code, we find that $\hbox{ker}(A_1)\cap\{P=0\}\cap\{f=2\}=\emptyset$ and also that $\hbox{ker}(A_1)\cap\{P=0\}\cap\{f=4\}=\emptyset$ because the associated reduced Gr\"obner bases are both equal to $\{1\}$. As a result, $\hbox{ker}(A_1)\cap\{P=0\}$ has no non-trivial solution, i.e. $R_1$ resolves $H_{2,3}$.
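These Gr\"obner basis computations are easy to reproduce with SymPy (the package used in \cref{sec:complexity_experiments}); the following sketch, with variable and helper names of our own choosing rather than those of the released code, re-derives the conclusion for $R_0$:

\begin{verbatim}
from sympy import symbols, groebner

z = symbols('z1:7')                 # z1,...,z6
z1, z2, z3, z4, z5, z6 = z

# linear forms A0*z for R0 = {02, 11}
linear = [z1 + z6, z2 + z5]

# block polynomials P = B1 u B2: cubes, zero-sum, and "zero or two nonzero entries"
def block(vs):
    s2 = sum(v**2 for v in vs)
    return [v**3 - v for v in vs] + [sum(vs), (2 - s2) * s2]

P = block([z1, z2, z3]) + block([z4, z5, z6])
f = sum(v**2 for v in z)

for i in (1, 2):                    # exclude the trivial root via f = 2i
    G = groebner(linear + P + [f - 2 * i], *z, order='lex')
    print(i, list(G))
# i=1 prints [1] (empty variety); i=2 prints a non-trivial basis,
# so ker(A0) and {P=0} share a non-trivial point and R0 is not resolving.
\end{verbatim}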
\section{Novel Integer Linear Programming Formulation}
\label{sec:ILP}
For some background about Integer Linear Programming (ILP), see~\cref{app:2}.
In contrast to the ILP approaches of \cite{chartrand2000resolvability,currie2001metric}, our ILP formulation checks the resolvability of a given set rather than searching for minimal resolving sets. Furthermore, it does not pre-compute the distance matrix of a Hamming graph. As before, fix $\mathbb{H}_{k,a}$ and a subset of vertices $R$. Letting $z=(z_1,\ldots,z_{ka})$ and using the polynomial set $P$ from equation~(\ref{def:P}), we leverage~\cref{lem:polsys} (with $A$ as in equation~(\ref{def:A}), each row corresponding to a vertex in $R$) to reformulate~\cref{thm:Az=0} as follows:
\begin{equation}
R \text{ does \underline{not} resolve } \mathbb{H}_{k,a} \quad \iff \quad \exists z \neq 0 \;\text{such that}\; z\in\hbox{ker}(A)\cap\{P=0\}.
\label{eq:resolveIff}
\end{equation}
To formulate this as an ILP, we use the following result.
\begin{lemma}
Define
\[\mathcal{I}:= \bigcap_{i=1}^k\left\{z\in\mathbb{Z}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\hbox{ and } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\}.\]
Then $\mathcal{I}$ is the intersection of a closed convex polyhedron with the integer lattice $\mathbb{Z}^{ak}$, and $z\in\{P=0\}$ if and only if $z\in\mathcal{I}$.
\label{lemma:ILP}
\end{lemma}
\begin{proof}
Since the intersection of convex sets is convex, and the intersection of a finite number of polyhedra is a polyhedron, it follows from standard arguments that
\begin{eqnarray*}
\mathcal{J}_1 &:=& \bigcap_{i=1}^k\left\{z\in\mathbb{R}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\right\};\\
\mathcal{J}_2 &:=& \bigcap_{i=1}^k\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\};
\end{eqnarray*}
are convex subsets of $\mathbb{R}^{ak}$, and $\mathcal{J}_1$ is a polyhedron. We claim that $\mathcal{J}_2$ is also a polyhedron, for which it suffices to check that each set in the intersection that defines it is a polyhedron. Without loss of generality, we do so only for the case with $i=1$. Indeed, because $\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a |z_j| \le 2\}$ is invariant under arbitrary coordinate sign flips, we have that
\[\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a |z_j| \le 2\right\}=\bigcap_{w\in\{-1,1\}^{ak}}\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a w_jz_j \le 2\right\},\]
which implies that $\mathcal{J}_2$ is also a polyhedron. Since $\mathcal{I}=(\mathcal{J}_1\cap\mathcal{J}_2\cap\mathbb{Z}^{ak})$, the first part of the lemma follows.
From the proof of~\cref{lem:polsys} it is immediate that $\{P=0\}\subset\mathcal{I}$. To show the converse inclusion, observe that $\{P=0\}=\cap_{i=1}^k\{B_i=0\}$ where the $B_i$'s are as defined in equation~(\ref{def:Bi}). To complete the proof, it suffices therefore to show that $\mathcal{I}_i\subset\{B_i=0\}$, where
\[\mathcal{I}_i:=\left\{z\in\mathbb{Z}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\hbox{ and } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\}.\]
Indeed, if $z\in\mathcal{I}_1$ then, because the coordinates of $z$ are integers, the condition $\sum_{j=1}^a |z_j| \le 2$ implies that $|z_j|\in\{0,1,2\}$ for $j=1,\ldots,a$. If $|z_j|=2$ for some $j$ then $\sum_{j=1}^az_j=\pm2$, which is not possible. Thus $z_j\in\{0,\pm1\}$ for $j=1,\ldots,a$; in particular, $z_j^3-z_j=0$. On the other hand, the condition $\sum_{j=1}^az_j=0$ implies that the number of 1's and (-1)'s in $(z_1,\ldots,z_a)$ balance out; in particular, since $\sum_{j=1}^a |z_j| \le 2$, either $(z_1,\ldots,z_a)$ vanishes, or it has exactly one 1 and one (-1) entry and all other entries vanish; in particular, $(2-\sum_{j=1}^az_j^2)\cdot\sum_{j=1}^az_j^2=0$. Thus, $z\in\{B_1=0\}$. The case for $i>1$ is of course the same.
\end{proof}
\begin{remark}
With current ILP solvers, one can impose that $z\in\{0,\pm1\}^{ak}$ simply as $|z_i|\le1$ for $i=1,\ldots,ak$. On the other hand, while a constraint like $\sum_{j=1}^a |z_j| \le 2$ is clearly polyhedral, it is not in the form of an affine equality or inequality suitable for ILP solvers. Nevertheless, standard reformulation techniques can convert this into a set of affine equalities and inequalities in a higher dimensional space. For example, in the product space with variables $(\tilde{z},w)$, we can write the constraint as $\sum_{j=1}^a w_j \le 2$ and $|\tilde{z}_j| \le w_j$ (i.e., $\tilde{z}_j \le w_j$ and $-\tilde{z}_j \le w_j$), which leads to an equivalent formulation of the original ILP. One may handle such reformulations automatically using the Matlab package \texttt{CVX}~\cite{cvx}.
\end{remark}
It only remains to encode the fact that we look for a \emph{nonzero} root in $\{P=0\}$, which we do via the ILP in the following theorem:
\begin{theorem}
A subset of vertices $R$ is \underline{not} resolving on $\mathbb{H}_{k,a}$ if and only if the solution to the following ILP is less than zero:
\begin{alignat}{2}
\label{eq:newILP}
&\min_{z\in\mathbb{R}^{ak}} \; & &\sum_{j=1}^{ak} 2^j z_j \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I},
\end{alignat}
where $A$ is defined in equation~\cref{def:A}.
\end{theorem}
\begin{proof}
Using equation~\cref{eq:resolveIff} and \cref{lemma:ILP}, it remains to show that the objective function is less than zero if and only if there is a non-zero feasible $z$. Suppose there is no non-zero feasible $z$. Clearly $z=0$ is feasible, hence it is the only feasible point for the ILP, and the objective value is zero. Now suppose there is some non-zero feasible $z$. Let $j'$ be the largest index with $z_{j'}\neq0$. Then because $\sum_{j=1}^{j'-1} 2^j < 2^{j'}$, and because each entry is bounded by $|z_j|\le 1$, the objective value at this $z$ is non-zero. If the objective value is negative, this proves the value of the ILP is negative; if the objective value is positive, then observe that $(-z)$ is also feasible and has a negative objective value, and hence the value of the ILP is negative.
\end{proof}
\begin{remark}
If the solution to the ILP is less than zero and hence $R$ is not a resolving set, then each optimal vector $z$ is the difference of the column-major orderings of the one-hot encodings of two $\hbox{$k$-mers}$ which are not resolved by $R$; in particular, a node that resolves this pair of $\hbox{$k$-mers}$ needs to be added to $R$ to resolve $\mathbb{H}_{k,a}$.
\end{remark}
\subsection{Practical formulations and roundoff error}
\label{sec:ILP_practical}
When $ak$ is small, it is feasible to directly solve the ILP in equation \cref{eq:newILP}. One issue with larger values of $ak$, besides an obvious increase in run-time, is that the values of $2^j$ in the objective function quickly lead to numerical overflow. A simple fix is to replace each coefficient $c_j = 2^j$ with an independently drawn realization of a standard normal random variable $\mathcal{N}(0,1)$. Since these new coefficients are independent of the feasible set, if the latter is truly larger than $\{0\}$, the probability that the entire feasible set is in the null-space of the linear function $\sum_{j=1}^{ak}c_jz_j$ is zero. Of course, again due to finite machine precision, this otherwise almost surely exact method may only be approximate. Admittedly, when running the ILP with the random coefficients $c_j$'s, finding an undoubtedly negative solution to the ILP would certify that the set $R$ is not resolving. However, if the solution is just slightly negative or vanishes within machine precision, the assessment about $R$ should be taken with a grain of salt. In this case, one should draw a new set of random coefficients and re-run the ILP to reassess the resolvability of $R$.
Another consideration is that the ILP solver wastes time finding a feasible point with the smallest possible objective, when we only care if there is a feasible point with objective smaller than $0$. Thus we could solve the feasibility problem
\begin{alignat*}{2} \label{eq:feas1}
&\textnormal{Find} \; & &z\in\mathbb{R}^{ak} \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I} \;\textnormal{and}\; \langle c, z \rangle < 0
\end{alignat*}
where $c_j = 2^j$ or $c_j \sim \mathcal{N}(0,1)$ as discussed above. (Feasibility problems can be encoded in software by minimizing the $0$ function.) Unfortunately this is not an ILP because $\{ z \mid \langle c, z \rangle <0 \}$ is not a closed set. We can partially ameliorate this by solving
\begin{alignat}{2} \label{eq:feas2}
&\textnormal{Find} \; & &z\in\mathbb{R}^{ak} \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I} \;\textnormal{and}\; \langle c, z \rangle \le -\delta
\end{alignat}
where $\delta>0$ is a small number (our code uses $\delta=10^{-3}$). Finding a feasible point $z$ is then proof that the set $R$ does not resolve $\mathbb{H}_{k,a}$. If the solver says the above problem is infeasible, it could be that $\delta$ was too large and hence the computation was inconclusive. In this case, one could run the slower program \cref{eq:newILP}.
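For concreteness, the following sketch sets up the feasibility problem of equation~\cref{eq:feas2} through the \texttt{gurobipy} interface; the function name, variable layout and default parameters are ours, and the released code may be organized differently. A clearly feasible outcome certifies that the input set is not resolving, while an infeasible outcome is, as discussed above, only evidence of resolvability.

\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def is_not_resolving(A, k, a, delta=1e-3, seed=0):
    # Returns True if a non-trivial z in ker(A) satisfying the block
    # constraints is found, certifying that the set encoded by A is
    # not resolving. Feasibility problem: default objective is 0.
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(a * k)            # random objective direction

    m = gp.Model("resolvability")
    m.Params.OutputFlag = 0
    z = m.addVars(a * k, lb=-1, ub=1, vtype=GRB.INTEGER, name="z")
    w = m.addVars(a * k, lb=0, ub=1, name="w")           # w_j >= |z_j|

    for i in range(A.shape[0]):                          # A z = 0
        m.addConstr(gp.quicksum(A[i, j] * z[j] for j in range(a * k)) == 0)
    for b in range(k):                                    # block constraints (z in I)
        idx = range(b * a, (b + 1) * a)
        m.addConstr(gp.quicksum(z[j] for j in idx) == 0)
        m.addConstr(gp.quicksum(w[j] for j in idx) <= 2)
        for j in idx:
            m.addConstr(z[j] <= w[j])
            m.addConstr(-z[j] <= w[j])
    m.addConstr(gp.quicksum(c[j] * z[j] for j in range(a * k)) <= -delta)  # exclude z = 0

    m.optimize()
    return m.Status == GRB.OPTIMAL            # feasible point found -> not resolving
\end{verbatim}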
\section{Computational Complexity Experiments}
\label{sec:complexity_experiments}
The theoretical framework and algorithms proposed in this paper provide a novel way of approaching resolvability on Hamming graphs. To show the computational feasibility and practicality of our methods, we compare the average run-time of both the ILP and Gr\"obner basis algorithms against the brute force approach for checking resolvability. Our experiments use Python 3.7.3 and SymPy version 1.1.1~\cite{SymPy}, and the commercial ILP solver \texttt{gurobi} ver.~7.5.2~\cite{gurobi}.
In~\cref{tab:k_a_pairs}, we present the average run-time and standard deviation of the algorithms on reference test sets for Hamming graphs of increasing sizes. \cref{fig:runtime} displays the mean run-times as a function of the graph size, and the best linear fit for each method. As seen in the table and figure, the brute force approach is faster only on the smallest Hamming graphs (with fewer than $1000$ nodes), whereas the ILP solution is exceptionally fast even as the Hamming graph grows to more than $6000$ nodes. For small problems, the time taken to solve the ILP is likely dominated by the overhead cost of using \texttt{CVX} to recast the ILP into standard form. The run-time results show a promising improvement in computational time over the brute force approach which will only become more pronounced on massive Hamming graphs. Additionally, the brute force approach is infeasible on these larger graphs due to significant memory costs.
The ILP algorithm is exceptionally quick, beating all other methods for Hamming graphs with more than 1000 nodes, but it cannot guarantee that a set is resolving. The Gr\"obner basis algorithm by contrast is much slower on average but is a deterministic method of showing resolvability. ILP can be used to quickly determine possible resolving sets which are then verified by the Gr\"obner basis algorithm. In this way, the two methods are symbiotic and cover each other's weaknesses. We illustrate this in the next section.
\begin{table}
\centering
\tiny
\begin{tabular}{cS[table-format=4]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]}
\toprule
& & \multicolumn{2}{c}{Brute Force} & \multicolumn{2}{c}{Gr\"obner Basis} & \multicolumn{2}{c}{ILP} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
($k,a$) & {$a^k$} & {Mean} & {SD} & {Mean} & {SD} & {Mean} & {SD} \\
\midrule
(2,2) & 4 & 3.88e-05 & 1.51e-06 & 6.79e-03 & 1.06e-03 & 1.28e-01 & 3.53e-03 \\
(2,4) & 16 & 2.47e-04 & 6.83e-05 & 2.25e-02 & 2.59e-03 & 1.16e-01 & 4.84e-03 \\
(3,3) & 27 & 5.02e-4 & 2.45e-4 & 2.83e-2 & 7.92e-3 & 1.21e-01 & 8.12e-3 \\
(5,2) & 32 & 6.61e-04 & 3.29e-04 & 3.14e-02 & 5.27e-03 & 1.28e-01 & 3.91e-03 \\
(3,5) & 125 & 8.98e-03 & 5.38e-03 & 1.12e-01 & 2.91e-02 & 1.37e-01 & 7.02e-03 \\
(5,3) & 243 & 2.78e-2 & 1.96e-2 & 1.22e-1 & 7.88e-2 & 1.20e-1 & 8.12e-3 \\
(8,2) & 256 & 2.85e-02 & 2.21e-02 & 9.87e-02 & 1.96e-02 & 1.17e-01 & 1.59e-03 \\
(4,4) & 256 & 3.13e-02 & 1.97e-02 & 1.27e-01 & 3.90e-02 & 1.37e-01 & 9.58e-03 \\
(5,5) & 3125 & 5.19e+00 & 3.17e+00 & 4.00e+00 & 3.54e+00 & 1.35e-01 & 1.09e-02 \\
(12,2) & 4096 & 6.28e+00 & 5.34e+00 & 2.93e-01 & 7.24e-02 & 1.24e-01 & 2.39e-03 \\
(6,4) & 4096 & 7.78e+00 & 4.65e+00 & 7.73e-01 & 3.67e-01 & 1.52e-01 & 8.99e-03 \\
(8,3) & 6561 & 2.02e+01 & 1.40e+01 & 1.12e+01 & 1.47e+01 & 1.62e-01 & 1.41e-02 \\
\bottomrule
\end{tabular}
\caption{Time in seconds required to determine resolvability for each technique. Fifty resolving and fifty non-resolving sets, selected uniformly at random, were considered for each Hamming graph $\mathbb{H}_{k,a}$. Means and standard deviations consider five replicates per set.}
\label{tab:k_a_pairs}
\end{table}
\begin{figure}
\centering
\includegraphics[width=2.7in]{Runtime.pdf}
\caption{Data from~\cref{tab:k_a_pairs} with lines of best fit (on log-transformed data) for each method.}
\label{fig:runtime}
\end{figure}
\section{Low-dimensional Protein Representations}
\label{sec:protein_representation}
Symbolic information pervades modern data science. With the advent and popularization of high-throughput sequencing assays, this is particularly true in the field of computational biology where large volumes of biological sequence data have become critical for studying and understanding the behavior of cells. Analysis of these sequences, however, presents significant challenges. One major issue is that many powerful analysis techniques deal with numeric vectors, not arbitrary symbols. As a result, biological sequence data is typically mapped to a real space before such methods are applied. Two of the most common mappings use K-mer count~\cite{leslie2002spectrum} and one-hot encodings (also called binary vectors)~\cite{cai2003support}. K-mer count vectors represent symbolic sequences by their counts of each possible K-mer.
Resolving sets can be used to define low-dimensional mappings as well. To fix ideas we focus on octapeptides, that is, proteins composed of 8 amino acids. With a total of 20 possible amino acids (which we represent as {\ttfamily {\footnotesize a,r,n,d,c,q,e,g,h,i,l,k,m,f,p,s,t,w,y,v}}) and imposing the Hamming distance across these sequences, we have the Hamming graph $\mathbb{H}_{8,20}$. This graph is massive. It has $25.6$ billion vertices and roughly $1.9$ trillion edges, rendering most methods of discovering small resolving sets, including the ICH algorithm, useless. Utilizing a constructive algorithm, a resolving set of size 82, which we call $R$, was discovered for $\mathbb{H}_{8,20}$ in~\cite{TilLla19}. However, it is not known whether $R$ contains a proper subset that is still resolving. Here, we address this problem by applying the results of sections~\ref{sec:grobner} and \ref{sec:ILP}.
Starting with lower and upper bounds $L=1$ and $U=82$ respectively, we implement a binary search for $\beta(\mathbb{H}_{8,20})$. With $s=\frac{L+U}{2}$ as the current subset size to check, up to 1000 subsets of $R$ are selected at random. The ILP approach (\cref{sec:ILP}) then provides an efficient method for testing the feasibility problem outlined in \cref{thm:Az=0} for these subsets. If any subset passes this test, the upper bound is set to $s$. Otherwise, $s$ becomes the lower bound. This process is repeated until $L=(U-1)$. Following this procedure, we found the following set of size $77$:
\[
r:=\left\{
{
\scriptsize
\begin{tabular}{lllllll}
aaaraaaa, & arwaaaaa, & ccchhhhh, & ccchhhhi, & ccchhhia, & ccchhiaa, & ccchiaaa,\\
ccciaaaa, & cnsaaaaa, & dddeeeee, & dddeeeeg, & dddeeega, & dddeegaa, & dddegaaa,\\
dddgaaaa, & dhfaaaaa, & eagaaaaa, & eeefaaaa, & eeemfaaa, & eeemmfaa, & eeemmmfa,\\
eeemmmmf, & eeemmmmm, & fffaaaaa, & gggppppp, & gggpppps, & gggpppsa, & gggppsaa,\\
gggpsaaa, & gggsaaaa, & hhhttttt, & hhhttttw, & hhhtttwa, & hhhttwaa, & hhhtwaaa,\\
hhhwaaaa, & hpvaaaaa, & iiivaaaa, & iiiyvaaa, & iiiyyvaa, & iiiyyyva, & iiiyyyyv,\\
iiiyyyyy, & kkkaaaaa, & klqaaaaa, & lllaaaaa, & mkyaaaaa, & mmmaaaaa, & nnnccccc,\\
nnnccccq, & nnncccqa, & nnnccqaa, & nnncqaaa, & nnnqaaaa, & nstaaaaa, & pppaaaaa,\\
qpkaaaaa, & qqqkaaaa, & qqqlkaaa, & qqqllkaa, & qqqlllka, & qqqllllk, & qqqlllll,\\
qyeaaaaa, & rrrdaaaa, & rrrndaaa, & rrrnndaa, & rrrnnnda, & rrrnnnnd, & rrrnnnnn,\\
sisaaaaa, & svtaaaaa, & ttcaaaaa, & vfraaaaa, & wmpaaaaa, & wwdaaaaa, & yglaaaaa
\end{tabular}
}
\right\}.
\]
Since the ILP formulation does not guarantee that this set is resolving, we verified the result using a parallelized version of the Polynomial Roots Formulation (\cref{sec:grobner}) so that the Gr\"obner bases of multiple auxiliary polynomials (Eq.~\cref{def:f(z)}) may be determined simultaneously. Thus, we have found a set $r\subset R$ of size 77 that resolves $\mathbb{H}_{8,20}$; in particular, $\beta(\mathbb{H}_{8,20})\le77$, which improves the bound of~\cite{TilLla19}, and all $25.6$ billion octapeptides may be uniquely represented with only 77 dimensions. In contrast, a $2$-mer count vector representation would require 400 dimensions and a one-hot encoding 160 dimensions.
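The random subset search described above can be sketched as follows, reusing the hypothetical \texttt{build\_A} and \texttt{is\_not\_resolving} helpers from the earlier sketches; any subset reported as resolving by this loop still needs to be confirmed with the Gr\"obner basis method, exactly as done for $r$.

\begin{verbatim}
import random

def shrink_resolving_set(R, k, a, trials=1000, seed=0):
    # binary search over subset sizes of a known resolving set R
    rng = random.Random(seed)
    best, L, U = list(R), 1, len(R)
    while L < U - 1:
        s = (L + U) // 2
        found = None
        for _ in range(trials):
            S = rng.sample(list(R), s)
            A = build_A(S, a)                  # matrix A of the resolvability criterion
            if not is_not_resolving(A, k, a):  # ILP found no unresolved pair
                found = S
                break
        if found is not None:
            best, U = found, s                 # candidate resolving subset of size s
        else:
            L = s                              # no sampled subset of size s passed
    return best
\end{verbatim}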
\begin{remark}
We replicated the verification of $r$ as a resolving set of $H_{8,20}$ using our Polynomial Roots Formulation 10 times across 32 computer cores. Overall, a maximum of approximately 380 megabytes of memory per core (SD $\sim 0.5$ MB) and 6 hours and 20 minutes (SD $\sim142$ s) were required to demonstrate the resolvability of $r$. Memory usage was determined using the Slurm workload manager \verb|sacct| command and \verb|maxRSS| field, while time was measured using Python's \verb|time| module.
\end{remark}
\newpage
\section{Introduction}
The Cluster Variation Method (CVM) was introduced by Kikuchi
\cite{Kik51} in 1951, as an approximation technique for the
equilibrium statistical mechanics of lattice (Ising--like) models,
generalizing the Bethe--Peierls \cite{Bet35,Pei36} and
Kramers--Wannier \cite{KraWan1,KraWan2} approximations, an account of
which can be found in several textbooks \cite{PliBer,LavBel}. Apart
from rederiving these methods, Kikuchi proposed a combinatorial
derivation of what today we can call the cube (respectively triangle,
tetrahedron) approximation of the CVM for the Ising model on the
simple cubic (respectively triangular, face centered cubic) lattice.
After the first proposal, many reformulations and applications, mainly
to the computation of phase diagrams of lattice models in statistical
physics and materials science, appeared, and have been reviewed in
\cite{PTPS}. The main line of activity has dealt with homogeneous,
translation--invariant lattice models with classical, discrete degrees
of freedom, but several other directions have been followed, including
for instance models with continuous degrees of freedom
\cite{KikuchiCont}, free surfaces \cite{MoranLopez,BuzPel}, models of
polymers \cite{Aguilera,LiseMaritan} and quantum models
\cite{MoritaQuant,Danani}. Out of equilibrium properties have also
been studied, in the framework of the path probability method
\cite{Ishii,Ducastelle,WadaKaburagi}, which is the dynamical version
of the CVM. Although the CVM predicts mean--field--like critical
behaviour, the problem of extracting critical behaviour from sequences
of CVM approximations has also been considered by means of different
approaches \cite{CVPAM1,CVPAM2,CVPAM3,CVPAM4,CAM}.
A line of research which is particularly relevant to the present
discussion has considered heterogeneous and random models. Much work
has been devoted in the 1980s to applications of the CVM to models with
quenched random interactions (see e.g.\ \cite{SeinoKatsura} and refs.\
therein), mainly aiming at the phase diagram, and related equilibrium
properties, of Ising--like models of spin glasses in the average
case. The most common approach was based on the distribution of the
effective fields, and population dynamics algorithms were developed
and studied for the corresponding integral equations. All this effort
was however limited to the replica--symmetric level. Approaches taking
into account the first step of replica symmetry breaking have been
developed only recently \cite{SPScience}, at the level of the
Bethe--Peierls approximation, in its cavity method formulation, for
models on random graphs in both the single instance and average
case. These approaches have been particularly successful in their
application to combinatorial optimization problems, like
satisfiability \cite{SPSAT} and graph coloring \cite{SPCOL}. Another
interesting approach going in a similar direction has been proposed
recently \cite{Jort}, which relies on the analysis of the time
evolution of message--passing algorithms for the Bethe--Peierls
approximation.
Prompted by the interest in optimization and, more generally,
inference problems, a lot of work on the CVM has been done in recent
years also by researchers working on probabilistic graphical models
\cite{Smy97}, since the relation between the Bethe--Peierls
approximation and the belief propagation method \cite{Pearl} was
recognized \cite{Yed01}. The interaction between the two communities
of researchers working on statistical physics and optimization and
inference algorithms then led to the discovery of several new
algorithms for the CVM variational problem, and to a deeper
understanding of the method itself. There have been applications in
the fields of image restoration
\cite{TanMor,Tan02,Tanetal03,Tanetal04}, computer vision
\cite{FrePasCar}, interference in two--dimensional channels
\cite{Noam}, decoding of error--correcting codes
\cite{Gallager,McEliece,KabSaaLDPCC}, diagnosis \cite{Diagnosis},
unwrapping of phase images \cite{Unwrapping}, bioinformatics
\cite{BurgeKarlin,BioSeqAn,Krogh}, language processing
\cite{Huang,Manning}.
The purpose of the present paper is to give a short account of recent
advances on methodological aspects, and therefore applications will
not be considered in detail. It is not meant to be exhaustive and the
material included reflects in some way the interests of the
author. The plan of the paper is as follows. In \Sref{SMM-PGM} the
basic definitions for statistical mechanics and probabilistic
graphical models are given, and notation is established. In
\Sref{Fundamentals} the CVM is introduced in its modern formulation,
and in \Sref{RegionBased} it is compared with related approximation
techniques. Its properties are then discussed, with particular
emphasis on exact results, in \Sref{Exact}. Finally, the use of the CVM
as an approximation and the algorithms which can be used to solve the
CVM variational problem are illustrated in \Sref{Approx}. Conclusions
are drawn in \Sref{Conclusions}.
\section{Statistical mechanical models and probabilistic graphical
models}
\label{SMM-PGM}
We are interested in dealing with models with discrete degrees of
freedom which will be denoted by $\bi{s} = \{ s_1, s_2, \ldots s_N
\}$. For instance, variables $s_i$ could take values in the set
$\{ 0,1 \}$ (binary variables), $\{ -1, +1 \}$ (Ising spins), or $\{ 1,
2, \ldots q \}$ (Potts variables).
Statistical mechanical models are defined through an energy function,
usually called Hamiltonian, $H = H(\bi{s})$, and the corresponding
probability distribution at thermal equilibrium is the Boltzmann
distribution
\begin{equation}
p(\bi{s}) = \frac{1}{Z} \exp\left[ - H(\bi{s}) \right],
\end{equation}
where the inverse temperature $\beta = (k_B T)^{-1}$ has been absorbed
into the Hamiltonian and
\begin{equation}
Z \equiv \exp(-F) = \sum_{\bi{s}} \exp\left[ - H(\bi{s}) \right]
\end{equation}
is the partition function, with $F$ the free energy.
The Hamiltonian is typically a sum of terms, each involving a small
number of variables. A useful representation is given by the {\it
factor graph} \cite{Kschischang}. A factor graph is a bipartite graph
made of variable nodes $i, j, \ldots$, one for each variable, and {\it
function nodes} $a, b, \ldots$, one for each term of the
Hamiltonian. An edge joins a variable node $i$ and a function node $a$
if and only if $i \in a$, that is the variable $s_i$ appears in $H_a$,
the term of the Hamiltonian associated with $a$. The Hamiltonian can
then be written as
\begin{equation}
H = \sum_a H_a(\bi{s_a}), \qquad \bi{s_a} = \{ s_i, i \in a \}.
\label{HsumHa}
\end{equation}
A simple example of a factor graph is reported in
\Fref{FactorGraph}, and the corresponding Hamiltonian is written as
\begin{equation}
\fl H(s_1,s_2,s_3,s_4,s_5,s_6) = H_a(s_1,s_2) + H_b(s_2,s_3,s_4) +
H_c(s_3,s_4,s_5,s_6).
\end{equation}
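For readers more familiar with code than with diagrams, a factor graph can be represented, for instance, as a mapping from function nodes to the variables they touch. The following Python sketch does so for the example of \Fref{FactorGraph}; the local terms are placeholders chosen only for this illustration (the actual $H_a$, $H_b$, $H_c$ are left unspecified above), and the brute-force partition function is of course feasible only for very small models.
\begin{verbatim}
import math
from itertools import product

# Function nodes and the variables they touch; placeholder local terms.
factors = {
    'a': ((1, 2),       lambda s1, s2:         -s1 * s2),
    'b': ((2, 3, 4),    lambda s2, s3, s4:     -s2 * s3 * s4),
    'c': ((3, 4, 5, 6), lambda s3, s4, s5, s6: -s3 * s4 * s5 * s6),
}
variables = sorted({i for idx, _ in factors.values() for i in idx})

def H(s):
    # H(s) = sum_a H_a(s_a), with s a dict {variable index: value}
    return sum(term(*(s[i] for i in idx)) for idx, term in factors.values())

# Brute-force partition function Z = sum_s exp(-H(s))  (tiny models only)
configs = [dict(zip(variables, vals))
           for vals in product([-1, +1], repeat=len(variables))]
Z = sum(math.exp(-H(s)) for s in configs)
boltzmann = {tuple(s.values()): math.exp(-H(s)) / Z for s in configs}
\end{verbatim}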
\begin{figure}
\begin{center}
\pspicture(-3,-1)(10,3)
\scalebox{0.7}{
\pscircle(0,1){.3}
\pscircle(4,1){.3}
\pscircle(8,0){.3}
\pscircle(8,2){.3}
\pscircle(12,0){.3}
\pscircle(12,2){.3}
\rput(0,1){1}
\rput(4,1){2}
\rput(8,0){4}
\rput(8,2){3}
\rput(12,0){6}
\rput(12,2){5}
\psframe(1.7,.7)(2.3,1.3)
\psframe(5.7,.7)(6.3,1.3)
\psframe(9.7,.7)(10.3,1.3)
\rput(2,1){$a$}
\rput(6,1){$b$}
\rput(10,1){$c$}
\psline(.3,1)(1.7,1)
\psline(2.3,1)(3.7,1)
\psline(4.3,1)(5.7,1)
\psline(6.3,1.15)(7.73,1.87)
\psline(6.3,.85)(7.73,.13)
\psline(8.27,1.87)(9.7,1.15)
\psline(8.27,.13)(9.7,.85)
\psline(10.3,1.15)(11.73,1.87)
\psline(10.3,.85)(11.73,.13)
}
\endpspicture
\end{center}
\caption{\label{FactorGraph}An example of a factor graph: variable and
function nodes are denoted by circles and squares, respectively}
\end{figure}
The factor graph representation is particularly useful for models with
non--pairwise interactions. If the Hamiltonian contains only
1--variable and 2--variable terms, as in the Ising model
\begin{equation}
H = - \sum_i h_i s_i - \sum_{(i,j)} J_{ij} s_i s_j,
\label{Ising}
\end{equation}
then it is customary to draw a simpler graph, where only variable
nodes appear, and edges are drawn between pairs of interacting spins
$(i,j)$. In physical models the interaction strength $J_{ij}$ can
depend on the distance between spins, and interaction is often
restricted to nearest neighbours (NNs), which are denoted by $\langle
i,j \rangle$.
In combinatorial optimization problems, the Hamiltonian plays the role
of a cost function, and one is interested in the low--temperature
limit $T \to 0$, where only minimal energy states (ground states) have
a non--vanishing probability.
Probabilistic graphical models \cite{Smy97,Lauritzen} are usually
defined in a slightly different way. In the case of {\it Markov random
fields}, also called {\it Markov networks}, the joint distribution over
all variables is given by
\begin{equation}
p(\bi{s}) = \frac{1}{Z} \prod_a \psi_a(\bi{s_a}),
\end{equation}
where $\psi_a$ is called {\it potential} (potentials involving only one
variable are often called {\it evidences}) and
\begin{equation}
Z = \sum_{\bi{s}} \prod_a \psi_a(\bi{s_a}).
\end{equation}
Of course, a statistical mechanical model described by the Hamiltonian
(\ref{HsumHa}) corresponds to a probabilistic graphical model with
potentials $\psi_a = \exp(-H_a)$. On the other hand, {\it Bayesian
networks}, which we will not consider here in detail, are defined in
terms of directed graphs and conditional probabilities. It must be
noted, however, that a Bayesian network can always be mapped onto a
Markov network \cite{Smy97}.
\section{Fundamentals of the Cluster Variation Method}
\label{Fundamentals}
The original proposal by Kikuchi \cite{Kik51} was based on an
approximation for the number of configurations of a lattice model with
assigned local expectation values. The formalism was rather involved
to deal with in the general case, and since then many reformulations
came. A first important step was taken by Barker \cite{Bar53}, who
derived a computationally useful expression for the entropy
approximation. This was then rewritten as a cumulant expansion by
Morita \cite{Mor57,Mor72}, and Schlijper \cite{Sch83} noticed that
this expansion could have been written in terms of a M\"obius
inversion. A clear and simple formulation was then eventually set up
by An \cite{An88}, and this is the one we shall follow below.
The CVM can be derived from the
variational principle of equilibrium statistical mechanics, where the
free energy is given by
\begin{equation}
F = - \ln Z = \min_p {\cal F}(p) = \min_p \sum_{\bi{s}}
\left[ p(\bi{s}) H(\bi{s}) + p(\bi{s}) \ln p(\bi{s}) \right]
\label{VarPrin}
\end{equation}
subject to the normalization constraint
\begin{equation}
\sum_{\bi{s}} p(\bi{s}) = 1.
\end{equation}
It is easily verified that the minimum is obtained for the Boltzmann
distribution
\begin{equation}
\hat p(\bi{s}) = \frac{1}{Z} \exp[- H(\bi{s})] = {\rm arg} \,{\rm min}
\, {\cal F}
\end{equation}
and that the variational free energy can be written in the form of a
Kullback--Leibler distance
\begin{equation}
{\cal F}(p) = F + \sum_{\bi{s}} p(\bi{s}) \ln \frac{p(\bi{s})}{\hat
p(\bi{s})}.
\end{equation}
The basic idea underlying the CVM is to treat exactly the first term
(energy) of the variational free energy ${\cal F}(p)$ in
\Eref{VarPrin} and to approximate the second one (entropy) by means of
a truncated cumulant expansion.
We first define a {\it cluster} $\alpha$ as a subset of the factor
graph such that if a function node $a$ belongs to $\alpha$, then all the
variable nodes $i \in a$ also belong to $\alpha$ (while the converse
need not be true; otherwise the only legitimate clusters would
be the connected components of the factor graph). Given a cluster we
can define its energy
\begin{equation}
H_\alpha(\bi{s_\alpha}) = \sum_{a \in \alpha} H_a(\bi{s_a}),
\end{equation}
probability distribution
\begin{equation}
p_\alpha(\bi{s_\alpha}) = \sum_{\bi{s} \setminus \bi{s_\alpha}} p(\bi{s})
\end{equation}
and entropy
\begin{equation}
S_\alpha = - \sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) \ln
p_\alpha(\bi{s_\alpha}).
\end{equation}
Then the entropy cumulants are defined by
\begin{equation}
S_\alpha = \sum_{\beta \subseteq \alpha} \tilde S_\beta,
\end{equation}
which can be solved with respect to the cumulants by means of a
M\"obius inversion, which yields
\begin{equation}
\tilde S_\beta = \sum_{\alpha \subseteq \beta}
(-1)^{n_\alpha - n_\beta} S_\alpha,
\end{equation}
where $n_\alpha$ denotes the number of variables in cluster
$\alpha$. The variational free energy can then be written as
\begin{equation}
{\cal F}(p) = \sum_{\bi{s}} p(\bi{s}) H(\bi{s}) - \sum_\beta \tilde
S_\beta,
\end{equation}
where the second summation is over all possible clusters.
The above equation is still an exact one, and here the approximation
enters. A set $R$ of clusters, made of maximal clusters and all their
subclusters, is selected, and the cumulant expansion of the entropy is
truncated retaining only terms corresponding to clusters in $R$. In
order to treat the energy term exactly it is necessary that each
function node is contained in at least one maximal cluster. One gets
\begin{equation}
\sum_\beta \tilde S_\beta \simeq \sum_{\beta \in R} \tilde S_\beta =
\sum_{\alpha \in R} a_\alpha S_\alpha,
\label{CVMapprox}
\end{equation}
where the coefficients $a_\alpha$, sometimes called M\"obius numbers,
satisfy \cite{An88}
\begin{equation}
\sum_{\beta \subseteq \alpha \in R} a_\alpha = 1 \qquad
\forall \beta \in R.
\label{MobiusNumbers}
\end{equation}
The above condition means that every subcluster must be counted
exactly once in the entropy expansion and makes it possible to rewrite
the energy term, too, as a sum of cluster energies, yielding the approximate
variational free energy
\begin{equation}
{\cal F}(\{p_\alpha, \alpha \in R\}) = \sum_{\alpha \in R} a_\alpha
{\cal F}_\alpha(p_\alpha),
\label{CVMFree}
\end{equation}
where the cluster free energies are given by
\begin{equation}
{\cal F}_\alpha(p_\alpha) = \sum_{\bi{s_\alpha}} \left[
p_\alpha(\bi{s_\alpha}) H_\alpha(\bi{s_\alpha}) +
p_\alpha(\bi{s_\alpha}) \ln p_\alpha(\bi{s_\alpha}) \right].
\label{ClusterFree}
\end{equation}
The CVM then amounts to the minimization of the above variational free
energy with respect to the cluster probability distributions, subject
to the normalization
\begin{equation}
\sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) = 1 \qquad \forall \alpha
\in R
\end{equation}
and compatibility constraints
\begin{equation}
p_\beta(\bi{s_\beta}) = \sum_{\bi{s_{\alpha \setminus \beta}}}
p_\alpha(\bi{s_\alpha}) \qquad
\forall \beta \subset \alpha \in R.
\label{CompConstr}
\end{equation}
It is of great importance to observe that the above constraint set is
approximate, in the sense that there are sets of cluster probability
distributions that satisfy these constraints and nevertheless cannot
be obtained as marginals of a joint probability distribution. An
explicit example will be given in \Sref{Exact}.
The simplest example is the pair approximation for a model with
pairwise interactions, like the Ising model (\ref{Ising}). The maximal
clusters are the pairs of interacting variables, and the other
clusters appearing in $R$ are the variable nodes. The pairs have
M\"obius number 1, while for the variable nodes $a_i = 1 - d_i$, where
$d_i$ is the {\it degree} of node $i$, that is, in the factor graph
representation, the number of function nodes it belongs to.
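The condition (\ref{MobiusNumbers}) can be solved recursively, starting from the maximal clusters and proceeding towards smaller ones. As an illustration, the following Python sketch computes the M\"obius numbers for an arbitrary choice of maximal clusters and reproduces the pair approximation values ($a = 1$ for pairs, $a_i = 1 - d_i$ for variable nodes) on a small cycle.
\begin{verbatim}
from itertools import combinations

def mobius_numbers(maximal_clusters):
    # R = maximal clusters and all their (non-empty) subclusters; the
    # recursion a_alpha = 1 - sum of a_gamma over gamma strictly containing
    # alpha enforces the single counting condition (every subcluster counted
    # exactly once).
    R = set()
    for c in maximal_clusters:
        items = sorted(c)
        for r in range(1, len(items) + 1):
            R.update(frozenset(sub) for sub in combinations(items, r))
    a = {}
    for alpha in sorted(R, key=len, reverse=True):   # largest clusters first
        a[alpha] = 1 - sum(a[gamma] for gamma in R if alpha < gamma)
    return a

# Pair approximation on a 4-cycle: edges get a = 1, sites get a = 1 - d_i = -1
print(mobius_numbers([{1, 2}, {2, 3}, {3, 4}, {4, 1}]))
\end{verbatim}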
The quality of the approximation (\ref{CVMapprox}) depends on the value
of the neglected cumulants. In the applications to lattice systems it
is typically assumed that, since cumulants are related to
correlations, they vanish quickly for clusters larger than the
correlation length of the model. In \Fref{Cumulants} the first
cumulants, relative to the site (single variable) entropy, are shown
for the homogeneous ($J_{ij} = J$), zero field ($h_i = 0$), square
lattice Ising model, in the square approximation of the CVM.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{Cumulants.eps}
\end{center}
\caption{\label{Cumulants}Cumulants for the square lattice Ising model}
\end{figure}
It can be seen that the cumulants peak at the (approximate) critical
point and decrease as the cluster size increases. This property is not,
however, completely general; it may depend on the interaction range. It
has been shown \cite{KappenWiegerinck} that this does not hold for
finite instances of the Sherrington--Kirkpatrick spin--glass model,
which is a fully connected model.
The meaning of cumulants as a measure of correlation can be easily
understood by considering a pair of weakly correlated variables and
writing their joint distribution as
\begin{equation}
p_{12}(s_1,s_2) = p_1(s_1) p_2(s_2) \left[ 1 +
\varepsilon \, q(s_1,s_2) \right], \qquad \varepsilon \ll 1.
\end{equation}
The corresponding cumulant is then
\begin{equation}
\tilde S_{12} = S_{12} - S_1 - S_2 = - \langle \ln \left[ 1 +
\varepsilon \, q(s_1,s_2) \right] \rangle = \Or(\varepsilon).
\end{equation}
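For small systems, cluster entropies and their cumulants can be computed by brute force directly from the definitions above. The following Python sketch is meant only as an illustration; it assumes that the joint distribution is stored as a dictionary mapping configuration tuples to probabilities.
\begin{verbatim}
import math
from itertools import combinations

def subsets(cluster):
    # all non-empty subclusters of a cluster (frozenset of variable indices)
    items = sorted(cluster)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def marginal(p, cluster, variables):
    # marginal of p (dict: configuration tuple -> probability) on `cluster`
    idx = [variables.index(i) for i in sorted(cluster)]
    m = {}
    for config, prob in p.items():
        key = tuple(config[j] for j in idx)
        m[key] = m.get(key, 0.0) + prob
    return m

def entropy(dist):
    return -sum(q * math.log(q) for q in dist.values() if q > 0)

def entropy_cumulants(p, variables):
    # S~_beta = sum over alpha contained in beta of (-1)^(n_beta-n_alpha) S_alpha
    clusters = subsets(frozenset(variables))
    S = {c: entropy(marginal(p, c, variables)) for c in clusters}
    return {b: sum((-1) ** (len(b) - len(a)) * S[a] for a in subsets(b))
            for b in clusters}
\end{verbatim}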
\section{Region--based free energy approximations}
\label{RegionBased}
The idea of {\it region--based free energy approximations}, put
forward by Yedidia \cite{Yed04}, is quite useful to elucidate some of
the characteristics of the method, and its relations to other
techniques. A region--based free energy approximation is formally
similar to the CVM, and can be defined through equations (\ref{CVMFree})
and (\ref{ClusterFree}), but the requirements on the coefficients
$a_\alpha$ are weaker. The single counting condition is imposed only
on variable and function nodes, instead of all subclusters:
\begin{eqnarray}
\sum_{\alpha \in R, a \in \alpha} a_\alpha = 1 \qquad \forall a, \\
\sum_{\alpha \in R, i \in \alpha} a_\alpha = 1 \qquad \forall i.
\end{eqnarray}
Interesting particular cases are obtained if $R$ contains only two
types of regions, {\it large regions} and {\it small regions}. The
{\it junction graph} method \cite{Yed04,AjiMc} is obtained if they
form a directed graph, with edges from large to small regions, such
that:
\begin{enumerate}
\item every edge connects a large region with a small region which is
a subset of the former;
\item the subgraph of the regions containing a given node is a
connected tree.
\end{enumerate}
On the other hand, the {\it Bethe--Peierls approximation}, in its most general
formulation, is obtained by taking function nodes (with the associated
variable nodes) as large regions and variable nodes as small
regions. This reduces to the usual statistical physics formulation in
the case of pairwise interactions.
The CVM is a special region--based free energy approximation, with the
property that $R$ is closed under intersection. Indeed, one could
define $R$ for the CVM as the set made of the maximal clusters and all
the clusters which can be obtained by taking all the possible
intersections of (any number of) maximal clusters.
It is easy to verify that the Bethe--Peierls approximation is a
special case of CVM only if no function node shares more than one
variable node with another function node. If this is not the case, one
should be careful when applying the Bethe--Peierls
approximation. Consider a model with the factor graph depicted in
\Fref{BetheNotCVM}, where $s_i = \pm 1$ ($i = 1, 2, 3, 4$), $H = H_a +
H_b$ and
\begin{eqnarray}
H_a(s_1,s_2,s_3) = - h_0 s_1 - \frac{h}{2} (s_2 + s_3) - J s_1 s_2 s_3,
\\
H_b(s_2,s_3,s_4) = - h_0 s_4 - \frac{h}{2} (s_2 + s_3) - J s_2 s_3 s_4.
\end{eqnarray}
\begin{figure}
\begin{center}
\pspicture(-1,-2)(7,2)
\scalebox{0.7}{
\pscircle(0,0){.3}
\pscircle(4,1){.3}
\pscircle(4,-1){.3}
\pscircle(8,0){.3}
\psframe(1.7,-.3)(2.3,.3)
\psframe(5.7,-.3)(6.3,.3)
\rput(0,0){1}
\rput(4,1){2}
\rput(4,-1){3}
\rput(8,0){4}
\rput(2,0){$a$}
\rput(6,0){$b$}
\psline(.3,0)(1.7,0)
\psline(2.3,.15)(3.73,.87)
\psline(2.3,-.15)(3.73,-.87)
\psline(4.23,.87)(5.7,.15)
\psline(4.23,-.87)(5.7,-.15)
\psline(6.3,0)(7.7,0)
}
\endpspicture
\end{center}
\caption{\label{BetheNotCVM}Factor graph of a model for which the
Bethe--Peierls approximation is not a special case of the CVM}
\end{figure}
The CVM, with function nodes as maximal clusters, is exact (notice
that it coincides with the junction graph method), and the corresponding exact
cumulant expansion for the entropy is
\begin{equation}
S = S_a + S_b - S_{23},
\end{equation}
while the Bethe--Peierls entropy is
\begin{equation}
S_{\rm BP} = S_a + S_b - S_2 - S_3.
\end{equation}
The two entropies differ by the cumulant $\tilde S_{23} = S_{23} - S_2
- S_3$, and hence correlations between variable nodes 2 and 3 cannot
be captured by the Bethe--Peierls approximation. In
\Fref{BetheFailure} it is clearly illustrated how the Bethe--Peierls
approximation can fail. At large enough $J$ the exact entropy is
larger (by roughly $\ln 2$) than the Bethe--Peierls one.
\begin{figure}
\begin{center}
\includegraphics*[scale=.38]{BetheFailure.eps}
\end{center}
\caption{\label{BetheFailure}Entropy of the Bethe--Peierls approximation vs the
exact one for a model for which the Bethe--Peierls approximation is not a
special case of the CVM}
\end{figure}
\section{Exactly solvable cases}
\label{Exact}
The CVM is known to be exact in several cases, due to the topology of
the underlying graph, or to the special form of the Hamiltonian. In
the present section we shall first consider cases in which the CVM is
exact due to the graph topology, then proceed to the issue of
realizability and consider cases where the form of the Hamiltonian
makes an exact solution feasible with the CVM.
\subsection{Tree-like graphs}
It is well known that the CVM is exact for models defined on
tree--like graphs. This statement can be made more precise by
referring to the concept of {\it junction tree} \cite{LauSpi,Jensen},
which we shall actually use in its generalized form given by Yedidia,
Freeman and Weiss \cite{Yed04}. A junction tree is a tree--like
junction graph. The corresponding large regions are often called {\it
cliques}, and the small regions {\it separators}. With reference to
\Fref{BetheNotCVM} it is easy to check that the CVM, as described in
the previous section, corresponds to a junction tree with cliques
$(a123)$ and $(b234)$ and separator $(23)$, while the junction graph
corresponding to the Bethe--Peierls approximation is not a tree.
For a model defined on a junction tree the joint probability
distribution factors \cite{Yed04,Cowell} according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\alpha \in R_L}
p_{\alpha}(\bi{s_\alpha})}
{\displaystyle\prod_{\beta \in R_S} p_\beta^{d_\beta-1}(\bi{s_\beta})},
\end{equation}
where $R_L$ and $R_S$ denote the sets of large and small regions,
respectively, and $d_\beta$ is the degree of the small region $\beta$
in the junction tree. Notice that no normalization is needed.
The above factorization of the probability leads to the exact cumulant
expansion
\begin{equation}
S = \sum_{\alpha \in R_L} S_\alpha - \sum_{\beta \in R_S} (d_\beta-1)
S_\beta,
\end{equation}
and therefore the CVM with $R = R_L \cup R_S$ is exact.
As a first example, consider a particular subset of the square
lattice, the strip depicted in \Fref{Strip}, with open boundary
conditions in the horizontal direction, and define on it a model with
pairwise interactions (we do not use the factor graph representation
here).
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(7 0)
\move(0 1) \lvec(7 1)
\move(0 2) \lvec(7 2)
\move(0 3) \lvec(7 3)
\move(0 4) \lvec(7 4)
\move(1 0) \lvec(1 4)
\move(2 0) \lvec(2 4)
\move(3 0) \lvec(3 4)
\move(4 0) \lvec(4 4)
\move(5 0) \lvec(5 4)
\move(6 0) \lvec(6 4)
\textref h:R v:C \htext(-.1 4) {1}
\textref h:R v:C \htext(-.1 3) {2}
\textref h:R v:C \htext(-.1 1.5) {$\vdots$}
\textref h:R v:C \htext(-.1 0) {$N$}
\move(2.5 2) \lellip rx:.75 ry:3
\move(3.5 2) \lellip rx:.75 ry:3
\textref h:C v:C \htext(3.5 -1.5) {$L$}
\move(3 -1.5) \avec(0 -1.5) \move(4 -1.5) \avec(7 -1.5)
\move(9 0) \lvec(9 4)
\move(10 0) \lvec(10 4)
\move(9 0) \lvec(10 0)
\move(9 1) \lvec(10 1)
\move(9 2) \lvec(10 2)
\move(9 3) \lvec(10 3)
\move(9 4) \lvec(10 4)
\move(9 2) \lellip rx:.2 ry:2.5
\textref h:C v:B \htext(9 4.75) {$\bi{s}$}
\textref h:C v:B \htext(10 4.75) {$\bi{s^\prime}$}
\move(10 2) \lellip rx:.2 ry:2.5
\textref h:C v:C \htext(9.5 -1) {II}
\move(12 0) \lvec(12 4)
\move(11.9 0) \lvec(12.1 0)
\move(11.9 1) \lvec(12.1 1)
\move(11.9 2) \lvec(12.1 2)
\move(11.9 3) \lvec(12.1 3)
\move(11.9 4) \lvec(12.1 4)
\textref h:C v:C \htext(12 -1) {I}
}
\caption{\label{Strip}A one--dimensional strip and the clusters used
to solve a pairwise model on it}
\end{figure}
According to the junction tree property, the joint probability factors
as follows:
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\alpha \in {\rm II}}
p_\alpha(\bi{s_\alpha})}
{\displaystyle\prod_{\beta \in {\rm I}} p_\beta(\bi{s_\beta})},
\end{equation}
where I and II denote the sets of chains (except boundary ones) and
ladders, respectively, shown in \Fref{Strip}. As a consequence, the
cumulant expansion
\begin{equation}
S = \sum_{\alpha \in {\rm II}} S_\alpha -
\sum_{\beta \in {\rm I}} S_\beta
\end{equation}
of the entropy is also exact, and the cluster variation method with $R
= {\rm II} \cup {\rm I}$ is exact. For strip width $N = 1$ we obtain
the well--known statement that the Bethe--Peierls approximation is
exact for a one--dimensional chain. Rigorous proofs of this statement
have been given by Brascamp \cite{Bra71} and Percus \cite{Per77}. More
generally, Schlijper has shown \cite{Sch84} that the equilibrium
probability of a $d$--dimensional statistical mechanical model with
finite range interactions is completely determined by its restrictions
(marginals) to $(d-1)$--dimensional slices of width at least equal to
the interaction range.
In the infinite length limit $L \to \infty$ translational invariance
is recovered
\begin{equation}
\fl \frac{{\cal F}}{L} = \sum_{\bi{s},\bi{s^\prime}} \left[ p_{\rm
II}(\bi{s},\bi{s^\prime}) H_{\rm II}(\bi{s},\bi{s^\prime}) + p_{\rm
II}(\bi{s},\bi{s^\prime}) \ln p_{\rm II}(\bi{s},\bi{s^\prime})\right]
- \sum_{\bi{s}} p_{\rm I}(\bi{s}) \ln p_{\rm I}(\bi{s})
\end{equation}
and solving for $p_{\rm II}$ we obtain the transfer matrix formalism:
\begin{eqnarray}
\frac{F}{L} = - \ln \max_{p_{\rm I}} \left\{
\sum_{\bi{s},\bi{s^\prime}}
p_{\rm I}^{1/2}(\bi{s}) \exp\left[ -
H_{\rm II}(\bi{s},\bi{s^\prime}) \right]
p_{\rm I}^{1/2}(\bi{s^\prime}) \right\} \\
\sum_{\bi{s}} p_{\rm I}(\bi{s}) = 1
\end{eqnarray}
The natural iteration method (see \sref{VarAlg}) in this case reduces to
the power method for finding the largest eigenvalue of the transfer
matrix.
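As an illustration, a minimal Python sketch of the power method is given below, applied to the transfer matrix of a width--1 strip, i.e.\ the one--dimensional Ising chain; for a strip of width $N$ the matrix would have $2^N$ rows and columns, and the numerical values of $J$ and $h$ are placeholders.
\begin{verbatim}
import numpy as np

def power_method(T, tol=1e-12, max_iter=100000):
    # Largest eigenvalue of a non-negative (here symmetric) transfer matrix.
    v = np.full(T.shape[0], 1.0 / np.sqrt(T.shape[0]))
    lam = 0.0
    for _ in range(max_iter):
        w = T @ v
        lam_new = v @ w                # Rayleigh quotient (v has unit norm)
        w /= np.linalg.norm(w)
        if abs(lam_new - lam) < tol:
            return lam_new, w
        lam, v = lam_new, w
    return lam, v

# Width-1 strip (1D Ising chain): T(s, s') = exp[J s s' + h (s + s')/2]
J, h = 1.0, 0.0
s = np.array([-1.0, 1.0])
T = np.exp(J * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2)
lam, _ = power_method(T)
print(-np.log(lam))   # free energy per site; equals -ln(2 cosh J) for h = 0
\end{verbatim}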
As a second example, consider a tree, like the one depicted in
\Fref{BetheLattice}, and a model with pairwise interactions defined on
it.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(.866 .5)
\move(.866 .5) \lvec(1.732 0)
\move(1.732 0) \lvec(1.732 -.5)
\move(1.732 0) \lvec(2.165 .25)
\move(.866 .5) \lvec(.866 1.5)
\move(.866 1.5) \lvec(1.299 1.75)
\move(.866 1.5) \lvec(.433 1.75)
\move(0 0) \lvec(-.866 .5)
\move(-.866 .5) \lvec(-1.732 0)
\move(-1.732 0) \lvec(-1.732 -.5)
\move(-1.732 0) \lvec(-2.165 .25)
\move(-.866 .5) \lvec(-.866 1.5)
\move(-.866 1.5) \lvec(-1.299 1.75)
\move(-.866 1.5) \lvec(-.433 1.75)
\move(0 0) \lvec(0 -1)
\move(0 -1) \lvec(.866 -1.5)
\move(.866 -1.5) \lvec(.866 -2)
\move(.866 -1.5) \lvec(1.299 -1.25)
\move(0 -1) \lvec(-.866 -1.5)
\move(-.866 -1.5) \lvec(-.866 -2)
\move(-.866 -1.5) \lvec(-1.299 -1.25)
\textref h:C v:B \htext(0 .1) {0}
\textref h:L v:B \htext(.966 .5) {1}
\textref h:R v:B \htext(-.966 .5) {2}
\textref h:L v:B \htext(.1 -1) {$d_0 = 3$}
}
\caption{\label{BetheLattice}A small portion of a tree}
\end{figure}
In this case the probability factors according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\langle i j \rangle} p_{ij}(s_i,s_j)}
{\displaystyle\prod_{i} p_i^{d_i-1}(s_i)},
\end{equation}
where $\langle i j \rangle$ denotes a pair of adjacent nodes. The
cumulant expansion of the entropy is therefore
\begin{equation}
S = \sum_{\langle i j \rangle} S_{ij} - \sum_{i} (d_i - 1) S_i,
\end{equation}
and the pair approximation of the CVM (coinciding with Bethe--Peierls and
junction graph) is exact. Recently this property has been exploited to
study models on finite connectivity random graphs, which strictly
speaking are not tree--like: loops are present, but in the
thermodynamic limit their typical length scales like $\ln N$ \cite{Bollobas}.
As a final example, consider the so--called (square) cactus lattice
(the interior of a Husimi tree), depicted in \Fref{Cactus}.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(-1.75 -1.5) \lvec(-.25 -1.5)
\move(1.75 -1.5) \lvec(.25 -1.5)
\move(-1.75 -.5) \lvec(1.75 -.5)
\move(-1.75 .5) \lvec(1.75 .5)
\move(-1.75 1.5) \lvec(-.25 1.5)
\move(1.75 1.5) \lvec(.25 1.5)
\move(-1.5 -1.75) \lvec(-1.5 -.25)
\move(-1.5 1.75) \lvec(-1.5 .25)
\move(-.5 -1.75) \lvec(-.5 1.75)
\move(.5 -1.75) \lvec(.5 1.75)
\move(1.5 -1.75) \lvec(1.5 -.25)
\move(1.5 1.75) \lvec(1.5 .25)
}
\caption{\label{Cactus}A small portion of a square cactus lattice}
\end{figure}
Here the probability factors according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\opensquare}
p_{\opensquare}(\bi{s_{\opensquare}})}{\displaystyle\prod_i p_i(s_i)},
\end{equation}
the entropy cumulant expansion takes the form
\begin{equation}
S = \sum_{\opensquare} S_{\opensquare} - \sum_{i} S_i,
\end{equation}
and the CVM with $R$ made of square plaquettes and sites is
exact. Again, this coincides with the junction graph method and, if
function nodes are associated to square plaquettes (so that the
corresponding factor graph is tree--like), with Bethe--Peierls.
\subsection{Realizability}
We have seen that when the probability factors in a suitable way, the
CVM can be used to find an exact solution. By analogy, we could ask
whether, as in the mean field approximation, CVM approximations can
yield an estimate of the joint probability distribution as a function
of the cluster distributions, in a factorized form. In the general
case, the answer is negative. One cannot, using a trial factorized
form like
\begin{equation}
\prod_\alpha [ p_\alpha(\bi{s_\alpha}) ]^{a_\alpha}
\label{CVMproduct}
\end{equation}
(which would lead to a free energy like that in Eqs.\
\ref{CVMFree}-\ref{ClusterFree}), obtain a joint probability
distribution which marginalizes down to the cluster probability
distributions used as a starting point. As a consequence, we have no
guarantee that the CVM free energy is an upper bound to the exact free
energy. Moreover, in sufficiently frustrated problems, the cluster
probability distributions cannot even be regarded as marginals of a
joint probability distribution \cite{Sch88}.
It can be easily checked that \Eref{CVMproduct} is not, in the general
case, a probability distribution. It is not normalized and therefore
its marginals do not coincide with the $p_\alpha$'s used to build
it. At best, one can show that
\begin{equation}
\prod_\alpha [ p_\alpha(\bi{s_\alpha}) ]^{a_\alpha} \propto
\exp[-H(\bi{s})],
\label{FactorProp}
\end{equation}
but the normalization constant is unknown. This has been proven in
\cite{WaiJaaWil} at the Bethe--Peierls level, and the proof can be
easily generalized to any CVM approximation.
Let us now focus on a very simple example. Consider three Ising
variables, $s_i = \pm 1$, $i = 1, 2, 3$, with the following node and
pair probabilities:
\begin{eqnarray}
p_i(s_i) = 1/2 \qquad i = 1, 2, 3 \\
p_{ij}(s_i,s_j) = \frac{1 + c s_i
s_j}{4}, \qquad -1 \le c \le 1, \qquad i < j.
\end{eqnarray}
A joint $p(s_1,s_2,s_3)$ marginalizing to the above probabilities
exists for $-1/3 \le c \le 1$, which shows clearly that the constraint
set \Eref{CompConstr} is approximate, and in particular it can be too
loose. For instance, in \cite{PelPre} it has been shown that due to
this problem the Bethe--Peierls approximation for the triangular Ising
antiferromagnet predicts, at low temperature, unphysical results for
the correlations and a negative entropy.
Moreover, the joint probability $p(s_1,s_2,s_3)$ is given by the
CVM--like factorized form
\begin{equation}
\frac{p_{12}(s_1,s_2) p_{13}(s_1,s_3) p_{23}(s_2,s_3)}{p_1(s_1)
p_2(s_2) p_3(s_3)}
\end{equation}
only if $c = 0$, that is if the variables are completely uncorrelated.
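The existence range quoted above can be checked numerically by casting the question as a linear programming feasibility problem: one looks for a non-negative joint distribution with the prescribed pair marginals. The following Python sketch, which relies on \verb|scipy.optimize.linprog|, is meant only as an illustration.
\begin{verbatim}
import numpy as np
from itertools import combinations, product
from scipy.optimize import linprog

def joint_exists(corr):
    # Is there p(s1,s2,s3) >= 0, summing to 1, whose three pair marginals
    # are p_ij(si,sj) = (1 + corr*si*sj)/4 ?  (LP feasibility.)
    configs = list(product([-1, 1], repeat=3))
    A_eq, b_eq = [[1.0] * len(configs)], [1.0]        # normalization
    for i, j in combinations(range(3), 2):
        for vi, vj in product([-1, 1], repeat=2):
            A_eq.append([1.0 if (s[i], s[j]) == (vi, vj) else 0.0
                         for s in configs])
            b_eq.append((1 + corr * vi * vj) / 4)
    res = linprog(np.zeros(len(configs)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(configs))
    return res.success

# Sweeping corr confirms feasibility only for -1/3 <= corr <= 1
print([round(float(c), 2) for c in np.linspace(-1, 1, 21) if joint_exists(c)])
\end{verbatim}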
As a more interesting case, we shall consider in the next subsection
the square lattice Ising model. In this case it has been shown
\cite{Disorder1,Disorder2} that requiring realizability yields an
exactly solvable case.
\subsection{Disorder points}
For a homogeneous (translation--invariant) model defined on a
square lattice, the square approximation of the CVM, that is
the approximation obtained by taking the elementary square plaquettes
as maximal clusters, entails the following approximate entropy
expansion:
\begin{equation}
S \simeq \sum_{\opensquare} S_{\opensquare} - \sum_{\langle i j \rangle}
S_{ij} + \sum_i S_i.
\label{SquareEntropy}
\end{equation}
The corresponding factorization
\begin{equation}
\prod_{\opensquare}
p_{\opensquare}(\bi{s_{\opensquare}}) \prod_{\langle i j \rangle}
p_{ij}^{-1}(s_i,s_j) \prod_i p_i(s_i)
\label{pDisorder}
\end{equation}
for the probability does not, in general, give an approximation to the
exact equilibrium distribution. Indeed, it does not marginalize to the
cluster distributions and is not even normalized.
One could, however, try to impose that the joint probability given by
the above factorization marginalizes to the cluster
distributions. It turns out that it is sufficient to impose
such a condition on the probability distribution of a $3 \times 3$
square, like the one depicted in \Fref{Square3x3}. It is easy to check
that for an Ising model the CVM--like function
\begin{equation}
\fl
\frac{
p_{4}(s_1,s_2,s_5,s_4)
p_{4}(s_2,s_3,s_6,s_5)
p_{4}(s_4,s_5,s_8,s_7)
p_{4}(s_5,s_6,s_9,s_8)
p_{1}(s_5)}
{p_{2}(s_2,s_5) p_{2}(s_5,s_8)
p_{2}(s_4,s_5) p_{2}(s_5,s_6)}
\end{equation}
marginalizes to the square, pair and site distributions ($p_4$, $p_2$
and $p_1$ respectively) only if odd expectation values vanish and
\begin{equation}
\langle s_i s_k \rangle_{\langle \langle i k \rangle \rangle} =
\langle s_i s_j \rangle_{\langle i j \rangle}^2,
\end{equation}
where the l.h.s.\ is the next nearest neighbour correlation, while the
r.h.s.\ is the square of the nearest neighbour correlation.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(4 0)
\move(0 2) \lvec(4 2)
\move(0 4) \lvec(4 4)
\move(0 0) \lvec(0 4)
\move(2 0) \lvec(2 4)
\move(4 0) \lvec(4 4)
\textref h:L v:B
\htext(.1 .1) {$s_1$}
\htext(2.1 .1) {$s_2$}
\htext(4.1 .1) {$s_3$}
\htext(.1 2.1) {$s_4$}
\htext(2.1 2.1) {$s_5$}
\htext(4.1 2.1) {$s_6$}
\htext(.1 4.1) {$s_7$}
\htext(2.1 4.1) {$s_8$}
\htext(4.1 4.1) {$s_9$}
}
\caption{\label{Square3x3}A $3 \times 3$ square on the square lattice}
\end{figure}
Leaving aside the trivial non--interacting case, the above condition
is satisfied by an Ising model with nearest neighbour, next nearest
neighbour and plaquette interactions, described by the Hamiltonian
\begin{equation}
H = - J_1 \sum_{\langle i j \rangle} s_i s_j
- J_2 \sum_{\langle \langle i j \rangle \rangle} s_i s_j
- J_4 \sum_{\opensquare} s_i s_j s_k s_l,
\end{equation}
if the couplings satisfy the {\it disorder} condition (see
\cite{Disorder1} and refs.\ therein)
\begin{equation}
\cosh (2 J_1) = \frac{e^{4J_2+2J_4}+e^{-4J_2+2J_4}+2 e^{-2J_2}}
{2\left(e^{2J_2}+e^{2J_4}\right)}.
\end{equation}
This defines a variety in the parameter space, lying in the disordered
phase of the model, and in particular in the region where nearest
neighbour and next nearest neighbour interactions compete. In this case
the square approximation of the CVM yields the exact solution,
including the exact free energy density
\begin{equation}
f = - \ln \left[ \exp(-J_4)+\exp(J_4 - 2J_2) \right],
\end{equation}
and the nearest neighbour correlation
\begin{equation}
g = \langle s_i s_j \rangle_{\langle i j \rangle} =
\frac{\exp(-4J_2) - \cosh(2 J_1)}{\sinh(2J_1)}.
\end{equation}
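For illustration, the disorder condition and the exact expressions above can be evaluated numerically as in the following Python sketch; the couplings in the example are arbitrary placeholders with competing interactions ($J_2 < 0$), and only the positive root for $J_1$ is taken.
\begin{verbatim}
import numpy as np

def disorder_point(J2, J4):
    # Solve cosh(2 J1) = r.h.s. of the disorder condition for J1 (positive
    # root; requires r.h.s. >= 1), then evaluate the exact free energy
    # density f and the nearest-neighbour correlation g quoted above.
    rhs = (np.exp(4*J2 + 2*J4) + np.exp(-4*J2 + 2*J4) + 2*np.exp(-2*J2)) \
          / (2*(np.exp(2*J2) + np.exp(2*J4)))
    J1 = np.arccosh(rhs) / 2
    f = -np.log(np.exp(-J4) + np.exp(J4 - 2*J2))
    g = (np.exp(-4*J2) - np.cosh(2*J1)) / np.sinh(2*J1)
    return J1, f, g

print(disorder_point(J2=-0.5, J4=0.1))   # placeholder couplings with J2 < 0
\end{verbatim}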
Higher order correlations can be derived from the joint probability
\Eref{pDisorder}, for example the two--body correlation function
$\Gamma(x,y) = \langle s(x_0,y_0) s(x_0+x,y_0+y) \rangle$ (where spin
variables have been identified by their coordinates on the lattice),
which simply reduces to a power of the nearest neighbour correlation:
$\Gamma(x,y) = g^{|x|+|y|}$. For this reason a line of disorder points
is often referred to as a one--dimensional line. Another example is
the plaquette correlation:
\begin{equation}
q = \langle s_i s_j s_k s_l \rangle_{\opensquare} =
\frac{e^{4J_4}\left(1-e^{8J_2}\right) +
4 e^{2J_2}\left(e^{2J_4}-e^{2J_2}\right)}
{e^{4J_4}\left(1-e^{8J_2}\right) +
4 e^{2J_2}\left(e^{2J_4}+e^{2J_2}\right)}.
\end{equation}
Finally, since all the pair correlations are given simply as powers of
the nearest--neighbour correlation we can easily calculate the
momentum space correlation function, or structure factor. We first
write $\Gamma(x,y) =
\exp\left(-\displaystyle\frac{|x|+|y|}{\xi}\right)$, where $\xi =
-(\ln g)^{-1}$. After a Fourier transform one finds $S(p_x,p_y) =
S_1(p_x) S_1(p_y)$, where
\begin{equation}
S_1(p) = \frac{\sinh(1/\xi)}{\cosh(1/\xi) - \cos p}.
\end{equation}
It can be verified that the structure factors calculated by Sanchez
\cite{Sanchez} and (except for a misprint) Cirillo and coworkers
\cite{Cirillo} reduce to the above expression on the disorder line.
\subsection{Wako--Sait\^o--Mu\~noz--Eaton model of protein folding}
There is at least another case in which the probability factors at the
level of square plaquettes, and the CVM yields the exact solution. It
is the Wako--Sait\^o--Mu\~noz--Eaton model of protein folding
\cite{WakSat1,WakSat2,MunEat1,MunEat2,MunEat3,BruPel1,BruPel2,PelJSTAT}. Here
we will not delve into the details of the model, giving only its
Hamiltonian in the form
\begin{equation}
H = \sum_{i=1}^L \sum_{j=i}^L h_{i,j} x_{i,j}, \qquad x_{i,j} =
\prod_{k=i}^j x_k, \qquad x_k = 0, 1.
\end{equation}
It is a one--dimensional model with arbitrary range multivariable
interactions, but the particular form of these interactions makes an
exact solution possible. A crucial step in this solution was the
mapping to a two--dimensional model \cite{BruPel1}, where the
statistical variables are the $x_{i,j}$'s (see \Fref{MunozEaton} for
an illustration). In terms of these variables the Hamiltonian is
local, and the only price one has to pay is to take into account the
constraints
\begin{equation}
x_{i,j} = x_{i+1,j} x_{i,j-1},
\end{equation}
which can be viewed as local interactions.
\begin{figure}
\begin{center}
\psset{unit=.7cm}
\pspicture(-2,-11)(11,2)
\psline(-1,-.5)(11,-.5)
\psline(.5,-11)(.5,1)
\rput(1,0){1}
\rput(2,0){2}
\rput(3,0){3}
\rput(4,0){4}
\rput(5,0){5}
\rput(6,0){6}
\rput(7,0){7}
\rput(8,0){8}
\rput(9,0){9}
\rput(10,0){10}
\rput(5.5,.5){$j$}
\rput(0,-10){10}
\rput(0,-9){9}
\rput(0,-8){8}
\rput(0,-7){7}
\rput(0,-6){6}
\rput(0,-5){5}
\rput(0,-4){4}
\rput(0,-3){3}
\rput(0,-2){2}
\rput(0,-1){1}
\rput(-.5,-5.5){$i$}
\rput(1,-1){$\circ$}
\rput(2,-1){$\circ$}
\rput(3,-1){$\circ$}
\rput(4,-1){$\circ$}
\rput(5,-1){$\circ$}
\rput(6,-1){$\circ$}
\rput(7,-1){$\circ$}
\rput(8,-1){$\circ$}
\rput(9,-1){$\circ$}
\rput(10,-1){$\circ$}
\rput(2,-2){$\bullet$}
\rput(3,-2){$\bullet$}
\rput(4,-2){$\bullet$}
\rput(5,-2){$\bullet$}
\rput(6,-2){$\circ$}
\rput(7,-2){$\circ$}
\rput(8,-2){$\circ$}
\rput(9,-2){$\circ$}
\rput(10,-2){$\circ$}
\rput(3,-3){$\bullet$}
\rput(4,-3){$\bullet$}
\rput(5,-3){$\bullet$}
\rput(6,-3){$\circ$}
\rput(7,-3){$\circ$}
\rput(8,-3){$\circ$}
\rput(9,-3){$\circ$}
\rput(10,-3){$\circ$}
\rput(4,-4){$\bullet$}
\rput(5,-4){$\bullet$}
\rput(6,-4){$\circ$}
\rput(7,-4){$\circ$}
\rput(8,-4){$\circ$}
\rput(9,-4){$\circ$}
\rput(10,-4){$\circ$}
\rput(5,-5){$\bullet$}
\rput(6,-5){$\circ$}
\rput(7,-5){$\circ$}
\rput(8,-5){$\circ$}
\rput(9,-5){$\circ$}
\rput(10,-5){$\circ$}
\rput(6,-6){$\circ$}
\rput(7,-6){$\circ$}
\rput(8,-6){$\circ$}
\rput(9,-6){$\circ$}
\rput(10,-6){$\circ$}
\rput(7,-7){$\circ$}
\rput(8,-7){$\circ$}
\rput(9,-7){$\circ$}
\rput(10,-7){$\circ$}
\rput(8,-8){$\bullet$}
\rput(9,-8){$\bullet$}
\rput(10,-8){$\bullet$}
\rput(9,-9){$\bullet$}
\rput(10,-9){$\bullet$}
\rput(10,-10){$\bullet$}
\endpspicture
\end{center}
\caption{\label{MunozEaton}A typical configuration of the
Mu\~noz--Eaton model. An empty (resp.\ filled) circle at row $i$ and
column $j$ represents the variable $x_{i,j}$ taking value 0 (resp.\
1).}
\end{figure}
In order to derive the factorization of the probability
\cite{PelJSTAT}, we first need to exploit the locality of
interactions, which allows us to write
\begin{equation}
p(\{x_{i,j}\}) = \frac{p^{(1,2)} p^{(2,3)} \cdots p^{(L-1,L)}}{p^{(2)}
\cdots p^{(L-1)} },
\label{ME-TMfactoring}
\end{equation}
where $p^{(j)}$ denotes the probability of the $j$th row in
\Fref{MunozEaton} and $p^{(j,j+1)}$ denotes the joint probability of
rows $j$ and $j+1$.
As a second step, consider the effect of the constraints. This is best
understood looking at the following example:
\begin{eqnarray}
p^{(j)}(0, \cdots 0_i, 1_{i+1}, \cdots 1) &=& p^{(j)}_{i,i+1}(0,1) =
\nonumber \\
&=& \frac{p^{(j)}_{1,2}(0,0) \cdots p^{(j)}_{i,i+1}(0,1) \cdots
p^{(j)}_{j-1,j}(1,1)}{p^{(j)}_2(0) \cdots p^{(j)}_i(0)
p^{(j)}_{i+1}(1) \cdots p^{(j)}_{j-1}(1)}.
\end{eqnarray}
The CVM--like factorization is possible since every factor in the
numerator, except $p^{(j)}_{i,i+1}(0,1)$, cancels with a factor in the
denominator. A similar result can be obtained for the joint
probability of two adjacent rows, and substituting into
\eref{ME-TMfactoring} one eventually gets
\begin{equation}
p(\{x_{i,j}\}) = \prod_{\alpha \in R} p_\alpha(x_\alpha)^{a_\alpha},
\end{equation}
where $R = \{$square plaquettes, corners (on the diagonal), and their
subclusters$\}$ and $a_\alpha$ is the CVM M\"obius number for cluster
$\alpha$.
\section{Cluster Variation Method as an approximation}
\label{Approx}
In most applications the CVM does not yield exact results, and
hence it is worth investigating its properties as an
approximation.
An important issue is the choice of maximal clusters, and in
particular the existence of sequence of approximations (that is,
sequence of choices of maximal clusters) with some property of
convergence to the exact results. This has long been studied in the
literature on applications to translation--invariant lattice
systems and will be the subject of subsection \ref{Asymptotic}. In
particular, rigorous results concerning sequences of approximations
which converge to the exact solution have been derived by Schlijper
\cite{Sch83,Sch84,Sch85}, providing a sound theoretical basis for the
earlier investigations by Kikuchi and Brush \cite{KikBru}.
Another important issue is related to the practical determination of
the minima of the CVM variational free energy. In the variational
formulation of statistical mechanics the free energy is convex, but
this property here is lost due to the presence of negative $a_\alpha$
coefficients in the entropy expansion. More precisely, it has been
shown \cite{PakAna} that the CVM variational free energy is convex if
\begin{equation}
\forall S \subseteq R \qquad \sum_{\alpha \in R_S} a_\alpha \ge 0
\qquad R_S = \{ \alpha \in R | \exists \beta \subseteq \alpha,
\beta \in S \}.
\end{equation}
Similar conditions have been obtained by McEliece and Yildirim
\cite{McEYil} and Heskes, Albers and Kappen \cite{HAK}.
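For very small region sets this sufficient condition can be checked by brute force. The following Python sketch is meant only as an illustration: \verb|a| is a dictionary mapping regions, represented as frozensets of variable indices, to their M\"obius numbers, for instance as computed in the sketch of \Sref{Fundamentals}.
\begin{verbatim}
from itertools import combinations

def satisfies_convexity_condition(a):
    # Brute-force check of the sufficient condition quoted above: for every
    # non-empty subset S of R, the Mobius numbers of the regions in R_S
    # (regions containing at least one element of S) must sum to >= 0.
    # Exponential in |R|, so intended only for very small region sets.
    R = list(a)
    for r in range(1, len(R) + 1):
        for S in combinations(R, r):
            total = sum(a[alpha] for alpha in R
                        if any(beta <= alpha for beta in S))
            if total < 0:
                return False
    return True
\end{verbatim}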
If this is not the case multiple minima can appear, and their
determination can be nontrivial. Several algorithms have been
developed to deal with this problem, falling mainly in two classes:
message--passing algorithms, which will be discussed in subsection
\ref{MessPassAlg}, and variational, provably convergent algorithms,
which will be discussed in subsection \ref{VarAlg}.
\subsection{Asymptotic behaviour}
\label{Asymptotic}
The first studies on the asymptotic behaviour of sequences of CVM
approximations are due to Schlijper \cite{Sch83,Sch84,Sch85}. He
showed that it is possible to build sequences of CVM approximations
(that is, sequences of sets of maximal clusters) such that the
corresponding sequence of free energies converges to the exact one, for
a translation--invariant model in the thermodynamic limit. The most
interesting result, related to the transfer matrix idea, is that for a
$d$--dimensional model the maximal clusters considered have to
increase in $d-1$ dimensions only.
These results provided a theoretical justification for the series of
approximations developed by Kikuchi and Brush \cite{KikBru}, who
introduced the $B_{2L}$ series of approximations for
translation--invariant models on the two--dimensional square lattice,
based on zig--zag maximal clusters, as shown in \Fref{KikBruFig}, and
applied it to the zero field Ising model. Based solely on the results
from the $B_2$ (which is equivalent to the plaquette approximation)
and $B_4$ approximations, Kikuchi and Brush postulated a linear
behaviour for the estimate of the critical temperature as a function
of $(2L+1)^{-1}$.
\begin{figure}[h]
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(-2 -2)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\textref h:C v:B
\htext(-2 -1.9) {$1$}
\htext(0 -1.9) {$3$}
\htext(3 -1.9) {$\ldots$}
\htext(6 -1.9) {$2L+1$}
\textref h:C v:T
\htext(-1 -3.1) {$2$}
\htext(2 -3.1) {$\ldots$}
\htext(5 -3.1) {$2L$}
}
\caption{Maximal cluster for the $B_{2L}$ approximation}
\label{KikBruFig}
\end{figure}
In \Fref{B2L-Tc} we have reported the inverse critical temperature as
a function of $(2L+1)^{-1}$ for $L = 1$ to 6. The extrapolated inverse
critical temperature is $\beta_c \simeq 0.4397$, to be compared with
the exactly known $\beta_c = \frac{1}{2} \ln(1 + \sqrt{2}) \simeq
0.4407$.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{betacvsL.eps}
\end{center}
\caption{\label{B2L-Tc}Inverse critical temperature of the $B_{2L}$
approximation series}
\end{figure}
It is not our purpose here to make a complete finite size scaling
analysis, in the spirit of the coherent anomaly method (see below), of
the CVM approximation series. We limit ourselves to show the finite
size behaviour of the critical magnetization. More precisely, we have
computed the magnetization of the zero field Ising model on the square
lattice at the exactly known inverse critical temperature, again for
$L = 1$ to 6. \Fref{FracBetaNu} shows that the critical magnetization
vanishes as $(2L+1)^{\beta/\nu}$, and the fit gives a very good
estimate for the exponent, consistent with the exact result $\beta/\nu
= 1/8$.
\begin{figure}
\begin{center}
\includegraphics*[trim = 0 0 0 50, scale=.5]{B2L-FSS.eps}
\end{center}
\caption{\label{FracBetaNu}Critical magnetization of the $B_{2L}$
approximation series}
\end{figure}
As a further illustration of the asymptotic properties of the $B_{2L}$
series we report in \Fref{TrAFEntropy} the zero temperature entropy
(actually the difference between the extrapolated entropy density and
the $B_{2L}$ estimate) of the Ising triangular antiferromagnet as a
function of $1/L$ \cite{PelPre}. It is clearly seen that
asymptotically $s_L = s_0 - a L^{-\psi}$, and the fit yields the
numerical results $s_0 \approx 0.323126$ (the exact value being
$s \approx 0.323066$) and $\psi \approx 1.7512$ (remarkably close to
$7/4$).
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{TrAFEntropy.eps}
\end{center}
\caption{\label{TrAFEntropy}Zero temperature entropy of the triangular
Ising antiferromagnet in the $B_{2L}$ approximation series}
\end{figure}
An attempt to extract non--classical critical behaviour from
high--precision low and high temperature CVM results was made by the
present author \cite{CVPAM1,CVPAM2,CVPAM3,CVPAM4}, using Pad\'e and
Adler approximants. This work has led to the development of an 18 ($3
\times 3 \times 2$) site cluster approximation for the simple cubic
lattice Ising model \cite{CVPAM4}, which is probably the largest
cluster ever considered. The results obtained for the Ising model are
still compatible with the most recent estimates \cite{PelVic},
although of lower precision.
The possibility of extracting non--classical critical behaviour from
CVM results by means of the coherent anomaly method, which applies
finite size scaling ideas to series of mean--field--like
approximations, has also been considered. A review of these results
can be found in \cite{CAM}.
\subsection{Message--passing algorithms}
\label{MessPassAlg}
In order to describe this class of algorithms it is useful to start
with the Bethe--Peierls approximation (pair approximation of the CVM)
free energy for the Ising model \Eref{Ising}:
\begin{eqnarray}
\fl {\cal F} = - \sum_i h_i \sum_{s_i} s_i p_i(s_i)
- \sum_{\langle i j \rangle} J_{ij}
\sum_{s_i,s_j} s_i s_j p_{ij}(s_i,s_j) + \nonumber \\
\lo + \sum_{\langle i j \rangle} \sum_{s_i,s_j} p_{ij}(s_i,s_j) \ln
p_{ij}(s_i,s_j)
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_{\langle i j \rangle} \lambda_{ij} \left( \sum_{s_i,s_j}
p_{ij}(s_i,s_j) - 1 \right)
+ \nonumber \\
\lo + \sum_{\langle i j \rangle} \left[ \nu_{i,j} \left(
\sum_{s_i} s_i p_i(s_i) - \sum_{s_i,s_j} s_i p_{ij}(s_i,s_j) \right) +
\right. \nonumber \\
\lo + \left. \nu_{j,i} \left(
\sum_{s_j} s_j p_j(s_j) - \sum_{s_i,s_j} s_j p_{ij}(s_i,s_j) \right)
\right].
\label{BetheIsing}
\end{eqnarray}
One can easily recognize the energy terms, the pair entropy, the site
entropy (with a M\"obius number $-(d_i-1)$, where $d_i$ is the degree
of node $i$), and the Lagrange terms corresponding to the
normalization and pair--site compatibility constraints. Observe that,
due to the presence of normalization constraints, it is enough to impose
the consistency between the spin expectation values given by the site
and pair probabilities.
A physical way of deriving message--passing algorithms for the
determination of the stationary points of the above
free energy is through the introduction of the effective field
representation. Consider the interaction $J_{ik}$ and assume that,
whenever this is not taken into account exactly, its effect on
variable $s_i$ can be replaced by an effective field $h_{i,k}$. This
can be made rigorous by observing that the stationarity conditions
\begin{eqnarray}
\frac{\partial {\cal F}}{\partial p_i(s_i)} = 0 \nonumber \\
\frac{\partial {\cal F}}{\partial p_{ij}(s_i,s_j)} = 0
\label{Stationarity}
\end{eqnarray}
can be solved by writing the probabilities as
\begin{eqnarray}
\fl p_i(s_i) &=& \exp\left[ F_i + \left( h_i +
\sum_{k \, {\rm NN} \, i} h_{i,k} \right) s_i \right]
\\
\fl p_{ij}(s_i,s_j) &=& \exp\left[ F_{ij} +
\left( h_i + \sum_{k \, {\rm NN} \, i}^{k \ne j} h_{i,k} \right) s_i
+ \left( h_j + \sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k} \right) s_j +
J_{ij} s_i s_j \right],
\label{p-vs-heff}
\end{eqnarray}
where the effective fields, and the site and pair free energies $F_i$
and $F_{ij}$, are related to the Lagrange multipliers through
\begin{eqnarray}
\lambda_i &=& (d_i - 1)(1 + F_i) \nonumber \\
\lambda_{ij} &=& - 1 - F_{ij} \nonumber \\
\nu_{i,j} &=& h_i + \sum_{k \, {\rm NN} \, i}^{k \ne j} h_{i,k}.
\end{eqnarray}
$F_i$ and $F_{ij}$ are determined by the normalization, but first of
all the effective fields must be computed by imposing the
corresponding compatibility constraints, which can be cast into the
form
\begin{equation}
h_{i,j} = {\rm tanh}^{-1} \left [
{\rm tanh}\left(h_j +
\sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k}\right)
{\rm tanh} J_{ij} \right].
\label{heff-iter}
\end{equation}
This is a set of coupled nonlinear equations which is often solved by
iteration, that is an initial guess is made for the $h_{i,j}$'s,
plugged into the r.h.s.\ of \Eref{heff-iter} which returns a new
estimate, and the procedure is then repeated until a fixed point is
(hopefully) reached. The values of the effective fields at the fixed
point can then be used to compute the probabilities according to
\Eref{p-vs-heff}.
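A minimal Python sketch of this fixed--point iteration for a generic Ising model is given below; it is meant only as an illustration, and the damping parameter anticipates the trick discussed later in this subsection. Site magnetizations then follow from \Eref{p-vs-heff} as $m_i = \tanh(h_i + \sum_k h_{i,k})$.
\begin{verbatim}
import math

def iterate_effective_fields(J, h, sweeps=1000, tol=1e-10, damping=0.0):
    # J: dict {(i, j): J_ij} over undirected edges; h: dict {i: h_i}.
    # Returns h_eff[(i, j)] = h_{i,j}, the effective field acting on i that
    # replaces the interaction with j, iterating Eq. (heff-iter).
    neighbours = {i: set() for i in h}
    coupling = {}
    for (i, j), Jij in J.items():
        neighbours[i].add(j)
        neighbours[j].add(i)
        coupling[(i, j)] = coupling[(j, i)] = Jij
    h_eff = {(i, j): 0.0 for i in h for j in neighbours[i]}
    for _ in range(sweeps):
        delta = 0.0
        for (i, j) in h_eff:
            cavity = h[j] + sum(h_eff[(j, k)]
                                for k in neighbours[j] if k != i)
            new = math.atanh(math.tanh(cavity) * math.tanh(coupling[(i, j)]))
            new = (1 - damping) * new + damping * h_eff[(i, j)]
            delta = max(delta, abs(new - h_eff[(i, j)]))
            h_eff[(i, j)] = new
        if delta < tol:
            break
    return h_eff
\end{verbatim}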
The above equations, and their generalizations at the CVM level, have
been intensively used in the 1980s for studying the average behaviour
of models with quenched random interactions, like Ising spin glass
models. This work was started by a paper by Morita \cite{Mor79}, where
an integral equation for the probability distribution of the effective
field, given the probability distributions of the interactions and
fields, was derived. In the general case this integral equation takes
the form
\begin{eqnarray}
\fl p_{i,j}(h_{i,j}) = \int \delta \left( h_{i,j} -
{\rm tanh}^{-1} \left [
{\rm tanh}\left(h_j +
\sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k}\right)
{\rm tanh} J_{ij} \right] \right) \times \nonumber \\
\lo \times P_j(h_j) dh_j P_{ij}(J_{ij}) dJ_{ij}
\prod_{k \, {\rm NN} \, j}^{k \ne i} p_{j,k} (h_{j,k}) dh_{j,k},
\label{IntegralEquation}
\end{eqnarray}
with simplifications occurring if the probability distributions can be
assumed to be site--independent, which is the most studied
case. Typical calculations concerned: the determination of elements of
the phase diagrams of Ising spin glass models, through the calculation
of the instability loci of the paramagnetic solution; results in the
zero temperature limit, where solutions with a discrete support are
found; iterative numerical solutions of the integral equation. A
review of this line of research until 1986 can be found in
\cite{Kat86}. It is important to notice that the results obtained by
this approach are equivalent to those by the replica method, at the
replica symmetric level.
The effective field approach is reminiscent of the message--passing
procedure at the heart of the belief propagation (BP) algorithm, and
indeed the messages
appearing in this algorithm are related, in the Ising case, to the
effective fields by $m_{\langle i j \rangle \to i}(s_i) =
\exp(h_{i,j} s_i)$, where $m_{\langle i j \rangle \to i}(s_i)$ denotes
a message going from the NN pair $\langle i j \rangle$ to node $i$.
In order to derive the BP algorithm consider the Bethe--Peierls
approximation for a model with variable nodes $i$ and factor nodes
$a$. The variables $s_i$ need not be limited to two states
and the Hamiltonian is written in the general form \Eref{HsumHa}.
The CVM free energy, with the normalization and compatibility
constraints, can then be written as
\begin{eqnarray}
\fl {\cal F} = - \sum_a \sum_{\bi{s_a}} H_a(\bi{s_a}) p_a(\bi{s_a})
+ \nonumber \\ \lo
+ \sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) \ln p_a(\bi{s_a})
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) + \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_a \lambda_a \left(
\sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) - 1 \right)
+ \nonumber \\
\lo + \sum_a \sum_{i \in a}
\sum_{s_i} \mu_{a,i}(s_i) \left( p_i(s_i) - \sum_{\bi{s_{a \setminus i}}}
p_a(\bi{s_a}) \right),
\label{BetheFree}
\end{eqnarray}
where $\bi{s_{a \setminus i}}$ denotes the set of variables entering
factor node $a$, except $i$.
The stationarity conditions
\begin{eqnarray}
\frac{\partial {\cal F}}{\partial p_i(s_i)} = 0 \nonumber \\
\frac{\partial {\cal F}}{\partial p_a(\bi{s_a})} = 0
\end{eqnarray}
can then be solved, in
analogy with \Eref{p-vs-heff}, by
\begin{eqnarray}
p_i(s_i) &=& \frac{1}{Z_i} \prod_{i \in a}
m_{a \to i}(s_i) \nonumber \\
p_a(\bi{s_a}) &=& \frac{1}{Z_a} \psi_a(\bi{s_a})
\prod_{k \in a} \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\label{p-vs-mess}
\end{eqnarray}
In the particular case of an Ising model with pairwise interactions,
the previously mentioned relationship between messages and effective
fields is evident from the above equation.
Now $Z_i$ and $Z_a$ take care of normalization, and the messages
$m_{a \to i}(s_i)$ are related to the Lagrange multipliers by
\begin{equation}
\mu_{a,k}(s_k) = \sum_{k \in b}^{b \ne a} \ln m_{b \to k}(s_k).
\end{equation}
Notice that the messages can be regarded as exponentials of a new set
of Lagrange multipliers, with the constraints rewritten as in the
following free energy
\begin{eqnarray}
\fl {\cal F} = - \sum_a \sum_{\bi{s_a}} H_a(\bi{s_a}) p_a(\bi{s_a})
+ \nonumber \\ \lo
+ \sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) \ln p_a(\bi{s_a})
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) + \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_a \lambda_a \left(
\sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) - 1 \right)
+ \nonumber \\
\lo + \sum_a \sum_{i \in a}
\sum_{s_i} \ln m_{a \to i}(s_i) \left( (d_i - 1) p_i(s_i) -
\sum_{i \in b}^{b \ne a} \sum_{\bi{s_{b \setminus i}}}
p_b(\bi{s_b}) \right).
\label{BetheFreeRot}
\end{eqnarray}
Again, imposing compatibility between variable nodes and factor nodes,
one gets a set of coupled equations for the messages which, leaving
apart normalization, read
\begin{equation}
m_{a \to i}(s_i) \propto \sum_{\bi{s_{a \setminus i}}} \psi_a(\bi{s_a})
\prod_{k \in a}^{k \ne i} \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\label{BP-mess-upd}
\end{equation}
The above equations, and their iterative solution, are the core of the
BP algorithm. Their structure also justifies the name ``Sum-Product''
\cite{Kschischang}, which is often given to them in the literature on
probabilistic graphical models, and the corresponding term
``Max-Product'' for their zero temperature limit.
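As a purely illustrative aside, the following minimal Python sketch
implements the sum--product updates \Eref{BP-mess-upd} and the
marginals \Eref{p-vs-mess} for a small, hypothetical factor graph (a
chain of three Ising variables coupled by two pairwise factors); the
parameter values, variable names and update schedule are arbitrary
choices made only for this example and are not part of the original
derivation.
\begin{verbatim}
import numpy as np

# Hypothetical factor graph: three Ising variables, two pairwise factors.
q = 2                                          # states per variable
s = np.array([+1.0, -1.0])
betaJ = 0.5
psi = np.exp(betaJ * np.outer(s, s))           # psi_a(s_i, s_j) = exp(beta*J*s_i*s_j)
factors = {0: ((0, 1), psi), 1: ((1, 2), psi)}
nbr = {i: [a for a, (vs, _) in factors.items() if i in vs] for i in range(3)}

# factor-to-variable messages m_{a->i}(s_i), initialised uniformly
m = {(a, i): np.ones(q) / q for a, (vs, _) in factors.items() for i in vs}

for sweep in range(50):                        # parallel sum-product updates
    new = {}
    for a, (vs, psi_a) in factors.items():
        for i in vs:
            others = [k for k in vs if k != i]
            msg = np.zeros(q)
            for si in range(q):
                tot = 0.0
                for rest in np.ndindex(*(q,) * len(others)):
                    state = dict(zip(others, rest))
                    state[i] = si
                    w = psi_a[tuple(state[k] for k in vs)]
                    for k in others:           # incoming messages m_{b->k}, b != a
                        for b in nbr[k]:
                            if b != a:
                                w *= m[(b, k)][state[k]]
                    tot += w
                msg[si] = tot
            new[(a, i)] = msg / msg.sum()
    m = new

for i in range(3):                             # single-site marginals p_i(s_i)
    p = np.ones(q)
    for a in nbr[i]:
        p *= m[(a, i)]
    print("p_%d =" % i, p / p.sum())
\end{verbatim}
On a tree--like graph such as this one the iteration converges to the
exact marginals; the same loop structure applies to graphs with loops,
where convergence is no longer guaranteed, as discussed below.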
There are several issues which must be considered when discussing the
properties of an iterative algorithm based on \Eref{BP-mess-upd}. First
of all, one could ask whether messages have to be updated sequentially
or in parallel. This degree of freedom does not affect the fixed
points of the algorithm, but it does affect the dynamics. This issue has
been considered in some depth by Kfir and Kanter \cite{KfirKanter} in
the context of the decoding of error--correcting codes. In that case
they showed that sequential updates converge twice as fast as
parallel updates.
Convergence is however not guaranteed if the underlying graph is not
tree--like, that is if the pair approximation of the CVM is not
exact. This issue has been investigated theoretically by Tatikonda and
Jordan \cite{TatiJor}, Mooij and Kappen \cite{MooKap}, Ihler et al
\cite{Ihler}, who derived sufficient conditions for convergence, and
by Heskes \cite{Heskes2004}, who derived sufficient conditions for the
uniqueness of the fixed point. In practice it is typically observed
that the BP algorithm converges if the frustration due to competitive
interactions, like those characteristic of spin--glass or constraint
satisfaction models, is not too large. In some cases, the trick of
damping, or inertia, can help extend the convergence domain. The
trick consists in taking the updated message to be a weighted
(possibly geometric) average of the old message and the new one
given by \Eref{BP-mess-upd}. The convergence domain of the BP
algorithm has been determined for several problems, like
satisfiability \cite{SPSAT}, graph colouring \cite{SPCOL}, error
correcting codes \cite{KabSaaLDPCC} and spin glasses
\cite{SG-BP-conv}. Within its convergence domain, the BP algorithm is
indeed very fast, and this is its real strength. See the next
subsection for some performance tests and a comparison with provably
convergent algorithms.
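In code, the damping trick amounts to a one--line modification of the
update; the following Python sketch is only meant to show this, and the
function name, the value of the damping parameter and the choice of
arithmetic versus geometric averaging are arbitrary.
\begin{verbatim}
import numpy as np

def damp(m_old, m_bp, alpha=0.5, geometric=False):
    """Mix the freshly computed BP message m_bp with the previous
    message m_old (both 1-d arrays over the variable states)."""
    if geometric:
        m = m_bp**alpha * m_old**(1.0 - alpha)    # geometric average
    else:
        m = alpha * m_bp + (1.0 - alpha) * m_old  # arithmetic average
    return m / m.sum()                            # re-normalise

print(damp(np.array([0.5, 0.5]), np.array([0.9, 0.1]), alpha=0.5))
\end{verbatim}
Choosing the damping parameter closer to zero corresponds to stronger
inertia and usually to slower but more robust convergence.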
Once a fixed point has been obtained it is worth asking whether it
corresponds to a minimum of the free energy or not. This question has been
partially answered by Heskes \cite{Heskes}, who has shown that stable
fixed points of the belief propagation algorithm are minima of the CVM
pair approximation free energy, but the converse is not necessarily
true. Indeed, examples can be found of minima of the free energy
which correspond to unstable fixed points of the belief propagation
algorithm.
An important advancement in this topic is the {\em generalized belief
propagation} (GBP) algorithm by Yedidia and coworkers
\cite{Yed01}. The fixed points of the GBP algorithm for a certain
choice of clusters correspond to stationary points of the CVM free
energy at the approximation level defined by the same choice of
clusters or, more generally, of a region graph free energy. Actually,
for a given choice of clusters, different GBP algorithms can be
devised. Here only the so--called {\em parent to child} GBP algorithm
\cite{Yed04} will be considered. Other choices are described in
\cite{Yed04}.
In order to better understand this algorithm, notice a few
characteristics of the belief propagation algorithm. First of all,
looking at the probabilities \Eref{p-vs-mess} one can say that a
variable node receives messages from all the factor nodes it belongs
to, while a factor node $a$ receives messages from all the other
factor nodes to which its variable nodes $i \in a$ belong. In
addition, the constraint corresponding to the message $m_{a \to
i}(s_i)$ (see \Eref{BetheFreeRot}) can be written as
\begin{equation}
\sum_{\bi{s_{a \setminus i}}} p_a(\bi{s_a}) =
\sum_{i \in b} \sum_{\bi{s_{b \setminus i}}} p_b(\bi{s_b})
- (d_i - 1) p_i(s_i).
\end{equation}
The parent to child GBP algorithm generalizes these characteristics in
a rather straightforward way. First of all, messages $m_{\alpha \to
\beta}(\bi{s_\beta})$ ($\beta \subset \alpha$) are introduced from
regions (parent regions) to subregions (child regions). Then, the
probability of a region takes into account messages coming from outer
regions to itself and its subregions. Finally, exploiting the property
\Eref{MobiusNumbers} of the M\"obius numbers, the constraint
corresponding to $m_{\alpha \to \beta}(\bi{s_\beta})$ is written in the
form
\begin{equation}
\sum_{\alpha \subseteq \gamma \in R} a_\gamma \sum_{\bi{s_{\gamma \setminus
\beta}}} p_\gamma(\bi{s_\gamma}) = \sum_{\beta \subseteq \gamma \in R}
a_\gamma \sum_{\bi{s_{\gamma \setminus \beta}}}
p_\gamma(\bi{s_\gamma}).
\end{equation}
It can be shown \cite{Yed04} that this new set of constraints is
equivalent to the original one.
To make this more rigorous, consider the free energy given by
Equations (\ref{CVMFree}) and (\ref{ClusterFree}), with the above
compatibility constraints (with Lagrange multipliers $\ln m_{\alpha
\to \beta}(\bi{s_\beta})$) and the usual normalization constraints
(with multipliers $\lambda_\alpha$).
One obtains
\begin{eqnarray}
\fl {\cal F} = \sum_{\gamma \in R} a_\gamma \sum_{\bi{s_\gamma}} \left[
p_\gamma(\bi{s_\gamma}) H_\gamma(\bi{s_\gamma}) + p_\gamma(\bi{s_\gamma}) \ln
p_\gamma(\bi{s_\gamma}) \right] + \sum_{\gamma \in R} \lambda_\gamma
\left[ \sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma}) - 1 \right] +
\nonumber \\
\fl + \sum_{\beta \subset \alpha \in R} \sum_{\bi{s_\beta}}
\ln m_{\alpha \to \beta}(\bi{s_\beta}) \left[
\sum_{\alpha \subseteq \gamma \in R} a_\gamma \sum_{\bi{s_{\gamma \setminus
\beta}}} p_\gamma(\bi{s_\gamma}) - \sum_{\beta \subseteq \gamma \in R}
a_\gamma \sum_{\bi{s_{\gamma \setminus \beta}}}
p_\gamma(\bi{s_\gamma}) \right],
\end{eqnarray}
where it is not necessary to include all the possible $\alpha \to \beta$
compatibility constraints: it is enough to include those for which
$a_\alpha \ne 0$, $a_\beta \ne 0$ and $\beta$ is a direct subregion of
$\alpha$, that is, there is no region $\gamma$ with $a_\gamma \ne 0$
such that $\beta \subset \gamma \subset \alpha$. Notice also that the
Lagrange term corresponding to the $\alpha \to \beta$ constraint can
be written as
\begin{equation}
- \ln m_{\alpha \to \beta}(\bi{s_\beta}) \sum_{\beta \subseteq \gamma
\in R}^{\alpha \nsubseteq \gamma} a_\gamma \sum_{\bi{s_{\gamma
\setminus \beta}}} p_\gamma(\bi{s_\gamma}).
\end{equation}
The stationarity conditions
\begin{equation}
\frac{\partial {\cal F}}{\partial p_\gamma(\bi{s_\gamma})} = 0
\end{equation}
can then be solved, leaving aside normalization, by
\begin{equation}
p_\gamma(\bi{s_\gamma}) \propto \exp\left[ - H_\gamma(\bi{s_\gamma})
\right] \prod_{\beta \subseteq \gamma} \prod_{\beta \subset \alpha
\in R}^{\alpha \nsubseteq \gamma} m_{\alpha \to
\beta}(\bi{s_\beta}),
\label{GBP-p-vs-mess}
\end{equation}
where $\bi{s_\beta}$ denotes the restriction of $\bi{s_\gamma}$ to
subregion $\beta$.
Finally, message update rules can again be derived from the compatibility
constraints, though some care is needed, since in the general case
these constraints cannot be immediately solved with respect to the
(updated) messages, as occurs in the derivation of
\Eref{BP-mess-upd}. Here one obtains a coupled set of equations in the
updated messages, which can be solved starting from the constraints
involving the smallest clusters.
An example can be helpful here. Consider a model defined on a regular
square lattice, with periodic boundary conditions, and the CVM square
approximation, that is the approximation obtained by taking the
elementary square plaquettes as maximal clusters. The entropy
expansion contains only terms for square plaquettes (with M\"obius
numbers 1), NN pairs (M\"obius numbers -1) and single nodes (M\"obius
numbers 1), as in \Eref{SquareEntropy}. A minimal set of compatibility
constraints includes node--pair and pair--square constraints, and one
has therefore to deal with square--to--pair and pair--to--node
messages, which will be denoted by $m_{ij,kl}(s_i,s_j)$ and
$m_{i,j}(s_i)$ respectively. With reference to the portion of the
lattice depicted in \Fref{SquareLatticePortion} the probabilities,
according to \Eref{GBP-p-vs-mess}, can be written as
\begin{eqnarray}
\fl p_i(s_i) \propto \exp[-H_i(s_i)] \, m_{i,a}(s_i) \, m_{i,j}(s_i)
\, m_{i,l}(s_i) \, m_{i,h}(s_i), \nonumber \\
\fl p_{ij}(s_i,s_j) \propto \exp[-H_{ij}(s_i,s_j)] \, m_{i,a}(s_i) \,
m_{i,l}(s_i) \, m_{i,h}(s_i) \times \nonumber \\
\lo \times m_{j,b}(s_j) \, m_{j,c}(s_j) \, m_{j,k}(s_j) \,
m_{ij,ab}(s_i,s_j) \,
m_{ij,lk}(s_i,s_j), \nonumber \\
\fl p_{ijkl}(s_i,s_j,s_k,s_l) \propto \exp[-H_{ijkl}(s_i,s_j,s_k,s_l)]
\, m_{i,a}(s_i) \, m_{i,h}(s_i) \, m_{j,b}(s_j) \, m_{j,c}(s_j) \times
\nonumber \\
\lo \times m_{k,d}(s_k) \, m_{k,e}(s_k) \, m_{l,f}(s_l) \,
m_{l,g}(s_l) \times \nonumber \\
\lo \times m_{ij,ab}(s_i,s_j) \,
m_{jk,cd}(s_j,s_k) \, m_{kl,ef}(s_k,s_l) \, m_{lj,gh}(s_l,s_j).
\end{eqnarray}
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 2) \lvec(6 2)
\move(0 4) \lvec(6 4)
\move(2 0) \lvec(2 6)
\move(4 0) \lvec(4 6)
\textref h:L v:B \htext(2.1 2.1) {$i$}
\textref h:R v:B \htext(3.9 2.1) {$j$}
\textref h:L v:T \htext(2.1 3.9) {$l$}
\textref h:R v:T \htext(3.9 3.9) {$k$}
\textref h:C v:T \htext(2 -0.1) {$a$}
\textref h:C v:T \htext(4 -0.1) {$b$}
\textref h:L v:C \htext(6.1 2) {$c$}
\textref h:L v:C \htext(6.1 4) {$d$}
\textref h:C v:B \htext(4 6.1) {$e$}
\textref h:C v:B \htext(2 6.1) {$f$}
\textref h:R v:C \htext(-0.1 4) {$g$}
\textref h:R v:C \htext(-0.1 2) {$h$}
}
\caption{\label{SquareLatticePortion}A portion of the square lattice}
\end{figure}
Imposing node--pair and pair--square constraints one gets equations
like
\begin{eqnarray}
\fl \exp[-H_i(s_i)] \, m_{i,j}(s_i)
\propto \sum_{s_j} \exp[-H_{ij}(s_i,s_j)] \times \nonumber \\
\lo \times m_{j,b}(s_j) \, m_{j,c}(s_j) \,
m_{j,k}(s_j) \, m_{ij,ab}(s_i,s_j) \, m_{ij,lk}(s_i,s_j), \nonumber \\
\fl \exp[-H_{ij}(s_i,s_j)] \, m_{i,f}(s_i) \, m_{j,k}(s_j) \,
m_{ij,lk}(s_i,s_j) \propto \sum_{s_k,s_l}
\exp[-H_{ijkl}(s_i,s_j,s_k,s_l)] \times \nonumber \\
\lo \times m_{k,d}(s_k) \,
m_{k,e}(s_k) \, m_{l,f}(s_l) \, m_{l,g}(s_l) \times \nonumber \\
\lo \times m_{jk,cd}(s_j,s_k) \, m_{kl,ef}(s_k,s_l) \, m_{lj,gh}(s_l,s_j).
\end{eqnarray}
The above equations can be viewed as a set of equations in the updated
messages at iteration $t+1$, appearing in the l.h.s., given the
messages at iteration $t$, appearing in the r.h.s. It is clear that
one must first calculate the updated pair--to--site messages
according to the first equation, and then the updated square--to--pair
messages according to the second one, using in the l.h.s.\ the updated
pair--to--site messages just obtained.
GBP (possibly with damping) typically exhibits better convergence
properties (and greater accuracy) than BP, but the empirical rule that
a sufficient amount of frustration can prevent convergence holds
also for GBP. It is therefore important to look for provably
convergent algorithms, which will be discussed in the next
subsection. A variation of the BP algorithm, the conditioned
probability (CP) algorithm, with improved convergence properties, has
recently been introduced \cite{MP-Prop}. The extension of this
algorithm beyond the BP level is however not straightforward.
We conclude the present subsection by mentioning that
techniques like the Thouless--Anderson--Palmer equations, or the
cavity method, both widely used in the statistical physics of spin
glasses, are strictly related to the Bethe--Peierls approximation.
The Thouless--Anderson--Palmer \cite{TAP} equations can be derived
from the Bethe--Peierls free energy for the Ising model, through the
so-called Plefka expansion \cite{Plefka}. One has first to write the
free energy as a function of magnetizations and nearest--neighbour
correlations through the parameterization
\begin{equation}
p_i(s_i) = \frac{1 + s_i m_i}{2} \qquad
p_{ij}(s_i,s_j) = \frac{1 + s_i m_i + s_j m_j + s_i s_j c_{ij}}{4},
\end{equation}
then to solve analytically the stationarity conditions with respect to
the $c_{ij}$'s and finally to expand to second order in the inverse
temperature.
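As a small numerical check of this parameterization, one can verify
that the resulting single--site and pair probabilities are normalized
and reproduce the chosen magnetizations; the Python sketch below uses
arbitrary, hypothetical values of $m_i$, $m_j$ and $c_{ij}$.
\begin{verbatim}
import numpy as np

s = np.array([+1, -1])
m_i, m_j, c_ij = 0.3, -0.1, 0.2                   # hypothetical values
p_i  = (1 + s * m_i) / 2                          # p_i(s_i)
p_ij = (1 + np.add.outer(s * m_i, s * m_j)        # p_ij(s_i, s_j)
          + np.outer(s, s) * c_ij) / 4

print("sum p_i  =", p_i.sum())                    # -> 1.0
print("sum p_ij =", p_ij.sum())                   # -> 1.0
print("<s_i>    =", (p_ij.sum(axis=1) * s).sum()) # -> m_i
\end{verbatim}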
Finally, the cavity method \cite{MezPar86,MezPar87,MezPar01} is
particularly important since it allows one to deal with replica symmetry
breaking. The cavity method, though historically derived in a
different way, can be regarded as an alternative choice of messages
and effective fields in the Bethe--Peierls approximation. With
reference to \Eref{p-vs-mess}, introduce messages $m_{k \to a}(s_k)$
from variable nodes to factor nodes according to
\begin{equation}
m_{k \to a}(s_k) = \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\end{equation}
Then the probabilities \Eref{p-vs-mess} become
\begin{eqnarray}
p_i(s_i) &=& \frac{1}{Z_i} \prod_{i \in a}
m_{a \to i}(s_i) \nonumber \\
p_a(\bi{s_a}) &=& \frac{1}{Z_a} \psi_a(\bi{s_a})
\prod_{k \in a} m_{k \to a}(s_k),
\end{eqnarray}
and the message update equations (\ref{BP-mess-upd}) become
\begin{equation}
m_{a \to i}(s_i) \propto \sum_{\bi{s_{a \setminus i}}} \psi_a(\bi{s_a})
\prod_{k \in a}^{k \ne i} m_{k \to a}(s_k).
\end{equation}
The effective fields corresponding to the factor--to--variable
messages $m_{a \to i}(s_i)$ are usually called cavity biases, while
those corresponding to the variable--to--factor messages $m_{i \to
a}(s_i)$ are called cavity fields. In the Ising example above a factor
node is just a pair of NNs and cavity biases reduce to effective
fields $h_{i,j}$, while cavity fields take the form
$\displaystyle{\sum_{k {\rm NN} i}^{k \ne j} h_{i,k}}$.
The cavity method admits an extension to cases where one step of
replica symmetry breaking occurs \cite{MezPar01,MezPar03}. In such a
case one assumes that there exist many states characterized by
different values of the cavity biases and fields, and introduces the
probability distributions of cavity biases and fields over the
states. From the above message update rules one can then derive
integral equations, similar to \Eref{IntegralEquation}, for the
distributions. These integral equations can in principle be solved by
iterative population dynamics algorithms, but most often one restricts
attention to the zero temperature case, where these distributions have a
discrete support.
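For concreteness, a minimal Python sketch of a population dynamics
iteration is given below. It implements the simpler, replica symmetric
version for an Ising spin glass on a random regular graph of degree
$c$ (each cavity field is the sum of $c-1$ cavity biases), which is
structurally analogous to, but not the same as, the one--step replica
symmetry breaking scheme discussed in the text; the parameters,
population size and number of sweeps are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, J, c, M = 1.0, 1.0, 4, 10000        # inverse temperature, coupling, degree, population
pop = rng.normal(size=M)                  # population of cavity fields h

def u(Jb, h):                             # cavity bias produced by field h across bond Jb
    return np.arctanh(np.tanh(beta * Jb) * np.tanh(beta * h)) / beta

for sweep in range(200):                  # replace random members of the population
    for _ in range(M // 10):
        idx = rng.integers(0, M, size=c - 1)
        Jb  = J * rng.choice([-1.0, 1.0], size=c - 1)   # bimodal +/- J couplings
        pop[rng.integers(M)] = np.sum(u(Jb, pop[idx]))

print("mean |h| over the population:", np.mean(np.abs(pop)))
\end{verbatim}
In the one--step scheme one would instead keep a population of such
populations, one per pure state, and reweight them appropriately.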
The zero temperature case is particularly relevant for hard
combinatorial optimization problems, where 1--step replica symmetry
breaking corresponds to clustering of solutions. Clustering means that
the space of solutions becomes disconnected, made of subspaces which
cannot be reached from one another by means of local moves, and hence
all local algorithms, like BP or GBP, are bound to fail. The cavity
method has been used to solve these kinds of problems in the framework
of the survey propagation algorithm \cite{SPScience}, which has been
shown to be a very powerful tool for constraint satisfaction problems
like satisfiability \cite{SPSAT} and colouring \cite{SPCOL} defined on
finite connectivity random graphs. These graphs are locally tree--like
and therefore all the analysis can be carried out at the
Bethe--Peierls level. A sort of generalized survey propagation capable
of dealing with short loops would really be welcome, but it seems that
realizability issues are crucial here and replica symmetry breaking
can only be introduced when CVM gives an exact solution.
A different approach, still aimed at generalizing the BP algorithm to
situations where replica symmetry breaking occurs, has been suggested
by van Mourik \cite{Jort}, and is based on the analysis of the time
evolution of the BP algorithm.
\subsection{Variational algorithms}
\label{VarAlg}
In the present subsection we discuss algorithms which update
probabilities instead of messages. At every iteration a new estimate
of probabilities, and hence of the free energy, is obtained. These
algorithms are typically provably convergent, and the proof is based
on showing that the free energy decreases at each iteration. This is
of course not possible with BP and GBP algorithms, where the
probabilities and the free energy can be evaluated only at the fixed
point. The price one has to pay is that in variational algorithms one
has to solve the compatibility constraints at every iteration, and
therefore these are double loop algorithms, where the outer loop is
used to update probabilities and the inner loop is used to solve the
constraints.
The natural iteration method (NIM) \cite{Kik74,Kik76} is the oldest
algorithm specifically designed to minimize the CVM variational free
energy. It was originally introduced \cite{Kik74} in the context of
homogeneous models, for the pair and tetrahedron (for the fcc lattice)
approximations. In such cases the compatibility constraints are
trivial. Later \cite{Kik76} it was generalized to cases where the
compatibility constraints cannot be solved trivially. An improved
version of the algorithm, with tunable convergence properties,
appeared in \cite{KiKoKa} and its application is described in some
detail also in \cite{3CVM}, where higher order approximations are
considered.
The algorithm is based on a double loop scheme, where the inner loop
is used to solve the compatibility constraints, so that at each
iteration of the outer loop a set of cluster probabilities which
satisfy the constraints is obtained.
Proofs of convergence, based on showing that the free energy decreases
at every outer loop iteration, exist in many cases, but it has also
been shown that there are non--convergent cases, like the
four--dimensional Ising model \cite{Pretti} in the hypercube
approximation.
We do not discuss this algorithm in detail since it is rather slow,
and better alternatives have recently been developed.
A first step in this direction was the {\it concave--convex procedure}
(CCCP) by Yuille \cite{Yuille}, who started from the observation that
the non--convergence problems of message--passing algorithms arise
from concave terms in the variational free energy, that is from the
entropy of clusters with negative M\"obius numbers. His idea was then
to split the CVM free energy into a convex and a concave part,
\begin{equation}
{\cal F}(\{p_\alpha\}) = {\cal F}_{\rm vex}(\{p_\alpha\}) +
{\cal F}_{\rm cave}(\{p_\alpha\}),
\label{CCCPsplit}
\end{equation}
and to write the update equations to be iterated to a fixed point
as
\begin{equation}
\nabla {\cal F}_{\rm vex}(\{p_\alpha^{(t+1)}\}) = -
\nabla {\cal F}_{\rm cave}(\{p_\alpha^{(t)}\}),
\label{CCCPiter}
\end{equation}
where $p_\alpha^{(t)}$ and $p_\alpha^{(t+1)}$ are successive
iterates. In order to solve the compatibility constraints, at each
iteration of \Eref{CCCPiter}, the Lagrange multipliers enforcing the
constraints are determined by another iterative algorithm where one
solves for one multiplier at a time, and it can be shown that the free
energy decreases at each outer loop iteration. Therefore we have another double
loop algorithm, which is provably convergent, faster than NIM (as we
shall see below), and allows some freedom in the splitting
between convex and concave parts.
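As a toy illustration of the update rule \Eref{CCCPiter}, deliberately
stripped of the compatibility constraints and of any CVM structure,
consider minimizing the one--dimensional function
$F(x) = x^4 - 2x^2$, split into the convex part ${\cal F}_{\rm vex} = x^4$
and the concave part ${\cal F}_{\rm cave} = -2x^2$; the starting point
and the number of iterations in the Python sketch below are arbitrary.
\begin{verbatim}
# CCCP toy example: minimise F(x) = x**4 - 2*x**2.
# Update rule: F_vex'(x_new) = -F_cave'(x_old)  =>  4*x_new**3 = 4*x_old.
x = 0.2                                   # arbitrary starting point
for t in range(30):
    x = x ** (1.0 / 3.0)                  # convexified problem solved in closed form
    print(t, x, x**4 - 2 * x**2)          # F(x) decreases monotonically towards -1
\end{verbatim}
The iterates converge to the minimum at $x = 1$, and the value of $F$
decreases at every step, as guaranteed by the general argument.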
A more general and elegant formalism, which will be described in the
following, has however been put forward by Heskes, Albers and Kappen
(HAK) \cite{HAK}. Their basic idea is to consider a sequence of convex
variational free energies such that the sequence of the corresponding
minima tends to the minimum of the CVM free energy. More precisely, if
the CVM free energy ${\cal F}(\{ p_\alpha, \alpha \in R \})$ is
denoted for simplicity by ${\cal F}(p)$, they consider a function
${\cal F}_{\rm conv}(p,p')$, convex in $p$, with the properties
\begin{eqnarray}
{\cal F}_{\rm conv}(p,p') \ge {\cal F}(p), \nonumber \\
{\cal F}_{\rm conv}(p,p) = {\cal F}(p).
\end{eqnarray}
The algorithm is then defined by the update rule for the probabilities
\begin{equation}
p^{(t+1)} = {\rm arg}\min_{p} {\cal F}_{\rm conv}(p,p^{(t)}),
\label{HAKouter}
\end{equation}
and it is easily proved that the free energy decreases at each
iteration and that a minimum of the CVM free energy is recovered at
the fixed point.
A lot of freedom is left in the definition of ${\cal F}_{\rm conv}$,
and strategies of varying complexity and speed can be obtained. NIM
(when convergent) and CCCP can also be recovered as special cases.
The general framework is based on the following three properties.
\begin{enumerate}
\item If $\beta \subset \alpha$, then
\begin{equation}
- S_\alpha + S_\beta = \sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) \ln
p_\alpha(\bi{s_\alpha}) - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta})
\ln p_\beta(\bi{s_\beta})
\end{equation}
is convex over the constraint set, i.e.\ it is a convex function of
$p_\alpha$ and $p_\beta$ if these satisfy the compatibility constraint
\Eref{CompConstr}.
\item The linear bound
\begin{equation}
S_\beta = - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta}) \ln
p_\beta(\bi{s_\beta}) \le - \sum_{\bi{s_\beta}}
p_\beta(\bi{s_\beta}) \ln p'_\beta(\bi{s_\beta}) = S'_\beta
\end{equation}
holds, with equality only for $p'_\beta = p_\beta$.
\item If $\gamma \subset \beta$, and $p_\beta$ and $p_\gamma$
($p'_\beta$ and $p'_\gamma$) satisfy the compatibility constraints,
the bound
\begin{eqnarray}
\fl S_\beta - S_\gamma = - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta}) \ln
p_\beta(\bi{s_\beta}) + \sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma})
\ln p_\gamma(\bi{s_\gamma}) \le \nonumber \\
\lo \le - \sum_{\bi{s_\beta}}
p_\beta(\bi{s_\beta}) \ln p'_\beta(\bi{s_\beta}) +
\sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma}) \ln
p'_\gamma(\bi{s_\gamma}) = S'_\beta - S'_\gamma
\end{eqnarray}
holds, and it is tighter than the previous bound. A tighter bound
typically entails faster convergence.
\end{enumerate}
In order to give an example, consider again the CVM square
approximation for a model on a regular square lattice with periodic
boundary conditions and focus on the entropy part of the free energy,
which according to the entropy expansion
\Eref{SquareEntropy} has the form
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i = \sum_{\opensquare} p_{\opensquare} \ln
p_{\opensquare} - \sum_{\langle i j \rangle} p_{ij} \ln p_{ij} +
\sum_i p_i \ln p_i.
\end{equation}
This contains both convex (from square and site entropy) and concave
terms (from pair entropy). Notice that the number of plaquettes is
the same as the number of sites, while there are two pairs (e.g.\
horizontal and vertical) per site. This implies that the free energy
is not convex over the constraint set.
Several bounding schemes are possible to define ${\cal F}_{\rm
conv}$. For instance, one can obtain a function which is just convex
over the constraint set by applying property (iii) to the site terms
and half the pair terms, with the result
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} +
\frac{1}{2} \sum_{\langle i j \rangle} S_{ij} +
\frac{1}{2} \sum_{\langle i j \rangle} S'_{ij} - \sum_i S'_i.
\label{JustConvex}
\end{equation}
In the following the HAK algorithm will always be used with this
bounding scheme.
The NIM can be obtained if, starting from the above expression, one
applies property (ii) to the not yet bounded pair terms, with the
result
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} +
\sum_{\langle i j \rangle} S'_{ij} - \sum_i S'_i.
\end{equation}
This is clearly a looser bound than the previous one, and hence it
leads to a (much) slower algorithm. In the general case, the NIM
(which of course was formulated in a different way) can be obtained by
bounding all entropy terms except those corresponding to the maximal
clusters. This choice does not always lead to a convex bound (though
in most practically relevant cases this happens) and hence convergence
is not always guaranteed.
The CCCP recipe corresponds to bounding every concave ($a_\beta < 0$)
term by
\begin{equation}
- a_\beta S_\beta \le - S_\beta + (1 - a_\beta) S'_\beta,
\end{equation}
using property (ii). In the present case this gives
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} -
\sum_{\langle i j \rangle} S_{ij} +
2 \sum_{\langle i j \rangle} S'_{ij} - \sum_i S_i,
\end{equation}
which is convex independently of the constraints, and hence the bound
is again looser than \Eref{JustConvex}.
In all cases one is left with a double loop algorithm, the outer loop
being defined by the update rule for probabilities, and the inner loop
being used for the minimization involved in \Eref{HAKouter}. This
minimization is simpler than the original problem, since the function
to be minimized is convex. In each of the above schemes a particular
technique was proposed for the convex minimization in the inner loop,
and here these will not be covered in detail.
A point which is important to notice here is that the bounding
operation gives a new free energy which is structurally different from
a CVM free energy. It must be minimized with respect to $p$ at fixed
$p'$ and, viewed as a function of $p$, it contains an entropy
expansion with coefficients $\tilde a_\beta$ which do not satisfy
anymore the M\"obius relation (\ref{MobiusNumbers}) (for instance, in
the ``just convex over the constraint set'' scheme, we have
$a_{\opensquare} = 1$, $a_{ij} = -1/2$ and $a_i = 0$). This means that
a message--passing algorithm like parent--to--child GBP, which relies
on the M\"obius property, cannot be applied. In \cite{HAK} a different
message--passing algorithm, which can still be viewed as a GBP
algorithm, is suggested.
Observe also that there are entropy--like terms $S'_\beta$ which are
actually linear in $p_\beta$ and must therefore be absorbed in the
energy terms.
The main reason for investigating these double loop, provably
convergent algorithms, is the non--convergence of BP and GBP in
frustrated cases. Since BP and GBP, when they converge, are the
fastest algorithms for the determination of the minima of the CVM free
energy, it is worth making some performance tests to evaluate the
speed of the various algorithms. The CPU times reported below refer to
an Intel Pentium 4 processor at 3.06 GHz, using g77 under GNU/Linux.
Consider first a chain of $N$ Ising spins, with ferromagnetic
interactions $J>0$ and random bimodal fields $h_i$ independently drawn
from the distribution
\begin{equation}
p(h_i) = \frac{1}{2} \delta(h_i - h_0) + \frac{1}{2} \delta(h_i + h_0).
\end{equation}
The boundary conditions are open, and the model is exactly solved by
the CVM pair approximation. The various algorithms described above are run
from a disordered, uncorrelated state and stopped when the distance
between two successive iterations, defined as the sum of the squared
variations of the messages (or the probabilities, or the Lagrange
multipliers, depending on the algorithm and on the loop -- outer or inner
-- considered), falls below a prescribed tolerance. \Fref{CPU1d} reports the CPU times obtained with
several algorithms, for the case $J = 0.1$, $h_0 = 1$. The HAK
algorithm is not reported since it reduces to BP due to the convexity
of the free energy. It is seen that the CPU time grows linearly with
$N$ for all algorithms except NIM, in which case it goes like
$N^3$. Despite the common linear behaviour, there are order-of-magnitude
differences between the various algorithms. While BP and CP
converge in 4 and 9 seconds, respectively, for $N = 10^6$, CCCP takes
15 seconds for $N = 10^4$. For NIM, finally, the fixed point is
reached in 12 seconds for $N = 10^2$.
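A minimal Python sketch of this benchmark is reported below. It uses
the parameters quoted in the text ($J = 0.1$, $h_0 = 1$) but a much
smaller chain; the message bookkeeping, the tolerance and the random
seed are arbitrary and unrelated to the code actually used for the
timings in \Fref{CPU1d}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, J, h0, tol = 1000, 0.1, 1.0, 1e-12
h = h0 * rng.choice([-1.0, 1.0], size=N)      # random bimodal fields
s = np.array([+1.0, -1.0])                    # the two Ising states
psi = np.exp(J * np.outer(s, s))              # pair factor exp(J s_i s_j)

# R[i]: message flowing rightwards into site i; L[i]: message flowing leftwards into i
R = np.ones((N, 2)) / 2
L = np.ones((N, 2)) / 2

for it in range(10000):                       # parallel sweeps
    R_new, L_new = R.copy(), L.copy()
    for i in range(1, N):                     # message from pair (i-1, i) into i
        v = psi.T @ (np.exp(h[i - 1] * s) * R[i - 1])
        R_new[i] = v / v.sum()
    for i in range(N - 2, -1, -1):            # message from pair (i, i+1) into i
        v = psi @ (np.exp(h[i + 1] * s) * L[i + 1])
        L_new[i] = v / v.sum()
    dist = np.sum((R_new - R) ** 2) + np.sum((L_new - L) ** 2)
    R, L = R_new, L_new
    if dist < tol:                            # stopping criterion used in the tests
        break

p = np.exp(h[:, None] * s) * R * L            # site marginals, up to normalisation
p /= p.sum(axis=1, keepdims=True)
print("converged after", it + 1, "sweeps; <s_0> =", p[0] @ s)
\end{verbatim}
Since the chain is a tree, the messages converge to the exact
single--site marginals.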
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-1d.eps}
\end{center}
\caption{\label{CPU1d}CPU times (seconds) for the 1d Ising chain with
random fields}
\end{figure}
As a further test, consider, again at the level of the pair
approximation, the two--dimensional Edwards--Anderson spin glass
model, defined by the Hamiltonian \Eref{Ising} with $h_i = 0$ and
random bimodal interactions $J_{ij}$ independently drawn from the
distribution
\begin{equation}
p(J_{ij}) = (1-p) \delta(J_{ij} - J) + p \, \delta(J_{ij} + J).
\end{equation}
Here the frustration effects are even more important and the
non--convergence problem of BP becomes evident. As a rule of
thumb, when the temperature, measured by $J^{-1}$, is small enough and
$p$ (the fraction of antiferromagnetic bonds) is large enough, the BP
algorithm stops converging. The condition for the instability of the
BP fixed point has been computed, in the average case, for Ising spin
glass models with pairwise interactions \cite{SG-BP-conv}. In order to
compare algorithm performances, \Fref{CPU2dP} reports CPU times vs $L$
for $N = L^2$ lattices with periodic boundary conditions, $J = 0.2$
and $p = 1/2$, that is well into the paramagnetic phase of the
model. The initial guess is a ferromagnetic state with $m_i = 0.9,
\forall i$. It is seen that the CPU times scale roughly as $N^{1.1}$
for all the algorithms considered except NIM, which goes like
$N^{1.8}$. Again the algorithms with nearly linear behaviour are separated by
orders of magnitude. For $L = 320$ BP converges in 6 seconds, HAK in
370 seconds and CCCP in 2460 seconds.
CP has not been considered in the present and the following tests,
although empirically it is seen that its behaviour is rather close to
that of the HAK algorithm. Its performance is however severely limited as soon
as one considers variables with more than two states, due to a sum over
the configurations of the neighbourhood of a NN pair.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-2d-P.eps}
\end{center}
\caption{\label{CPU2dP}CPU times (seconds) for the 2d
Edwards--Anderson model in the paramagnetic phase}
\end{figure}
A similar comparison can be made in the ferromagnetic phase, setting
$J = 0.5$ and $p = 0.1$. Here the CPU times for the BP algorithm
exhibit large fluctuations for different realizations of the disorder,
and the data reported are obtained by averaging over 30 such
realizations. Now all algorithms exhibit comparable scaling
properties, with CPU times growing like $N^{1.5} \div N^{1.7}$. As far
as absolute values are concerned, for $L = 50$ convergence is reached
in 4, 44, 680 and 1535 seconds by BP, HAK, CCCP and NIM
respectively.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-2d-F.eps}
\end{center}
\caption{\label{CPU2dF}CPU times (seconds) for the 2d
Edwards--Anderson model in the ferromagnetic phase}
\end{figure}
A similar scaling analysis was not possible in the glassy phase
(which is unphysically predicted by the pair approximation), due to
the non--convergence of BP and to the large fluctuations in the convergence
time of the other algorithms.
As a general remark we observe that BP is the fastest algorithm
available whenever it converges. Among the provably convergent
algorithms, the fastest one turns out to be HAK, at least in the
``just convex over the constraints'' \cite{HAK} scheme which was used
here.
\section{Conclusions}
\label{Conclusions}
Some aspects of the cluster variation method have been briefly reviewed.
The emphasis was on recent developments, not yet covered by the 1994
special issue of Progress of Theoretical Physics Supplement \cite{PTPS}, and
the focus was on the methodological aspects rather than on the
applications.
The discussion has been based on what can be considered the modern
formulation of the CVM, due to An \cite{An88}, based on a truncation
of the cumulant expansion of the entropy in the variational principle
of equilibrium statistical mechanics.
The advancements in this last decade were often due to the interaction
between two communities of researchers, working on statistical physics
and, in a broad sense, probabilistic graphical models for inference
and optimization problems. The interest of both communities is
currently on heterogeneous problems, while in the past the CVM was
most often applied to translation invariant lattice models (in this
topic, the only new advancements discussed have been the attempts to
extract information about critical behaviour from CVM results). The
more general point of view that has to be adopted in studying
heterogeneous problems has been crucial to achieve many of the results
discussed.
The formal properties of the CVM have been better understood by
comparing it with other region--based approximations, like the
junction graph method or the most general formulation of the
Bethe--Peierls approximation (the lowest order of the CVM), which can
also treat non--pairwise interactions. Studying realizability, that is
the possibility of reconstructing a global probability distribution
from the marginals predicted by the CVM, has led to the discovery of
non--tree--like models for which the CVM gives the exact solution.
A very important step was made by understanding that belief
propagation, a message--passing algorithm widely used in the
literature on probabilistic graphical models, has fixed points which
correspond to stationary points of the Bethe--Peierls
approximation. The belief propagation can thus be regarded as a
powerful algorithm to solve the CVM variational problem, that is to
find minima of the approximate free energy, at the Bethe--Peierls
level. This opened the way to the formulation of generalized belief
propagation algorithms, whose fixed points correspond to stationary
points of the CVM free energy, at higher level of approximation.
Belief propagation and generalized belief propagation are certainly
the fastest available algorithms for the minimization of the CVM free
energy, but they often fail to converge. Typically this happens when
the problems under consideration are sufficiently frustrated. In order
to overcome this difficulty double loop, provably convergent
algorithms have been devised, for which the free energy can be shown
to decrease at each iteration. These are similar in spirit to the old
natural iteration method by Kikuchi, but orders of magnitude faster,
though not as fast as BP and GBP.
When the frustration due to competitive interactions or constraints is
very strong, as in spin--glass models in the glassy phase or in
constraint satisfaction problems in the hard regime, even double loop
algorithms become useless, since we are faced with the problem of
replica symmetry breaking, corresponding to clustering of
solutions. Very important advancements have been made in recent years
by extending the belief propagation algorithm into this domain. These
results are in a sense at the border of the CVM, since they are at
present confined to the lowest order of the CVM approximation, that is
the Bethe--Peierls approximation.
It will be of particular importance, in view of the applications to
hard optimization problems with non--tree--like structure, to
understand how to generalize these results to higher order
approximations.
\ack
I warmly thank Pierpaolo Bruscolini, Carla Buzano and Marco Pretti,
with whom I have had the opportunity to collaborate and to exchange
ideas about the CVM, Marco Zamparo for a fruitful discussion about
\Eref{FactorProp}, Riccardo Zecchina for many discussions about the
survey propagation algorithm, and the organizers of the Lavin workshop
``Optimization and inference in machine learning and physics'' where I
had the opportunity to discuss an early version of this work.
\section*{References}
\section{Introduction}
Members of our group have been involved in long-term studies of abundances in
Galactic halo stars.
These studies have been designed to address a number of important
issues, including: the synthesis mechanisms of
the heavy,
specifically, neutron capture ($n$-capture)
elements, early in the history of the
Galaxy; the identities of the earliest stellar generations,
the progenitors of the
halo stars; the site or sites for the synthesis of the
rapid $n$-capture ({\it i.e.}, $r$-process) material
throughout the Galaxy; the Galactic Chemical Evolution (GCE) of
the elements; and by employing the abundances of the
radioactive elements (Th and U) as
chronometers, the ages of the oldest stars, and hence the lower limit
on the age of the Galaxy and the Universe. (See \citealt{truran02},
\citealt{snedencowan03}, \citealt{cowan04}, and \citealt{sneden08}
for discussions of these related and significant topics.)
In the following paper we review some of the results of our studies,
starting with new stellar abundance determinations arising from
more accurate laboratory atomic data
in \S II, followed by abundance comparisons
of the
lighter and heavier $n$-capture elements in the $r$-process-rich stars
in \S III, with new species detections in the star BD+17{$^{\circ}$} 3248 \
and the ubiquitous nature of the $r$-process throughout the
Galaxy described in
\S IV and \S V, respectively. We end with our Conclusions in \S VI.
\section{Atomic Data Improvements and Abundance Determinations}
Stellar abundance determinations of the $n$-capture elements in
Galactic halo stars have become increasingly accurate
over the last decade, with typical errors now
of less than 10\% \citep{sneden08}.
Much of that improvement in the precision of the stellar abundances
has been due to increasingly more accurate laboratory atomic data.
New measurements of the transition probabilities
have been published for the rare earth elements (REE) and several
others, including:
La~II \citep{lawler01a};
Ce~II (\citealt{palmeri00};
and recently transition probabilities for 921 lines for Ce~II,
\citealt{lawler09});
Pr~II \citep{ivarsson01};
Nd~II (transition probabilities for more than 700
Nd~II lines, \citealt{denhartog03});
Sm~II (\citealt{xu03};
and recently transition probabilities for more
than 900 Sm~II lines, \citealt{lawler06});
Eu~I, II, and III (\citealt{lawler01c}; \citealt{denhartog02});
Gd~II \citep{denhartog06};
Tb~II (\citealt{denhartog01}; \citealt{lawler01b});
Dy~I and II \citep{wickliffe00};
Ho~II \citep{lawler04};
Er~II (transition probabilities for 418
lines of Er II, \citealt{lawler08});
Tm~I and II (\citealt{anderson96}; \citealt{wickliffe97});
Lu~I, II, and III (\citealt{denhartog98}; \citealt{quinet99},
\citealt{fedchak00});
Hf~II \citep{lawler07};
Os~I and II (\citealt{ivarsson03,ivarsson04}; \citealt{quinet06});
Ir~I and II (\citealt{ivarsson03,ivarsson04}; \citealt{xu07});
Pt~I \citep{denhartog05};
Au~I and II (\citealt{fivet06}; \citealt{biemont07});
Pb~I \citep{biemont00};
Th~II \citep{nilsson02a};
U~II \citep{nilsson02b};
and finally in new, more precise solar and stellar abundances of
Pr, Dy, Tm, Yb, and Lu \citep{sneden09}.
These new atomic data have been employed to redetermine the solar
and stellar abundances.
We show in Figure~\ref{f8} (from \citealt{sneden09})
the relative REE, and Hf,
abundances in five $r$-process rich stars: BD+17{$^{\circ}$} 3248, \mbox{CS~22892-052}, \mbox{CS~31082-001},
HD~115444 and HD~221170,
where
the abundance distributions have been scaled to the element Eu for
these comparisons. Also shown in Figure~\ref{f8}
are two Solar System $r$-process-only
abundance predictions from \citet{arlandini99} (based upon a
stellar model calculation) and
\citet{simmerer04} (based upon the ``classical'' $r$-process residual
method) that are also matched to the Eu abundances.
What is clear from the figure is that all of the
REE abundances---as well as Hf, which is a heavier interpeak element---are
in the same relative proportions from
star-to-star and with respect to the solar $r$-process abundances.
This agreement between the heavier $n$-capture elements and the
Solar System $r$-process abundance distribution
has been noted in the past (see, {\it e.g.}, \citealt{sneden03}), but
the overall agreement has become much more
precise, and convincing, as a result of the new atomic laboratory data.
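For readers who wish to reproduce this kind of comparison, the scaling
to Eu amounts to a constant shift in $\log\epsilon$; the short Python
sketch below uses entirely fictitious abundance values and serves only
to make the arithmetic explicit.
\begin{verbatim}
# Scale the solar r-process distribution to a star via Eu.
# All numbers below are placeholders, not real measurements.
star    = {"La": 0.10, "Eu": -0.20, "Hf": -0.05}   # log eps in the star
solar_r = {"La": 1.15, "Eu": 0.52,  "Hf": 0.85}    # log eps, solar r-process only

shift = star["Eu"] - solar_r["Eu"]                 # match the curves at Eu
for el in star:
    scaled = solar_r[el] + shift                   # scaled solar r-process prediction
    print(el, "star =", star[el],
          " scaled solar r =", round(scaled, 2),
          " difference =", round(star[el] - scaled, 2))
\end{verbatim}
Star-to-star agreement of the heavier $n$-capture elements then shows
up as differences consistent with zero for all elements.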
\begin{figure*}
\plotone{f8.eps}
\caption{Recent
abundance determinations in five $r$-process rich
stars, based upon
new atomic lab data, compared with two solar system $r$-process only
predictions. The abundances in each star have been normalized to the
element Eu. After \citet{sneden09}.
Reproduced by permission of the AAS.
}
\label{f8}
\end{figure*}
\section{Abundance Comparisons}
We can also compare more comprehensive---not just the
REE---elemental abundance determinations for the
$r$-process-rich halo stars. This is potentially
a more rewarding enterprise, as it can illuminate the complex
nucleosynthetic
origin of the lightest $n$-capture elements, and can provide new ways of
looking at the age of the Galactic halo.
\subsection{Heavy $n$-capture Elements}
We show in Figure~\ref{compnew4} abundance comparisons with extensive
elemental data for
10 $r$-process-rich stars
(from the top):
filled (red) circles, CS~22892-052 \citep{sneden03,sneden09};
filled (green) squares, HD~115444 \citep{westin00,sneden09,hansen11};
filled (purple) diamonds, BD+17{$^{\circ}$} 3248\ \citep{cowan02,roederer10b};
(black) stars, CS~31082-001 \citep{hill02,plez04};
solid (turquoise) left-pointing triangles, HD~221170 \citep{ivans06,sneden09};
solid (orange) right-pointing triangles, HE~1523-0901 \citep{frebel07};
(green) crosses, CS~22953-003 \citep{francois07};
open (maroon) squares, HE~2327-5642 \citep{mashonkina10};
open (brown) circles, CS~29491-069 \citep{hayek09}; and
open (magenta) triangles, HE~1219-0312 \citep{hayek09}.
The abundances of all the stars except
\mbox{CS~22892-052}\ have been vertically displaced
downwards for display purposes.
In each case the
solid lines are (scaled) solar system $r$-process only predictions from
\citet{simmerer04} that have been matched to the Eu abundances.
The figure indicates that for the ten stars plotted, the
abundances of {\it all} of the heavier stable $n$-capture elements
({\it i.e.}, Ba and above) are
consistent with the relative solar system $r$-process abundance distribution
(see also \citealt{sneden09}).
Earlier work had demonstrated this agreement for
several $r$-process rich stars (where [Eu/Fe] $\simeq$ 1), including \mbox{CS~22892-052},
and the addition of still more such $r$-process-rich stars supports that
conclusion.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=7.00in]{compnew4.ps}
\caption{
Abundance comparisons between 10 $r$-process rich stars and the Solar
System $r$-process values.
See text for references.
Adapted from \citet{sneden11}.
}
\label{compnew4}
\end{figure*}
\subsection{Light $n$-capture Elements}
While the heavier $n$-capture elements appear to be consistent with
the scaled solar system $r$-process curve, the lighter $n$-capture
elements (Z $<$ 56) seem to fall below that same solar curve.
One problem in analyzing this region of interest
is that there have been relatively few stellar observations of these
lighter $n$-capture elements until now.
With the limited amount of data it is not yet clear if the pattern
is the same from star-to-star for the lighter $n$-capture elements in
these $r$-process rich stars.
There has been extensive work on trying to understand the
synthesis of these elements.
Observations of 4 metal-poor $r$-enriched stars
by \citet{crawford98}
suggested that Ag (Z = 47) was produced in rough proportion
to the heavy elements in stars with $-$2.2~$<$~[Fe/H]~$< -$~1.2.
\citet{wasserburg96} and \citet{mcwilliam98} pointed out
that multiple sources of heavy elements (other than the $s$-process)
were required to account for the observed abundances
in the solar system and extremely metal-poor stars, respectively.
\citet{travaglio04} quantified this effect, noting that Sr-Zr
Solar System abundances
could not be totally accounted for from traditional sources, such as
the $r$-process, the (main) $s$-process and the weak $s$-process.
They suggested that
the remaining (missing) abundances---8\% for Sr to 18\%
for Y and Zr---came from
a light element primary process (LEPP).
Travaglio et al.\ also noted,
``The discrepancy in the $r$-fraction of Sr-Y-Zr between the
$r$-residuals method and the \mbox{CS~22892-052}\ abundances
becomes even larger for elements from Ru to Cd: the weak
$s$-process does not contribute to elements from Ru to Cd. As
noted [previously], this discrepancy suggests an even
more complex multisource nucleosynthetic origin for elements
like Ru, Rh, Pd, Ag, and Cd.''
\citet{montes07} extended studies of the LEPP and suggested
that a range of $n$-capture elements, perhaps even including heavier
elements such as Ba, might have a contribution
from this primary process. (Since, however, Ba in $r$-process
rich stars is consistent with
the solar $r$-process abundances, such contributions
for these heavier
elements must be quite small.)
They noted, in particular, that this LEPP might
have been important in synthesizing the abundances in the $r$-process poor
star HD~122563.
Further insight into the (complicated)
origin of the lighter $n$-capture elements is
provided by the detections of Ge (Z = 32) in a few stars.
\citet{cowan05} noted a correlation of Ge with the
iron abundances in the halo
stars with $-$3.0~$\lesssim$~[Fe/H]~$\lesssim -$1.5,
suggesting that the Ge is being produced along with the Fe-group elements
at these low metallicities.
To produce the protons needed to satisfy
such a correlation, a new neutrino ({\it i.e.}, $\nu$-p) process that might
occur in supernovae was suggested \citep{frohlich06}.
We note that for higher ({\it i.e.}, at solar)
metallicities, Ge is considered a neutron-capture element,
synthesized in the $r$-process (52\%) and the $s$-process (48\%)
(\citealt{simmerer04}; \citealt{sneden08}). Thus, there should be
a change in the slope of the Ge abundances from low metallicities to
higher metallicities, a behavior that has not yet been observed.
\begin{figure*}
\centering
\plotone{bdp17.eps}
\caption{$r$-process abundance predictions for light $n$-capture elements
compared with observations of BD+17{$^{\circ}$} 3248 \ from \citet{roederer10b}.
See text for further details.}
\label{bdp17}
\end{figure*}
We show in
Figure~\ref{bdp17} several $r$-process predictions for the
lighter $n$-capture element abundances compared with observations of
those elements in BD+17{$^{\circ}$} 3248 \ from \citet{roederer10b}.
The two Solar System $r$-process models (``classical'' and ``stellar model'')
reproduce some of these elements but begin to diverge from the
observed abundances at Rh (Z = 45).
Also shown in
Figure~\ref{bdp17} are predictions from
a High Entropy Wind (HEW) model that might be
typical of a core-collapse (or Type II) supernova (\citealt{farouqi09};
K.-L.~Kratz, private communication).
This model gives a better fit to the abundances, but does not reproduce
the observed odd-even effects in Ag (Z = 47) and Cd (Z = 48) in this star
(resembling a trend discovered in other
$r$-enriched stars by \citealt{johnson02}).
Recent work by \citet{hansen11} to study Pd (Z = 46) and Ag
abundances in stars with $-$3.2~$\lesssim$~[Fe/H]~$\lesssim -$0.6
confirms the divergence between observations and simulation predictions.
These comparisons between calculations and observations do in fact
argue for a combination of processes to reproduce
the observed stellar abundances of some of these light $n$-capture elements.
This combination of processes might include (contributions from)
the main $r$-process, the LEPP, the $\nu$-p process,
charged-particle reactions
accompanied by $\beta$-delayed fission
and the weak $r$-process ({\it e.g.}, \citealt{kratz07}).
(See, {\it e.g.},
\citealt{farouqi09,farouqi10}, \citealt{roederer10a,roederer10b},
and \citealt{arcones11}
for further discussion.)
It may also be that during the synthesis
the main $r$-process and the LEPP are separate processes,
and that the abundance patterns in all metal-poor stars could be
reproduced by mixing their yields \citep{montes07}.
Alternatively, it may be
that the $r$-process and the LEPP
can be produced in the same events, but sometimes
only the lower neutron density components are present
\citep{kratz07,farouqi09}.
It has also been suggested that the
heavier and lighter
$n$-capture elements are synthesized in separate sites (see {\it e.g.},
\citealt{qian08}).
New observations of heavy elements in metal-poor globular cluster
stars reaffirm the abundance patterns seen in field stars.
In the globular cluster M92, \citet{roederer11b} found that the
observed star-to-star dispersion in Y (Z = 39) and Zr (Z = 40)
is the same as for the Fe-group elements ({\it i.e.}, consistent
with observational uncertainty only).
Yet, the Ba \citep{sneden00}, La, Eu, and Ho abundances exhibit
significantly larger star-to-star dispersion that cannot be
attributed to observational uncertainty alone.
Furthermore, the Ba and heavier elements were produced by $r$-process
nucleosynthesis without any $s$-process contributions.
This indicates that, as in the field stars,
these two groups of elements could not have
formed entirely in the same nucleosynthetic process in M92.
\section{New Species Detections}
\citet{roederer10b} reanalyzed near-UV spectra obtained with
HST/STIS of the
star BD+17{$^{\circ}$} 3248.
(See also \citealt{cowan02,cowan05} for earlier HST observations of BD+17{$^{\circ}$} 3248.)
We show in
Figure~\ref{f1} (from \citealt{roederer10b})
spectral regions around Os~II and Cd~I lines in the
stars BD+17{$^{\circ}$} 3248, HD~122563 and HD~115444. There is a clear detection of Os~II
in both BD+17{$^{\circ}$} 3248\ and HD~115444 but not in HD~122563. The star HD~115444 is
similar in metallicity and atmospheric parameters to HD~122563
(see \citealt{westin00}), but much more
$r$-process rich: [Eu/Fe] = 0.7 versus $-$0.5, respectively.
In the lower panel of
Figure~\ref{f1} we see the presence of Cd~I in BD+17{$^{\circ}$} 3248 \ and HD~115444, as well
as a weak detection in HD~122563.
Synthetic fits to these spectra in BD+17{$^{\circ}$} 3248\ and HD~122563
indicate the presence of
Cd~I and Lu~II lines in both stars,
as well as the detection of (and upper limit on) Os~II in these
two stars, respectively.
This work was
the first to detect Cd~I, Lu~II, and Os~II
in metal-poor halo stars.
\begin{figure}
\centering
\plotone{f1.eps}
\vskip0pt
\caption{
HST (near-UV)
spectral regions containing Os~II and Cd~I lines in BD+17{$^{\circ}$} 3248, HD~122563,
and HD~115444 from \citet{roederer10b}.
Reproduced by permission of the AAS.
}
\label{f1}
\end{figure}
In addition to these new detections,
\citet{roederer10b} employed Keck/HIRES spectra
to derive new abundances of Mo I, Ru I and Rh I in this star.
Combining these abundance determinations led to the detection of a total of
32 $n$-capture species---the most of any metal-poor halo star.
(Previously, CS~22892-052 had the most such detections.)
Further, we note that
the total number of detections in BD+17{$^{\circ}$} 3248 \ does not include the element Ge.
And while Ge may be
synthesized in proton-rich processes early in the history of the
Galaxy,
it is classified as an $n$-capture element in Solar System material
(see \citealt{simmerer04} and \citealt{cowan05}).
We illustrate this total abundance distribution
in Figure~\ref{bdfourthb} compared with the
two Solar System $r$-process curves from \citet{simmerer04} and
\citet{arlandini99}. We again see the close agreement between the heavier
$n$-capture elements and (both of) the predictions for
the Solar System $r$-process curve, as well as
the deviation between the abundances of the
lighter $n$-capture elements and that
same curve.
\begin{figure*}
\centering
\vskip 0.55in
\includegraphics[width=5.00in]{bdfourthb.eps}
\vskip 0.35in
\caption{
The total observed abundance distribution in BD+17{$^{\circ}$} 3248.
There are a total of 32---not including Ge---detections of $n$-capture elements,
the most in any metal-poor halo star. This distribution is
compared with the
two Solar System $r$-process curves from \citet{simmerer04} and
\citet{arlandini99}.
}
\label{bdfourthb}
\end{figure*}
\section{The $r$-process Throughout the Galaxy}
The results of \citet{roederer10b} also confirm earlier work indicating
significant differences in the abundances between $r$-process rich stars, such
as BD+17{$^{\circ}$} 3248, and $r$-process poor stars, such as HD~122563.
This difference is shown clearly in Figure~\ref{f4}. The abundance
distribution for BD+17{$^{\circ}$} 3248 \ (shown in the top panel) is relatively flat---compare
the abundance of Sr with Ba---and
is consistent with the scaled Solar System $r$-process abundances for
the heavy $n$-capture elements.
In contrast the lower panel of this figure indicates that the abundances
in the $r$-process poor HD~122563 fall off dramatically with increasing
atomic number---again compare the abundance of Sr with Ba.
\begin{figure*}
\centering
\includegraphics[width=5.45in]{f4.eps}
\caption{
Abundance distributions in BD+17{$^{\circ}$} 3248\ and HD~122563 with detections indicated by
filled symbols and upper limits by downward-pointing open triangles.
The new measurements of Os, Cd, and Lu illustrated in Figure~\ref{f1}
are labeled.
In the top panel (BD+17{$^{\circ}$} 3248)
the bold curve is an HEW calculation from \citet{farouqi09}
normalized to Sr, while the solid line is the Solar System $r$-process
curve \citep{sneden08} normalized to Eu.
In the bottom panel (HD~122563) the solar curve is normalized both to Eu
(solid line) and
Sr (dotted line).
Abundances were obtained from \citet{cowan02,cowan05}, \citet{honda06},
\citet{roederer09,roederer10b}, and \citet{sneden09}.
Figure from \citet{roederer10b}.
Reproduced by permission of the AAS.
}
\label{f4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=6.00in]{f11b.eps}
\caption{
Differences between Solar System $r$-process abundances and stellar abundances
for 16 metal-poor stars, normalized to Sr.
The stars are listed in order of descending [Eu/Fe], and that value and
[Sr/Fe] are listed in the box to the right in the figure.
A value for a typical uncertainty is illustrated in the lower left.
Note the difference in the abundance pattern between the $r$-process
rich star \mbox{CS~22892-052}\ and that of HD~122563, with the other stars falling
between those extremes.
Abundance references are as follows:
S.S. $r$-process abundances \citep{sneden08};
\mbox{HE~1523-0901} (\citealt{frebel07} and A.\ Frebel,
2009, private communication);
\mbox{CS~31082-001} \citep{hill02,plez04,sneden09};
\mbox{CS~22892-052} \citep{sneden03,sneden09};
\mbox{HE~1219-0312} \citep{hayek09,roederer09};
\mbox{UMi-COS~82} \citep{aoki07};
\mbox{CS~31078-018} \citep{lai08};
\mbox{CS~30306-132} \citep{honda04};
\mbox{BD$+$17~3248} \citep{cowan02,sneden09,roederer10b};
\mbox{HD~221170} \citep{ivans06,sneden09};
\mbox{HD~115444} \citep{westin00,roederer09,sneden09};
\mbox{HD~175305} \citep{roederer10c};
\mbox{BD$+$29~2356} \citep{roederer10c};
\mbox{BD$+$10~2495} \citep{roederer10c};
\mbox{CS~22891-209} \citep{francois07};
\mbox{HD~128279} \citep{roederer10c};
\mbox{HD~13979} (I.\ Roederer et al., in preparation);
\mbox{CS~29518-051} \citep{francois07};
\mbox{CS~22873-166} \citep{francois07};
\mbox{HD~88609} \citep{honda07};
\mbox{CS~29491-053} \citep{francois07};
\mbox{HD~122563} \citep{honda06,roederer10b}; and
\mbox{CS~22949-037} \citep{depagne02}.
Figure from \citet{roederer10a}.
Reproduced by permission of the AAS.
}
\label{f11b}
\end{figure*}
It is clear from much work
({\it e.g.}, \citealt{honda06,honda07}) that the abundances even in a
star such as HD~122563 do come from the
$r$-process---the sources of the $s$-process,
low- or intermediate-mass stars on the AGB with longer evolutionary
timescales, have not had sufficient time to evolve prior to the formation
of this metal-poor halo star (cf.\ \citealt{truran81}).
Instead, one can think of the abundance distribution
in HD~122563, illustrated in Figure~\ref{f4}, as the result of
an ``incomplete $r$-process''---there were not sufficient numbers of
neutrons to form all of the heavier $n$-capture elements,
particularly the ``third-peak'' elements of Os, Ir, and Pt.
In the classical ``waiting point approximation'' the
lighter $n$-capture elements are synthesized from lower neutron number
density (n$_n$)
fluxes, typically 10$^{20}$--10$^{24}$ cm$^{-3}$, with the heavier $n$-capture elements
(and the total $r$-process abundance distribution) requiring
values of n$_n$ = 10$^{23}$--10$^{28}$ cm$^{-3}$
(see Figures 5 and 6 of \citealt{kratz07}).
Physically in this ``incomplete'' or ``weak $r$-process,''
the neutron flux was too low to push the $r$-process ``path''
far enough away from the valley of $\beta$-stability
to reach the higher mass numbers after
$\alpha$ and $\beta$-decays back to stable nuclides.
Instead the lower neutron number densities result in the
$r$-process path being too close to the valley of stability
leading to a diminution in the
abundances of the heavier $n$-capture elements.
The lighter $n$-capture elements, such as Sr, in this star
may have formed as a result of this incomplete or
weak $r$-process, or the LEPP, or combinations as described previously for the
$r$-process rich stars.
This analysis was extended to a larger sample by \citet{roederer10a}
and is illustrated in Figure~\ref{f11b}.
We show the differences between the abundance distributions of 16
metal-poor stars, normalized to Sr,
compared with the Solar System $r$-process distribution \citep{sneden08}.
The stars are plotted in order of descending values of
[Eu/Fe], a measure of their $r$-process richness. Thus, we see near the
top \mbox{CS~22892-052}\ with a value of [Eu/Fe] = 1.6 and near the bottom, HD~122563 with
[Eu/Fe] = $-$0.5. The figure illustrates
the relative flatness of the distributions of
the most $r$-process-rich stars ([Eu/Fe] $\simeq$ 1)
with respect to the solar curves, while the
$r$-process poor stars have abundances that fall off sharply with
increasing atomic number.
It is also clear from Figure~\ref{f11b}
that there are a range of abundance distributions
falling between
these two extreme examples.
(We note that Figure~\ref{f11b} should not be taken as an unbiased
distribution of stars at low metallicity.)
We emphasize four important points here.
First, not all of the metal-poor stars have the same abundance pattern
as \mbox{CS~22892-052}, only those that are $r$-process rich.
Second, while the distributions
are different between the $r$-process rich and poor stars there
is no indication of $s$-process synthesis for these elements.
Thus, at least the heavier $n$-capture
elements in these stars were synthesized in the $r$-process,
and $r$-process material was common in the
early Galaxy.
Third, the approximate downward displacement from the top to the bottom
of Figure~\ref{f11b} (a measure of the decreasing [Eu/Sr] ratio)
roughly scales as the [Eu/Fe] ratio, listed
in the right-hand panel.
This can be understood as follows: since [Eu/Sr] = [Eu/Fe] $-$ [Sr/Fe], and since
the abundance patterns are normalized to Sr while Sr is roughly proportional to Fe
in these stars, i.e., [Sr/Fe] is roughly constant
(with a moderate degree of scatter---cf.\ Figure~7 of \citealt{roederer10a}),
the [Eu/Sr] ratio naturally follows [Eu/Fe]
(see also \citealt{aoki05}).
Finally, we note that Ba has been detected in all of these stars
and the vast majority of low-metallicity field and globular cluster stars
studied to date.
Only in a few Local Group dwarf galaxies do Ba upper limits
hint that Ba (and, by inference, all heavier elements) may be
extremely deficient or absent
\citep{fulbright04,koch08,frebel10}.
\section{Conclusions}
Extensive studies have demonstrated the presence of
$n$-capture elements in the atmospheres of
metal-poor halo and globular cluster stars.
New detections of the $n$-capture elements Cd~I (Z = 48), Lu~II
(Z = 71) and Os~II (Z = 76),
derived from HST/STIS spectra,
have been made in several metal-poor halo stars.
These were the first detections of these species in such stars.
Supplementing these observations with Keck data
and new measurements of Mo~I, Ru~I, and Rh~I,
we reported the detection of 32 $n$-capture
elements in \mbox{BD$+$17~3248}. This is currently the largest number of such
elements detected in any metal-poor halo star, supplanting the previous ``champion''
\mbox{CS~22892-052}.
Comparisons among the most
$r$-process-rich stars ([Eu/Fe] $\simeq$ 1) demonstrate that the heavier
stable elements (from Ba and above) are remarkably consistent from star to star
and consistent with the (scaled) Solar System $r$-process distribution.
Detailed comparisons of the REE (along with Hf) among
a well-studied group of $r$-process-rich stars, employing new
experimental atomic data,
strongly support this finding.
The newly determined, and lab-based, stellar abundances are more
precise and show very little scatter from star-to-star and with
respect to the Solar System $r$-process abundances.
This suggests that the $r$-process produced these elements
early in the history of the Galaxy and that the same type(s) of process
was responsible for the synthesis of the $r$-process elements at the
time of the formation of the Solar System.
While the heavier elements appear to have formed from the main
$r$-process and are
apparently consistent with the solar $r$-process abundances,
the lighter $n$-capture element abundances in
these stars do
not conform to the solar pattern.
Until recently there were few data on these elements in such stars,
but with the new detections
of Cd and the growing number of Pd and Ag detections, some patterns are becoming clear.
First, the main $r$-process alone is not responsible for the synthesis of
these lighter $n$-capture elements. Instead, other processes, alone or in
combination, may have been responsible for their formation.
These processes include a so-called
``weak'' $r$-process (with lower values of n$_n$), the LEPP,
the $\nu$-p process, or charged particle
reactions in the HEW of a core-collapse supernova.
It is also not clear whether different processes are responsible for
different mass regions, with one for Ge, a different one for Sr--Zr, and
still another for Pd, Ag, and Cd.
It is also not clear whether these processes operate separately from
each other or in the same site, or whether different mass ranges of
the $n$-capture elements are synthesized in different sites.
Clearly, much more work needs to be undertaken to understand the
formation of these lighter $n$-capture elements.
The stellar abundance signatures of the heaviest of these elements,
i.e., Ba and above, are consistent with the rapid neutron
capture process, $r$-process, but not the $s$-process in these
old stars. Similar conclusions are found for stars in the ancient globular
clusters
with comparable abundance spreads in the $r$-process elements
(see, {\it e.g.}, \citealt{gratton04}, \citealt{sobeck11},
\citealt{roederer11a}, and \citealt{roederer11b}).
There is also a clear distinction between the abundance patterns of the
$r$-process rich stars such as \mbox{CS~22892-052}\ and the $r$-process poor stars like
HD~122563. The latter seem to have an element pattern that was
formed as a result of a ``weak'' or ``incomplete'' $r$-process.
Most of the old, metal-poor halo stars have abundance distributions
that fall between the extremes of \mbox{CS~22892-052}\ and HD~122563.
However, the very presence of
$n$-capture elements in the spectra of these stars
argues for $r$-process events being a common occurrence
early in the
history of
the Galaxy.
Finally, we note the need for additional stellar observations,
particularly of the UV regions of the spectra {\it only} accessible
using STIS or COS aboard HST.
These observations require
high signal-to-noise ratios and high resolution to identify faint lines
in crowded spectral regions. We will also require more laboratory
atomic data for
elements that have not been well studied, in order to improve the precision of
the stellar and solar abundances.
Additional experimental nuclear data,
not yet available,
for the heaviest neutron-rich nuclei
that participate in the $r$-process,
will be critical to these studies. Until such data become available,
new, more physically based
theoretical prescriptions for nuclear masses, half-lives, etc.\
for these $r$-process nuclei will be necessary.
New theoretical models of supernova explosions and detailed synthesis
scenarios, such as might occur in the HEW, will be very important to help to
identify the site or sites
for the $r$-process, a search that has been ongoing since 1957.
\section{Acknowledgments}
We thank our colleagues for all of their contributions and helpful
discussions. We particularly are grateful for all of the
contributions from George W.\ Preston, as he celebrates his
80th birthday. Partial scientific support for this research was
provided by the NSF (grants AST~07-07447 to J.J.C., AST~09-08978 to C.S.,
and AST~09-07732 to J.E.L.).
I.U.R.\ is supported by the Carnegie Institution of Washington
through the Carnegie Observatories Fellowship.
\section{Introduction}
General relativity (GR) is currently the most established theory of gravitation. It correctly describes a number of observations, such as planetary orbits in the Solar System, the motion of masses in the Earth's gravitational field \cite{BETA.Will:2014kxa}, the recently discovered gravitational waves \cite{BETA.Ligo} or the $\Lambda$CDM model in cosmology \cite{BETA.Planck}. However successful on these scales, GR itself does not provide sufficient answers to fundamental open questions such as the reason for the accelerated expansion of the universe, the phase of inflation or the nature of dark matter. Further tension arises from the fact that so far no attempt to extend GR to a full quantum theory has succeeded.
GR is expected to be challenged by different upcoming experiments on ground and in space, such as high precision clocks \cite{BETA.cacciapuoti2011atomic} and atom interferometers in Earth orbits, pulsar timing experiments \cite{BETA.Pulsar} and direct observations of black hole shadows \cite{BETA.Goddi,BETA.Broderick}. This plethora of existing and expected experimental data, together with the tension with cosmological observations, motivates studying alternative theories of gravitation \cite{BETA.Nojiri}. In particular, the upcoming experiments are expected to give more stringent constraints on the parameter spaces of such theories or even find violations of GR's predictions.
One class of alternative theories is that of scalar-tensor theories of gravity (STG)---an extension of GR that contains a scalar degree of freedom in addition to the metric tensor.
The detection of the Higgs boson proved that scalar particles exist in nature \cite{BETA.higgs}, and scalar fields are a popular explanation for inflation \cite{BETA.inflation.guth} and dark energy \cite{BETA.Quintessence.and.the.Rest.of.the.World}.
Further, effective scalar fields can arise, e.g., from compactified extra dimensions \cite{BETA.compactified.extra.dimensions} or string theory \cite{BETA.Damour}.
While the motivation for such alternative theories of gravitation is often related to cosmology, of course any such theory must also pass Solar System tests. The most prominent class of such tests is based on the post-Newtonian limit of the theory under consideration, which is usually discussed in the parametrized post-Newtonian (PPN) framework \cite{BETA.will.book,BETA.Will:2014kxa}.
It allows characterizing theories of gravitation in the weak-field limit in terms of a number of parameters that can be calculated from the field equations of the theory and will, in general, deviate from the values predicted by general relativity. These parameters can be constrained using observational data and experiments \cite{BETA.Fomalont:2009zg,BETA.Bertotti:2003rm,BETA.Hofmann:2010,BETA.Verma:2013ata,BETA.Devi:2011zz}.
In this work we are interested in the parameters $\gamma$ and $\beta$ only, as these are the only parameters that may deviate from their general relativity values in fully conservative gravity theories, a class to which STG belongs~\cite{BETA.will.book}.
The most thoroughly studied standard example of a scalar-tensor theory is Brans-Dicke theory \cite{BETA.Brans-Dicke.1961}, which contains a massless scalar field whose non-minimal coupling to gravity is determined by a single parameter $\omega$. This theory predicts the PPN parameter $\gamma = (1+\omega)/(2+\omega)$, in contrast to $\gamma=1$ in GR. Both theories predict $\beta = 1$. Adding a scalar potential gives the scalar field a mass, which means that its linearized field equation assumes the form of a Klein-Gordon equation, which is solved by a Yukawa potential $\sim e^{-m r}/r$ in the case of a point-like source. In this massive case, the PPN parameter $\gamma$ becomes a function of the radial coordinate $r$ \cite{BETA.Olmo1,*BETA.Olmo2,BETA.Perivolaropoulos}.
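As a rough numerical orientation (ours, not taken from the cited works), the short Python sketch below evaluates the Brans-Dicke prediction $\gamma = (1+\omega)/(2+\omega)$ for a few illustrative values of $\omega$, together with the Yukawa factor $e^{-mr}/r$ that suppresses the scalar contribution of a massive field at large radii; all numbers are purely illustrative.
\begin{verbatim}
# Illustrative sketch (not from the cited references): Brans-Dicke gamma
# and the Yukawa suppression of a massive scalar field.
import numpy as np

def gamma_brans_dicke(omega):
    """PPN parameter gamma in massless Brans-Dicke theory."""
    return (1.0 + omega) / (2.0 + omega)

for omega in [1.0, 10.0, 4.0e4]:
    print(f"omega = {omega:g}: gamma = {gamma_brans_dicke(omega):.6f}")

# Yukawa potential of a point-like source, ~ exp(-m r)/r, illustrating how
# the scalar contribution is suppressed for r >> 1/m.
m = 1.0                          # scalar mass; 1/m sets the interaction range
r = np.array([0.1, 1.0, 10.0])   # arbitrary illustrative radii
print("exp(-m r)/r =", np.exp(-m * r) / r)
\end{verbatim}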
Scalar-tensor theories can be expressed in different but equivalent conformal frames. This means that the form of the general scalar-tensor action is invariant under conformal transformations of the metric, which depend on the value of the scalar field. There are two such frames that are most often considered:
In the Jordan frame, test particles move along geodesics of the frame metric while in the Einstein frame, the scalar field is minimally coupled to curvature.
The PPN parameters $\gamma$ and $\beta$ for scalar-tensor theories with a non-constant coupling have been calculated in the Jordan \cite{BETA.HohmannPPN2013,*BETA.HohmannPPN2013E} and in the Einstein frame \cite{BETA.SchaererPPN2014}.
These works consider a spacetime consisting of a point source surrounded by vacuum.
As will be elucidated below, this assumption leads to problems when it comes to the definition and calculation of the PPN parameter $\beta$.
Applying conformal transformations and scalar field redefinitions allows one to transform STG actions, field equations, and observable quantities between different frames. It is important to note that these different frames are physically equivalent, as they yield the same observable quantities~\cite{BETA.Postma:2014vaa,BETA.Flanagan}. Hence, STG actions which differ only by conformal transformations and field redefinitions should be regarded not as different theories, but as different descriptions of the same underlying theory.
This observation motivates the definition of quantities which are invariant under the aforementioned transformations, and the expression of observable quantities such as the PPN parameters or the slow-roll parameters characteristic of models of inflation fully in terms of these invariants~\cite{BETA.JarvInvariants2015,BETA.KuuskInvariantsMSTG2016,BETA.Jarv:2016sow,BETA.Karam:2017zno}.
The PPN parameters $\gamma$ and $\beta$ were calculated for a point source \cite{BETA.HohmannPPN2013,*BETA.HohmannPPN2013E,BETA.SchaererPPN2014}, and later expressed in terms of invariants~\cite{BETA.JarvInvariants2015,BETA.KuuskInvariantsMSTG2016}. However, the assumption of a point source leads to a number of conceptual problems. The most important of these problems is the fact that, in terms of post-Newtonian potentials, the Newtonian gravitational potential becomes infinite at the location of the source, so that its gravitational self-energy diverges. It is therefore impossible to account for possible observable effects caused by a modified gravitational self-energy of the source in a theory that differs from GR. We therefore conclude that the assumption of a point source is not appropriate for a full application of the PPN formalism to STG. This has been realized earlier in the particular case of STG with screening mechanisms~\cite{BETA.SchaererPPN2014,BETA.Zhang:2016njn}.
The goal of this article is to improve on the previously obtained results for the PPN parameters $\gamma$ and $\beta$ for a general class of scalar-tensor theories, in which the divergent gravitational self-energy has been neglected. Instead of a point mass source, the gravitating mass source we consider in this article is given by a sphere with homogeneous density, pressure and internal energy that is surrounded by vacuum. In this case the gravitational self-energy remains finite, and can therefore be taken into account. During our calculation we do not fix a particular frame, but instead make use of the formalism of invariants mentioned above already from the beginning in order to calculate the effective gravitational constant as well as the PPN parameters $\gamma$ and $\beta$.
The article is structured as follows. In Sec. \ref{sec:theory} we discuss the scalar-tensor theory action, the field equations and the invariants.
The perturbative expansion of relevant terms is outlined in Sec. \ref{sec PPN Expansion} and the expanded field equations are provided in Sec. \ref{sec Expanded field equations}.
Next, in Sec. \ref{sec Massive field and spherical source}, these are solved explicitly for a non-rotating homogeneous sphere and the PPN parameters are derived.
Sec. \ref{sec Comparison to observations} applies our results to observations.
Finally, we conclude with a discussion and outlook in Sec.~\ref{sec Conclusion}.
The main part of our article is supplemented by Appendix~\ref{app coefficients}, in which we list the coefficients appearing in the post-Newtonian field equations and their solutions.
\section{Theory}\label{sec:theory}
We start our discussion with a brief review of the class of scalar-tensor theories we consider. The most general form of the action, in a general frame, is displayed in section~\ref{ssec:action}. We then show the metric and scalar field equations derived from this action in section~\ref{ssec:feqns}. Finally, we provide the definition of the relevant invariant quantities, and express the field equations in terms of these, in section~\ref{ssec:invariants}.
\subsection{Action}\label{ssec:action}
We consider the class of scalar-tensor gravity theories with a single scalar field \(\Phi\) besides the metric tensor \(g_{\mu\nu}\), and no derivative couplings. Its action in a general conformal frame is given by~\cite{BETA.Flanagan}
\ba
\label{BETA.equ: action}
S = \frac{1}{2\kappa^2} \int d^4x \sqrt{-g}
\left\{ \mathcal{A}(\Phi) R - \mathcal{B}(\Phi) g^{\mu\nu} \partial_\mu \Phi \partial_\nu \Phi
- 2 \kappa^2 \mathcal{U}(\Phi)\right\}
+ S_m [e^{2\alpha(\Phi)} g_{\mu\nu},\chi] \,.
\ea
Any particular theory in this class is determined by a choice of the four free functions $\mathcal{A}, \mathcal{B}, \mathcal{U}$ and $\alpha$, each of which depends on the scalar field $\Phi$.
The function $\mathcal{B}$ determines the kinetic energy part of the action. The scalar potential is given by $\mathcal{U}$; a non-vanishing potential may be used to model inflation or a cosmological constant, or to give a mass to the scalar field.
The last part \(S_m\) is the matter part of the action. The matter fields, which we collectively denote by $\chi$, couple to the so-called Jordan frame metric $e^{2\alpha(\Phi)} g_{\mu\nu}$. It is conformally related to the general frame metric $g_{\mu\nu}$. The latter is used to raise and lower indices and determines the spacetime geometry in terms of its Christoffel symbols, Riemann tensor and further derived quantities.
In general, the scalar field is non-minimally coupled to curvature. This coupling is determined by the function $\mathcal{A}(\Phi)$.
There are different common choices of the conformal frame; see~\cite{BETA.JarvInvariants2015} for an overview. In the Jordan frame, one has \(\alpha = 0\) and the matter fields couple directly to the metric \(g_{\mu\nu}\). By a redefinition of the scalar field one may further set $\mathcal{A} \equiv \Phi$. Typically, one considers the coupling function $\omega(\Phi) \equiv \mathcal{B}(\Phi) \Phi$. This particular choice of the parametrization is also known as Brans-Dicke-Bergmann-Wagoner parametrization.
Another possible choice for the conformal frame is the Einstein frame, in which the field couples minimally to curvature, $\mathcal{A} \equiv 1$. However, in this case the matter fields in general do not couple to the frame metric directly, but through a non-vanishing coupling function $\alpha \neq 0$. In this case one may also choose the canonical parametrization $\mathcal{B} \equiv 2$.
We call the scalar field minimally coupled if the Jordan and Einstein frames coincide, i.e., if one can achieve $\mathcal{A} \equiv 1$ and $\alpha \equiv 0$ simultaneously through a conformal transformation of the metric.
\subsection{Field equations}\label{ssec:feqns}
The metric field equations are obtained by varying the action \eqref{BETA.equ: action} with respect to the metric. Written in the trace-reversed form they are
\ba \bs
\label{BETA.equ: tensor field equation trace reversed long}
R_{\mu\nu}
&- \frac{\mathcal{A}'}{\mathcal{A}} \left( \nabla_\mu \nabla_\nu \Phi + \f12 g_{\mu\nu} \square \Phi \right)
- \left( \frac{\mathcal{A}''}{\mathcal{A}} + 2\mathcal{F} - \frac{3 {\mathcal{A}'}^2 }{2 \mathcal{A}^2} \right) \partial_\mu \Phi \partial_\nu \Phi
\\
&- \f12 g_{\mu\nu} \frac{\mathcal{A}''}{\mathcal{A}} g^{\rho\sigma} \partial_\rho \Phi \partial_\sigma \Phi
- \frac{1}{\mathcal{A}} g_{\mu\nu} \kappa^2 \mathcal{U}
= \frac{\kappa^2}{\mathcal{A}} \left( T_{\mu\nu} - \f12 g_{\mu\nu} T \right) \,,
\es \ea
where we use the d'Alembertian $\square X \equiv \nabla^2 X = g^{\mu\nu} \nabla_\mu \nabla_\nu X$ and the notation $X' \equiv \frac{\partial X}{\partial\Phi}$.
Taking the variation with respect to the scalar field gives the scalar field equation
\ba \bs
\label{BETA.equ: scalar field equation}
\mathcal{F} \, \square \Phi
&+ \f12 \left( \mathcal{F}' + 2 \mathcal{F} \frac{\mathcal{A}'}{\mathcal{A}} \right) g^{\mu\nu} \partial_\mu \Phi \partial_\nu \Phi
+ \frac{\mathcal{A}'}{\mathcal{A}^2} \kappa^2 \mathcal{U}
- \frac{1}{2 \mathcal{A}} \kappa^2 \mathcal{U}'
= \kappa^2 \frac{\mathcal{A}' - 2 \mathcal{A} \alpha' }{4 \mathcal{A}^2} T \,.
\es \ea
The function $\mathcal{F}$ introduced on the left hand side is defined by
\ba
\label{BETA.F}
\mathcal{F} \equiv \frac{2 \mathcal{A} \mathcal{B} + 3 {\mathcal{A}'}^2}{4 \mathcal{A}^2} \,.
\ea
Note that these equations simplify significantly in the Einstein frame $\mathcal{A} \equiv 1$ and $\alpha \equiv 0$. We will make use of this fact in the following, when we express the field equations in terms of invariant quantities.
Further, note that the functions $\mathcal{A}$ and $\mathcal{B}$ should be chosen such that $\mathcal{F} > 0$. A negative $\mathcal{F}$ would lead to a kinetic term of the wrong sign in the Einstein frame, i.e., a ghost scalar field, which should be avoided.
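As an elementary consistency check of \eqref{BETA.F}, one may verify symbolically that the Jordan-frame Brans-Dicke choice $\mathcal{A} = \Phi$, $\mathcal{B} = \omega/\Phi$ yields $\mathcal{F} = (2\omega+3)/(4\Phi^2)$, so that the condition $\mathcal{F} > 0$ reduces to the familiar bound $\omega > -3/2$. A minimal SymPy sketch of this check (ours, illustrative only):
\begin{verbatim}
# Illustrative SymPy check: F = (2*A*B + 3*A'^2)/(4*A^2)
# evaluated for the Brans-Dicke choice A = Phi, B = omega/Phi.
import sympy as sp

Phi, omega = sp.symbols('Phi omega', positive=True)
A = Phi
B = omega / Phi
F = (2*A*B + 3*sp.diff(A, Phi)**2) / (4*A**2)
print(sp.simplify(F - (2*omega + 3)/(4*Phi**2)))  # -> 0
\end{verbatim}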
\subsection{Invariants}\label{ssec:invariants}
Given a scalar-tensor theory in a particular frame, it can equivalently be expressed in a different frame by applying a Weyl transformation of the metric tensor $g_{\mu\nu} \rightarrow \bar{g}_{\mu\nu}$
and a reparametrization of the scalar field $\Phi \rightarrow \bar{\Phi}$
\bsub
\label{BETA.equ: transformations}
\ba
\label{BETA.equ: Weyl reparametrization}
g_{\mu\nu} &= e^{2\bar{\gamma}(\bar{\Phi})} \bar{g}_{\mu\nu} \,,
\\
\label{BETA.equ: scalar field redefinition}
\Phi &= \bar{f} (\bar{\Phi}) \,.
\ea
\esub
We defined $\mathcal{F}$ in \eqref{BETA.F} since it transforms as a tensor under scalar field redefinition and is invariant under Weyl transformation,
\ba \bs
\mathcal{F} &= \left( \frac{\partial \bar{\Phi}}{\partial \Phi} \right)^2 \bar{\mathcal{F}} \,.
\es \ea
In order to have a frame independent description, we want to express everything in terms of invariants, i.e., quantities that are invariant under the transformations given above. The matter coupling and the scalar potential can be written in an invariant form by introducing the two invariants~\cite{BETA.JarvInvariants2015}
\bsub
\ba
\mathcal{I}_1(\Phi) = \frac{e^{2\alpha(\Phi)}}{\mathcal{A}(\Phi)} \,,
\\
\mathcal{I}_2(\Phi) = \frac{\mathcal{U}(\Phi)}{\mathcal{A}^2(\Phi)} \,.
\ea
\esub
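The invariance of $\mathcal{I}_1$ and $\mathcal{I}_2$ can be verified directly from the transformation behaviour of the free functions under a Weyl rescaling, which (as assumed in the sketch below, following~\cite{BETA.JarvInvariants2015}) is $\bar{\mathcal{A}} = e^{2\bar{\gamma}}\mathcal{A}$, $\bar{\mathcal{U}} = e^{4\bar{\gamma}}\mathcal{U}$ and $\bar{\alpha} = \alpha + \bar{\gamma}$, while a scalar field redefinition merely composes the free functions with $\bar{f}$. A minimal SymPy check (ours, illustrative only):
\begin{verbatim}
# Illustrative SymPy check that I1 = exp(2*alpha)/A and I2 = U/A**2 are
# unchanged under a Weyl rescaling g = exp(2*gamma_bar) g_bar, under which
# (as assumed here) A -> exp(2*gamma_bar)*A, U -> exp(4*gamma_bar)*U,
# alpha -> alpha + gamma_bar.
import sympy as sp

A, U, alpha, gbar = sp.symbols('A U alpha gamma_bar', positive=True)

I1 = sp.exp(2*alpha) / A
I2 = U / A**2

A_bar, U_bar, alpha_bar = sp.exp(2*gbar)*A, sp.exp(4*gbar)*U, alpha + gbar
I1_bar = sp.exp(2*alpha_bar) / A_bar
I2_bar = U_bar / A_bar**2

print(sp.simplify(I1_bar - I1), sp.simplify(I2_bar - I2))  # -> 0 0
\end{verbatim}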
Given the action in a general frame, we can define the invariant Einstein and Jordan frame metrics by
\bsub
\label{BETA equ: Einstein and Jordan frame metric}
\ba
\label{BETA equ: Einstein frame metric}
g^{\mathfrak{E}}_{\mu\nu} := \mathcal{A}(\Phi) g_{\mu\nu} \,,
\\
\label{BETA equ: Jordan frame metric}
g^{\mathfrak{J}}_{\mu\nu} := e^{2\alpha(\Phi)} g_{\mu\nu}\,,
\ea
\esub
which are related by
\ba
\label{BETA equ: Einstein Jordan frame metric relation}
g^{\mathfrak{J}}_{\mu\nu} = \mathcal{I}_1 g^{\mathfrak{E}}_{\mu\nu} \,.
\ea
Note that if the action is already given in the Einstein frame, the metric coincides with the Einstein frame metric defined above, $g_{\mu\nu} = g^{\mathfrak{E}}_{\mu\nu}$, and the same holds for the Jordan frame.
We define the Einstein frame metric \eqref{BETA equ: Einstein frame metric} as it significantly simplifies the field equations.
The metric field equations reduce to
\ba
\label{equ: full metric field equation E-frame}
R^{\mathfrak{E}}_{\mu\nu} - 2 \mathcal{F} \, \partial_{\mu}\Phi \partial_{\nu} \Phi - \kappa^{2}g^{\mathfrak{E}}_{\mu\nu}\mathcal{I}_2 = \kappa^2 \bar{T}^{\mathfrak{E}}_{\mu\nu}\,,
\ea
where
\ba
\bar{T}^{\mathfrak{E}}_{\mu\nu} = T^{\mathfrak{E}}_{\mu\nu} - \frac{1}{2}g^{\mathfrak{E}}_{\mu\nu}T^{\mathfrak{E}}\,, \quad T^{\mathfrak{E}} = g^{\mathfrak{E}\,\mu\nu}T^{\mathfrak{E}}_{\mu\nu} = \frac{T}{\mathcal{A}^2}\,, \quad T^{\mathfrak{E}}_{\mu\nu} = \frac{T_{\mu\nu}}{\mathcal{A}}\,.
\ea
is the trace-reversed energy-momentum tensor in the Einstein frame. It is invariant under conformal transformations and field redefinitions, since the left-hand side of the field equations~\eqref{equ: full metric field equation E-frame} is invariant as well. Note that we use the invariant Einstein metric \(g^{\mathfrak{E}}_{\mu\nu}\) for taking the trace and moving indices here, in order to retain the invariance of this tensor. For later use, we also define the invariant Jordan frame energy-momentum tensor
\ba
\bar{T}^{\mathfrak{J}}_{\mu\nu} = T^{\mathfrak{J}}_{\mu\nu} - \frac{1}{2}g^{\mathfrak{J}}_{\mu\nu}T^{\mathfrak{J}}\,, \quad T^{\mathfrak{J}} = g^{\mathfrak{J}\,\mu\nu}T^{\mathfrak{J}}_{\mu\nu} = \frac{T}{e^{4\alpha(\Phi)}}\,, \quad T^{\mathfrak{J}}_{\mu\nu} = \frac{T_{\mu\nu}}{e^{2\alpha(\Phi)}}\,.
\ea
Similarly, in terms of invariant quantities the scalar field equation \eqref{BETA.equ: scalar field equation} becomes
\ba
\label{equ: full scalar field equation}
\mathcal{F} g^{\mathfrak{E}\,\mu\nu} \partial_{\mu}\partial_{\nu}\Phi
- \mathcal{F} g^{\mathfrak{E}\,\mu\nu} \Gamma^{\mathfrak{E}\,\rho}{}_{\nu\mu}\partial_{\rho}\Phi
+ \frac{\mathcal{F}'}{2} g^{\mathfrak{E}\,\mu\nu} \partial_{\mu}\Phi \partial_{\nu}\Phi
- \frac{\kappa^2}{2}{\mathcal{I}_{2}}'
= -\f14 \kappa^2 {(\ln\mathcal{I}_1)}' T^{\mathfrak{E}}\,.
\ea
These are the field equations we will be working with. In order to solve them in a post-Newtonian approximation, we will perform a perturbative expansion of the dynamical fields around a flat background solution. This will be done in the following section.
\section{PPN formalism and expansion of terms}
\label{sec PPN Expansion}
In the preceding section we have expressed the field equations of scalar-tensor gravity completely in terms of invariant quantities. In order to solve these field equations in a post-Newtonian approximation, we make use of the well known PPN formalism. Since we are dealing with different invariant metrics and their corresponding conformal frames, we briefly review the relevant parts of the PPN formalism for this situation. We start by introducing velocity orders in section~\ref{ssec:velorder}. These are used to define the PPN expansions of the scalar field in section~\ref{ssec:ppnscalar}, the invariant metrics in section~\ref{ssec:ppnmetric}, the energy-momentum tensor in section~\ref{ssec:ppnenmom} and the Ricci tensor in section~\ref{ssec:ppnricci}.
\subsection{Slow-moving source matter and velocity orders}
\label{ssec:velorder}
The starting point of the PPN formalism is the assumption of perfect fluid matter, for which the (Jordan frame) energy-momentum tensor is given by
\ba
T^{\mathfrak{J}\,\mu\nu} = \left( \rho + \rho \Pi + p \right) u^\mu u^\nu + p g^{\mathfrak{J}\,\mu\nu} \,.
\ea
Since test particles fall on geodesics of the Jordan frame metric, we consider this as the `physical metric' and we define mass density $\rho$, pressure $p$ and specific internal energy $\Pi$ in this frame.
By $u^\mu$ we denote the four-velocity, normalized such that $u^\mu u_\mu = -1$, where indices are raised and lowered using the Jordan frame metric $g^{\mathfrak{J}}_{\mu\nu}$.
We now consider the PPN framework to expand and solve the field equations up to the first post-Newtonian order. For this purpose we assume that the source matter is slow-moving, $v^i = u^i/u^0 \ll 1$. We use this assumption to expand all dynamical quantities in velocity orders $\mathcal{O}(n) \sim |\vec{v}|^n$.
Note that $\rho$ and $\Pi$ each contribute at order $\mathcal{O}(2)$, while $p$ contributes at $\mathcal{O}(4)$. The velocity terms $v^i$ are, obviously, of order $\mathcal{O}(1)$. We finally assume a quasi-static solution, where any time evolution is caused by the motion of the source matter. Hence, each time derivative $\partial_0 \sim \mathcal{O}(1)$ increases the velocity order of a term by one.
\subsection{PPN expansion of the scalar field}
\label{ssec:ppnscalar}
We now expand the scalar field around its cosmological background value $\Phi_0$ in terms of velocity orders,
\ba
\Phi = \Phi_0 + \phi
= \Phi_0 + \order{\phi}{2} + \order{\phi}{4} + \mathcal{O}{(6)}\,,
\ea
where $\order{\phi}{2}$ is of order $\mathcal{O}{(2)}$ and $\order{\phi}{4}$ is of order $\mathcal{O}{(4)}$. Other velocity orders either vanish due to conservation laws or are not relevant for the PPN calculation.
Any function of the scalar field $\mathcal{X}(\Phi)$ can then be expanded in a Taylor series as
\ba
\bs
\mathcal{X}(\Phi)
&= \mathcal{X}(\Phi_0) + \mathcal{X}'(\Phi_0) \phi + \f12 \mathcal{X}''(\Phi_0) \phi^2 + \mathcal{O}(6)
\\
&= \mathcal{X}(\Phi_0) + \mathcal{X}'(\Phi_0) \order{\phi}{2}
+ \left[ \mathcal{X}'(\Phi_0) \order{\phi}{4} + \f12 \mathcal{X}''(\Phi_0) \order{\phi}{2} \,\, \order{\phi}{2} \right]
+ \mathcal{O}{(6)} \,.
\es
\ea
For convenience, we denote the Taylor expansion coefficients, which are given by the values of the functions and their derivatives evaluated at the background value, in the form
$F \equiv \mathcal{F}(\Phi_0)$,
$F' \equiv \mathcal{F}'(\Phi_0)$,
$I_1 \equiv \mathcal{I}_1(\Phi_0)$,
$I_1' \equiv \mathcal{I}_1'(\Phi_0)$,
$I_1'' \equiv \mathcal{I}_1''(\Phi_0)$,
and similarly for all functions of the scalar field.
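As an illustration of this bookkeeping, one may let a formal parameter $\epsilon$ track velocity orders, with $\order{\phi}{2} \sim \epsilon^2$ and $\order{\phi}{4} \sim \epsilon^4$, and expand a generic function given by its Taylor coefficients; the short SymPy sketch below (ours, purely illustrative) reproduces the grouping of terms shown above.
\begin{verbatim}
# Illustrative SymPy sketch: track velocity orders with a formal parameter eps,
# phi2 ~ eps^2 and phi4 ~ eps^4, and expand a generic function X(Phi) given by
# its Taylor coefficients X0 = X(Phi0), X1 = X'(Phi0), X2 = X''(Phi0).
import sympy as sp

eps, phi2, phi4, dPhi = sp.symbols('epsilon phi2 phi4 dPhi')
X0, X1, X2 = sp.symbols('X0 X1 X2')

X = X0 + X1*dPhi + sp.Rational(1, 2)*X2*dPhi**2        # Taylor polynomial around Phi0
expansion = sp.expand(X.subs(dPhi, eps**2*phi2 + eps**4*phi4))

print(expansion.coeff(eps, 2))   # X1*phi2
print(expansion.coeff(eps, 4))   # X1*phi4 + X2*phi2**2/2
# terms of order eps^6 and higher lie beyond the first post-Newtonian order
\end{verbatim}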
\subsection{PPN expansion of the metric tensors}
\label{ssec:ppnmetric}
In the next step, we assume that the Jordan frame metric, which governs the geodesic motion of test masses, is asymptotically flat, and can be expanded around a Minkowski vacuum solution in suitably chosen Cartesian coordinates. The expansion of the Jordan frame metric components up to the first post-Newtonian order is then given by
\begin{subequations}\label{eqn:metricjppn}
\begin{align}
g^{\mathfrak{J}}_{00} &= -1 + \order{h}{2}^{\mathfrak{J}}_{00} + \order{h}{4}^{\mathfrak{J}}_{00} + \mathcal{O}(6)\,,\\
g^{\mathfrak{J}}_{0i} &= \order{h}{3}^{\mathfrak{J}}_{0i} + \mathcal{O}(5)\,,\\
g^{\mathfrak{J}}_{ij} &= \delta_{ij} + \order{h}{2}^{\mathfrak{J}}_{ij} + \mathcal{O}(4)\,.
\end{align}
\end{subequations}
It can be shown that these are all relevant and non-vanishing components.
A similar expansion of the Einstein frame metric \(g^{\mathfrak{E}}_{\mu\nu}\) can be defined as
\begin{subequations}\label{eqn:metriceppn}
\begin{align}
I_1g^{\mathfrak{E}}_{00} &= -1 + \order{h}{2}^{\mathfrak{E}}_{00} + \order{h}{4}^{\mathfrak{E}}_{00} + \mathcal{O}(6)\,,\\
I_1g^{\mathfrak{E}}_{0i} &= \order{h}{3}^{\mathfrak{E}}_{0i} + \mathcal{O}(5)\,,\\
I_1g^{\mathfrak{E}}_{ij} &= \delta_{ij} + \order{h}{2}^{\mathfrak{E}}_{ij} + \mathcal{O}(4)\,.
\end{align}
\end{subequations}
The factors of $I_1$ on the left-hand sides are required in order to satisfy \eqref{BETA equ: Einstein Jordan frame metric relation}.
The expansion coefficients in the two frames are then related by
\begin{subequations}
\begin{align}
\order{h}{2}^{\mathfrak{E}}_{00} &= \order{h}{2}^{\mathfrak{J}}_{00}
+ \frac{I_{1}'}{I_1}\order{\phi}{2}\,,\\
\order{h}{2}^{\mathfrak{E}}_{ij} &= \order{h}{2}^{\mathfrak{J}}_{ij}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \delta_{ij}\,,\\
\order{h}{3}^{\mathfrak{E}}_{0i} &= \order{h}{3}^{\mathfrak{J}}_{0i}\,,\\
\order{h}{4}^{\mathfrak{E}}_{00} &= \order{h}{4}^{\mathfrak{J}}_{00}
+ \frac{I_{1}'}{I_1}\order{\phi}{4} + \frac{I_1I_{1}'' - 2I_{1}' I_{1}'}{2I_1^2}\order{\phi}{2} \, \order{\phi}{2}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \,\order{h}{2}^{\mathfrak{J}}_{00}\,,
\end{align}
\end{subequations}
as one easily checks.
Conversely, one finds the inverse relations
\begin{subequations}
\label{BETA.equ:metric E to J frame}
\begin{align}
\order{h}{2}^{\mathfrak{J}}_{00} &= \order{h}{2}^{\mathfrak{E}}_{00}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \,,\\
\order{h}{2}^{\mathfrak{J}}_{ij} &= \order{h}{2}^{\mathfrak{E}}_{ij}
+ \frac{I_{1}'}{I_1}\order{\phi}{2} \delta_{ij}\,,\\
\order{h}{3}^{\mathfrak{J}}_{0i} &= \order{h}{3}^{\mathfrak{E}}_{0i}\,,\\
\order{h}{4}^{\mathfrak{J}}_{00} &= \order{h}{4}^{\mathfrak{E}}_{00}
- \frac{I_{1}'}{I_1}\order{\phi}{4} - \frac{I_{1}''}{2I_1}\order{\phi}{2} \, \order{\phi}{2}
+ \frac{I_{1}'}{I_1}\order{\phi}{2} \, \order{h}{2}^{\mathfrak{E}}_{00}\,.
\end{align}
\end{subequations}
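These relations follow from expanding \(g^{\mathfrak{J}}_{\mu\nu} = \mathcal{I}_1 g^{\mathfrak{E}}_{\mu\nu}\) order by order. The following SymPy sketch (ours, illustrative, restricted to the \(00\)-component) verifies the stated relations for \(\order{h}{2}_{00}\) and \(\order{h}{4}_{00}\):
\begin{verbatim}
# Illustrative SymPy check (00-component only): expanding g^J_00 = I1(Phi)*g^E_00
# order by order reproduces the stated relations between the Jordan- and
# Einstein-frame metric perturbations.
import sympy as sp

eps = sp.symbols('epsilon')
I1, I1p, I1pp = sp.symbols('I1 I1p I1pp')                # I1, I1', I1'' at Phi0
phi2, phi4, hE2, hE4 = sp.symbols('phi2 phi4 hE2 hE4')   # velocity-order perturbations

phi = eps**2*phi2 + eps**4*phi4
I1_of_Phi = I1 + I1p*phi + sp.Rational(1, 2)*I1pp*phi**2   # Taylor expansion of I1(Phi)
gE00 = (-1 + eps**2*hE2 + eps**4*hE4) / I1                 # from I1*g^E_00 = -1 + hE2 + hE4

gJ00 = sp.expand(I1_of_Phi * gE00)
hJ2 = gJ00.coeff(eps, 2)
hJ4 = gJ00.coeff(eps, 4)

print(sp.simplify(hJ2 - (hE2 - I1p/I1*phi2)))   # -> 0
print(sp.simplify(hJ4 - (hE4 - I1p/I1*phi4
                         - I1pp/(2*I1)*phi2**2 + I1p/I1*phi2*hE2)))  # -> 0
\end{verbatim}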
\subsection{PPN expansion of the energy-momentum tensors}
\label{ssec:ppnenmom}
We now come to the PPN expansion of the energy-momentum tensors. Here we restrict ourselves to displaying the expansion of the invariant energy-momentum tensor in the Einstein frame, since this is the frame we will be using for solving the field equations. It is related to the invariant Jordan frame energy-momentum tensor by
\(T^{\mathfrak{E}}_{\mu\nu} = \mathcal{I}_1T^{\mathfrak{J}}_{\mu\nu}\).
Its PPN expansion follows from the standard PPN expansion of the energy-momentum tensor in the Jordan frame~\cite{BETA.will.book} and is given by
\begin{subequations}
\begin{align}
T^{\mathfrak{E}}_{00} &= I_1\rho\left(1 + \frac{2 I_{1}'}{I_1}\order{\phi}{2} - \order{h}{2}^{\mathfrak{E}}_{00} + v^2 + \Pi\right) + \mathcal{O}(6)\,,\\
T^{\mathfrak{E}}_{0i} &= - I_1\rho v_i + \mathcal{O}(5)\,,\\
T^{\mathfrak{E}}_{ij} &= I_1(\rho v_iv_j + p\delta_{ij}) + \mathcal{O}(6)\,.
\end{align}
\end{subequations}
Its trace, taken using the Einstein frame metric, has the PPN expansion
\begin{equation}
T^{\mathfrak{E}} = I_1^2\left(-\rho + 3p - \Pi\rho - 2 \frac{I_{1}'}{I_1}\rho\order{\phi}{2}\right) \,.
\end{equation}
Consequently, the trace-reversed energy-momentum tensor is given by
\begin{subequations}
\begin{align}
\bar{T}^{\mathfrak{E}}_{00} &= I_1\rho\left(\frac{1}{2} + \frac{I_{1}'}{I_1}\order{\phi}{2} - \frac{\order{h}{2}^{\mathfrak{E}}_{00}}{2} + v^2 + \frac{\Pi}{2} + \frac{3p}{2\rho}\right) + \mathcal{O}(6)\,,\\
\bar{T}^{\mathfrak{E}}_{0i} &= - I_1\rho v_i + \mathcal{O}(5)\,,\\
\bar{T}^{\mathfrak{E}}_{ij} &= I_1\rho\left[v_iv_j + \frac{\order{h}{2}^{\mathfrak{E}}_{ij}}{2} + \left(\frac{1}{2} + \frac{I_{1}'}{I_1}\order{\phi}{2} + \frac{\Pi}{2} - \frac{p}{2\rho}\right)\delta_{ij}\right] + \mathcal{O}(6)\,.
\end{align}
\end{subequations}
\subsection{Invariant Ricci tensor}
\label{ssec:ppnricci}
Finally, we come to the PPN expansion of the Ricci tensor of the invariant Einstein metric. We will do this in a particular gauge, which is determined by the gauge conditions
\bsub
\ba
h^{\mathfrak{E}}_{ij,j} - h^{\mathfrak{E}}_{0i,0} - \frac{1}{2}h^{\mathfrak{E}}_{jj,i} + \frac{1}{2}h^{\mathfrak{E}}_{00,i} = 0 \,,
\\
h^{\mathfrak{E}}_{ii,0} = 2h^{\mathfrak{E}}_{0i,i} \,,
\ea
\esub
which will simplify the calculation. In this gauge, the components of the Ricci tensor to the orders that will be required are given by
\bsub
\ba
\order{R}{2}^{\mathfrak{E}}_{00} &= -\frac{1}{2}\triangle\order{h}{2}^{\mathfrak{E}}_{00}\,,
\\
\order{R}{2}^{\mathfrak{E}}_{ij} &= -\frac{1}{2}\triangle\order{h}{2}^{\mathfrak{E}}_{ij}\,,
\\
\order{R}{3}^{\mathfrak{E}}_{0i} &= -\frac{1}{2}\left(\triangle\order{h}{3}^{\mathfrak{E}}_{0i} + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,0i} - \order{h}{2}^{\mathfrak{E}}_{ij,0j}\right)\,,
\\
\order{R}{4}^{\mathfrak{E}}_{00} &= -\frac{1}{2}\triangle\order{h}{4}^{\mathfrak{E}}_{00} + \order{h}{3}^{\mathfrak{E}}_{0i,0i} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{ii,00} + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\left(\order{h}{2}^{\mathfrak{E}}_{ij,j} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,i} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\right) + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{ij}\order{h}{2}^{\mathfrak{E}}_{00,ij}\,.
\ea
\esub
We now have expanded all dynamical quantities which appear in the field equations into velocity orders. By inserting these expansions into the field equations, we can perform a similar expansion of the field equations, and decompose them into different velocity orders. This will be done in the next section.
\section{Expanded field equations}
\label{sec Expanded field equations}
We will now make use of the PPN expansions displayed in the previous section and insert them into the field equations. This will yield us a system of equations, which are expressed in terms of the metric and scalar field perturbations that we aim to solve for. We start with the zeroth order field equations in section~\ref{ssec:eqns0}, which are the equations for the Minkowski background, and will give us conditions on the invariant potential \(\mathcal{I}_2\). We then proceed with the second order metric equation in section~\ref{ssec:eqnsh2}, the second order scalar equation in section~\ref{ssec:eqnsp2}, the third order metric equation in section~\ref{ssec:eqnsh3}, the fourth order metric equation in section~\ref{ssec:eqnsh4} and finally the fourth order scalar equation in section~\ref{ssec:eqnsp4}.
\subsection{Zeroth order metric and scalar equations}
\label{ssec:eqns0}
At the zeroth velocity order, the metric equations \eqref{equ: full metric field equation E-frame} are given by
\begin{equation}\label{eqn:h0mn}
-\kappa^2\frac{I_2}{I_1}\eta_{\mu\nu} = 0\,,
\end{equation}
which is satisfied only for \(I_2 = 0\), and hence restricts the choice of the invariant potential \(\mathcal{I}_2\).
At the same velocity order, the scalar equation reads
\begin{equation}\label{eqn:phi0}
-\frac{\kappa^2}{2}I_{2}' = 0 \,,
\end{equation}
and is solved only by \(I_{2}' = 0\), so that we obtain another restriction on the allowed potential $\mathcal{I}_2$. In the following, we will only consider theories in which these conditions on $\mathcal{I}_2$ are satisfied.
\subsection{Second order metric $h^{\mathfrak{E}}_{00}$ and $h^{\mathfrak{E}}_{ij}$}
\label{ssec:eqnsh2}
At the second velocity order we find the $00$-metric field equation
\begin{equation}
\order{R}{2}^{\mathfrak{E}}_{00}
- \kappa^2\frac{I_2}{I_1}\order{h}{2}^{\mathfrak{E}}_{00}
+ \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2}
= \frac{\kappa^2}{2}I_1\rho \,.
\end{equation}
Inserting the expansion of the Ricci tensor shown in section~\ref{ssec:ppnricci} and using \(I_2 = 0\) and \(I_{2}' = 0\) we solve for \(\order{h}{2}^{\mathfrak{E}}_{00}\) and find the Poisson equation
\begin{equation}
\label{eqn:h200}
\triangle \order{h}{2}^{\mathfrak{E}}_{00} = -\kappa^2I_1\rho = -8\pi G\rho\,,
\end{equation}
where we introduced the Newtonian gravitational constant
\ba
\label{equ: Newtonian gravitational constant}
G = \frac{\kappa^2I_1}{8\pi}\,.
\ea
The $ij$-equations at the same order are given by
\ba
\order{R}{2}^{\mathfrak{E}}_{ij}
- \kappa^2\frac{I_2}{I_1}\order{h}{2}^{\mathfrak{E}}_{ij}
- \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2}\delta_{ij}
= \frac{\kappa^2}{2}I_1\rho\delta_{ij} \,,
\ea
which similarly reduces to
\ba
\label{eqn:h2ij}
\triangle\order{h}{2}^{\mathfrak{E}}_{ij} = -\kappa^2I_1\rho\delta_{ij} = -8\pi G\rho\delta_{ij}\,.
\ea
Note that the diagonal components $i=j$ satisfy the same equation~\eqref{eqn:h200} as \(\order{h}{2}^{\mathfrak{E}}_{00}\).
\subsection{Second order scalar field $\phi$}
\label{ssec:eqnsp2}
The second order scalar field equation is given by
\ba
I_1 F \triangle\order{\phi}{2}
- \frac{\kappa^2}{2}I_{2}''\order{\phi}{2}
= \frac{\kappa^2}{4}I_1I_{1}'\rho\,.
\ea
It is convenient to introduce the scalar field mass $m$ by
\ba
\label{equ: scalar mass}
m^2 &\equiv \frac{\kappa^2}{2} \frac{1}{I_1 F} I_{2}''
\ea
and
\ba
k &= \frac{\kappa^2}{4} \frac{1}{F} I_{1}' \,.
\ea
We assume that $m^2 > 0$, since otherwise the scalar field would be a tachyon.
Then, the second order scalar field equation takes the form of a screened Poisson equation,
\ba
\label{eqn:phi2}
\triangle\order{\phi}{2} - m^2 \order{\phi}{2} = k \rho\,.
\ea
We will see that $m$ can be interpreted as the mass of the scalar field, while $k$ is a measure for the non-minimal coupling of the scalar field at the linear level. We finally remark that \(m\) is an invariant, while \(k\) transforms as a tangent vector to the real line of scalar field values~\cite{BETA.JarvInvariants2015}.
\subsection{Third order metric $h^{\mathfrak{E}}_{0i}$}
\label{ssec:eqnsh3}
The third order metric equation reads
\begin{equation}
\order{R}{3}^{\mathfrak{E}}_{0i} - \kappa^2\frac{I_2}{I_1}\order{h}{3}^{\mathfrak{E}}_{0i} = -\kappa^2I_1\rho v_i \,.
\end{equation}
Thus we can solve for the third order metric perturbation and obtain another Poisson equation,
\begin{equation}\label{eqn:h30i}
\triangle\order{h}{3}^{\mathfrak{E}}_{0i} = \order{h}{2}^{\mathfrak{E}}_{ij,0j} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,0i} + 2\kappa^2I_1\rho v_i\,.
\end{equation}
Note that the source terms on the right hand side of this equation are given by time derivatives of other metric components and moving source matter, and hence vanish for static solutions and non-moving sources.
\subsection{Fourth order metric $h^{\mathfrak{E}}_{00}$}
\label{ssec:eqnsh4}
The fourth order metric field equation reads
\begin{equation}
\bs
\order{R}{4}^{\mathfrak{E}}_{00}
- \kappa^2\frac{I_2}{I_1}\order{h}{4}^{\mathfrak{E}}_{00}
+ \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{4}
- \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2} \; \order{h}{2}^{\mathfrak{E}}_{00}
+ \frac{\kappa^2}{2}\frac{I_{2}''}{I_1}\order{\phi}{2} \; \order{\phi}{2}
\\
= \frac{\kappa^2}{2}I_1\rho\left(2\frac{I_{1}'}{I_1}\order{\phi}{2}
- \order{h}{2}^{\mathfrak{E}}_{00}
+ 2v^2 + \Pi + 3\frac{p}{\rho}\right)\,.
\es
\end{equation}
Solving for the fourth order metric perturbation then yields
\begin{equation}\label{eqn:h400}
\begin{split}
\triangle\order{h}{4}^{\mathfrak{E}}_{00}
&= 2\order{h}{3}^{\mathfrak{E}}_{0i,0i}
- \order{h}{2}^{\mathfrak{E}}_{ii,00}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\left(\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,i}
- \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\right)
+ \order{h}{2}^{\mathfrak{E}}_{ij}\order{h}{2}^{\mathfrak{E}}_{00,ij}\\
&\phantom{=}+ \kappa^2\left(\frac{I_{2}''}{I_1}\order{\phi}{2} \; \order{\phi}{2}
- 2I_{1}'\order{\phi}{2} \rho
+ I_1\order{h}{2}^{\mathfrak{E}}_{00}\rho
- 2 I_1 v^2 \rho - I_1 \Pi \rho - 3 I_1 p \right)\,.
\end{split}
\end{equation}
This equation also has the form of a Poisson equation.
\subsection{Fourth order scalar field $\phi$}
\label{ssec:eqnsp4}
Finally, for the scalar field we have the fourth order equation
\begin{multline}
I_1 F \triangle\order{\phi}{4}
- I_1 F \order{\phi}{2}_{,00}
- \frac{\kappa^2}{2}I_{2}''\order{\phi}{4}
- I_1 F \order{\phi}{2}_{,ij}\order{h}{2}^{\mathfrak{E}}_{ij}
+ I_1 F' \triangle\order{\phi}{2} \; \order{\phi}{2}
\\
+ \frac{I_1}{2} F' \order{\phi}{2}_{,i}\order{\phi}{2}_{,i}
+ \frac{I_1}{2} F \order{\phi}{2}_{,i}\left(2\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \order{h}{2}^{\mathfrak{E}}_{jj,i}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\right)
- \frac{\kappa^2}{4}I_{2}'''\order{\phi}{2} \; \order{\phi}{2} \\
= -\frac{\kappa^2}{4}\left[3I_1I_{1}'p
- I_1 I_1' \Pi\rho
- (I_{1}' I_{1}'
+ I_1 I_1'')\order{\phi}{2} \rho \right] \,.
\end{multline}
Solving for the fourth order scalar perturbation then yields
\begin{equation}\label{eqn:phi4}
\begin{split}
\triangle\order{\phi}{4}
- m^2 \order{\phi}{4}
&= \order{\phi}{2}_{,00}
+ \order{\phi}{2}_{,ij} \order{h}{2}^{\mathfrak{E}}_{ij}
- \frac{1}{2}\order{\phi}{2}_{,i}\left(2\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \order{h}{2}^{\mathfrak{E}}_{jj,i}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\right)
- \frac{F'}{F} \left[ \triangle\order{\phi}{2} \; \order{\phi}{2}
+ \frac{1}{2} \order{\phi}{2}_{,i} \order{\phi}{2}_{,i}\right]\\
&\phantom{=}+ \frac{\kappa^2}{4} \f1F \left[\frac{I_{2}'''}{I_1}
\order{\phi}{2} \; \order{\phi}{2}
- 3 I_{1}' p + I_{1}' \Pi \rho + \left(\frac{({I_1}')^2}{I_1}
+ {I_1}'' \right) \order{\phi}{2} \rho\right]\,.
\end{split}
\end{equation}
This is again a screened Poisson equation, which contains the same mass parameter \(m\) as the second order scalar field equation~\eqref{eqn:phi2}.
These are all the equations needed to determine the relevant perturbations of the invariant Einstein frame metric and the scalar field. We will solve them in the next section, under the assumption of a massive scalar field, \(m > 0\), and a static, homogeneous, spherically symmetric source mass.
\section{Massive field and spherical source}
\label{sec Massive field and spherical source}
In the previous section we derived the gravitational field equations up to the required post-Newtonian order. We will now solve these field equations for the special case of a homogeneous, non-rotating spherical mass distribution. This mass distribution, as well as the corresponding ansatz for the PPN metric perturbation and the PPN parameters, are defined in section~\ref{ssec:homosphere}. We then solve the field equations by increasing order. The second order equations for the invariant Einstein frame metric and the scalar field are solved in sections~\ref{ssec:solh2} and~\ref{ssec:solp2}, while the corresponding fourth order equations are solved in sections~\ref{ssec:solh4} and~\ref{ssec:solp4}. From these solutions we read off the effective gravitational constant as well as the PPN parameters \(\gamma\) and \(\beta\) in section~\ref{sec PPN parameters}. A few limiting cases of this result are discussed in section~\ref{ssec:limits}.
\subsection{Ansatz for homogeneous, spherical mass source}
\label{ssec:homosphere}
In the following we consider a static sphere of radius $R$ with homogeneous rest mass density, pressure and specific internal energy, surrounded by vacuum. Its density \(\rho\), pressure \(p\) and specific internal energy \(\Pi\) are then given by
\ba\label{eqn:homosource}
\rho(r) =
\begin{cases}
\rho_0 & \text{if } r \leq R\\
0, & \text{if } r > R\\
\end{cases} \,,
\quad
p(r) =
\begin{cases}
p_0 & \text{if } r \leq R\\
0, & \text{if } r > R\\
\end{cases} \,,
\quad
\Pi(r) =
\begin{cases}
\Pi_0 & \text{if } r \leq R\\
0, & \text{if } r > R\\
\end{cases} \,,
\ea
where \(r\) is the radial coordinate and we use isotropic spherical coordinates. We further assume that the mass source is non-rotating and at rest with respect to our chosen coordinate system, so that the velocity \(v^i\) vanishes.
For the metric perturbation corresponding to this matter distribution, which is likewise spherically symmetric, we now use the ansatz
\begin{subequations}
\label{BETA.equ:PPN metric ansatz}
\begin{align}
\label{BETA.equ:PPN metric ansatz h200}
\order{h}{2}^{\mathfrak{J}}_{00} &= 2 G_\text{eff} U
\,,\\
\label{BETA.equ:PPN metric ansatz h2ij}
\order{h}{2}^{\mathfrak{J}}_{ij} &= 2 \gamma G_\text{eff} U \delta_{ij}
\,,\\
\label{BETA.equ:PPN metric ansatz h30i}
\order{h}{3}^{\mathfrak{J}}_{0i} &= 0
\,,\\
\label{BETA.equ:PPN metric ansatz h400}
\order{h}{4}^{\mathfrak{J}}_{00} &= -2 \beta G_\text{eff}^2 U^2
+ 2 G_\text{eff}^2 (1+3 \gamma-2 \beta) \Phi_2
+ G_\text{eff}(2\Phi_3 +6 \gamma \Phi_4)
\,.
\end{align}
\end{subequations}
Here \(U, \Phi_2, \Phi_3, \Phi_4\) denote the standard PPN potentials, which satisfy the Poisson equations~\cite{BETA.will.book}
\bsub
\label{BETA.equ: Poisson equ potentials}
\ba
\label{BETA.equ: Poisson equ U}
\triangle U &= - 4 \pi \rho \,,
\\
\label{BETA.equ: Poisson equ Phi_2}
\triangle \Phi_2 &= - 4 \pi U \rho \,,
\\
\label{BETA.equ: Poisson equ Phi_3}
\triangle \Phi_3 &= - 4 \pi \rho \Pi \,,
\\
\label{BETA.equ: Poisson equ Phi_4}
\triangle \Phi_4 &= - 4 \pi p \,.
\ea
\esub
For the homogeneous, spherically symmetric mass source we consider they are given by
\bsub
\ba
U(r)
&= \begin{cases}
- \frac{M}{2 R^3}(r^2 - 3 R^2)
& \text{if } r \leq R
\\
\frac{M}{r}
& \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_2 &=
\begin{cases}
\frac{3 M^2}{40 R^6}(r^2 - 5 R^2)^2 & \text{if } r \leq R
\\
\frac{6 M^2}{5 R r} & \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_3 &=
\begin{cases}
-\frac{M \Pi_0}{2 R^3} (r^2 - 3 R^2) & \text{if } r \leq R
\\
\frac{M \Pi_0}{r} & \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_4 &=
\begin{cases}
-\frac{2\pi p_0}{3} (r^2 - 3 R^2) & \text{if } r \leq R
\\
\frac{4 \pi p_0 R^3}{3 r} & \text{if } r > R
\\
\end{cases} \,,
\ea
\esub
where \(M = \frac{4\pi}{3}\rho_0R^3\) is the total mass. The metric ansatz~\eqref{BETA.equ:PPN metric ansatz} further depends on the effective gravitational constant \(G_{\text{eff}}\) and the PPN parameters \(\gamma\) and \(\beta\). These quantities, which are sufficient to describe the post-Newtonian limit of a fully conservative theory, i.e., a theory without preferred location or preferred frame effects, are determined by the particular theory under consideration. Note that these parameters are, in general, not constant, if one considers a massive scalar field, as we will do in the following.
We finally remark that in the ansatz~\eqref{BETA.equ:PPN metric ansatz} we have used the perturbations of the invariant Jordan frame metric \(g^{\mathfrak{J}}_{\mu\nu}\) defined in~\eqref{BETA equ: Jordan frame metric}. This choice is related to the fact that the matter coupling, and hence the geodesic motion of test particles from which the PPN parameters are determined, is given by \(g^{\mathfrak{J}}_{\mu\nu}\).
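As a simple cross-check (ours, not part of the derivation), the interior and exterior branches of \(U\) and \(\Phi_2\) given above can be verified against the Poisson equations~\eqref{BETA.equ: Poisson equ potentials} using the radial Laplacian \(\triangle f = r^{-2}(r^2 f')'\):
\begin{verbatim}
# Illustrative SymPy check: the piecewise potentials U and Phi_2 of the
# homogeneous sphere satisfy their Poisson equations with rho0 = 3M/(4*pi*R^3)
# inside and rho = 0 outside, and are continuous at r = R.
import sympy as sp

r, R, M = sp.symbols('r R M', positive=True)
rho0 = 3*M/(4*sp.pi*R**3)

def lap(f):
    """Radial part of the flat-space Laplacian."""
    return sp.diff(r**2*sp.diff(f, r), r)/r**2

U_in, U_out = -M/(2*R**3)*(r**2 - 3*R**2), M/r
P2_in, P2_out = 3*M**2/(40*R**6)*(r**2 - 5*R**2)**2, 6*M**2/(5*R*r)

print(sp.simplify(lap(U_in) + 4*sp.pi*rho0))          # -> 0
print(sp.simplify(lap(U_out)))                        # -> 0
print(sp.simplify(lap(P2_in) + 4*sp.pi*rho0*U_in))    # -> 0
print(sp.simplify(lap(P2_out)))                       # -> 0
print(sp.simplify((U_in - U_out).subs(r, R)),
      sp.simplify((P2_in - P2_out).subs(r, R)))       # -> 0 0
\end{verbatim}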
\subsection{Second order metric}
\label{ssec:solh2}
We start by solving the metric field equations at the second velocity order. Its temporal component~\eqref{eqn:h200} takes the form
\ba
\triangle \order{h}{2}^{\mathfrak{E}}_{00}
= \begin{cases}
-\frac{3 I_1 \kappa^2 M}{4 \pi R^3}
& \text{if } r \leq R
\\
0
& \text{if } r > R
\\
\end{cases}\,.
\ea
The solution is given by
\ba
\order{h}{2}^{\mathfrak{E}}_{00}
= 2GU
= \begin{cases}
- \frac{I_1 \kappa^2 M}{8 \pi R^3}(r^2 - 3 R^2)
& \text{if } r \leq R
\\
\frac{I_1 \kappa^2 M}{4 \pi r}
& \text{if } r > R
\\
\end{cases} \,.
\ea
Since the spatial metric equations~\eqref{eqn:h2ij} at the same order are identical to the temporal equation, except for a Kronecker symbol, their solution immediately follows as
\ba
\order{h}{2}^{\mathfrak{E}}_{ij} = \order{h}{2}^{\mathfrak{E}}_{00} \delta_{ij} \,.
\ea
\subsection{Second order scalar}
\label{ssec:solp2}
We then continue with the scalar field equation~\eqref{eqn:phi2} at the second velocity order, which reads
\ba
\left(\triangle - m^2 \right) \order{\phi}{2}
= \begin{cases}
\frac{3 I_1' \kappa^2 M}{16 \pi F R^3}
& \text{if } r \leq R
\\
0
& \text{if } r > R
\\
\end{cases} \,.
\ea
The solution is then given by
\ba
\order{\phi}{2}
= \begin{cases}
-\frac{3 I_1' \kappa^2 M}{16 \pi F m^2 R^3}+\frac{3 e^{-m R} I_1' \kappa^2 M (1+m R)}{16 \pi F m^3 R^3} \frac{\sinh (m r)}{r}
& \text{if } r \leq R
\\
-\frac{3 \kappa^2 M I_1' \left( e^{-m R}(1+m R) + e^{m R} (-1+m R)\right)}{32 \pi F m^3 R^3} \frac{e^{-m r}}{r}
& \text{if } r > R
\\
\end{cases} \,.
\ea
Note that outside the source, the field is proportional to $\frac{e^{-m r}}{r}$, i.e., it has the form of a Yukawa potential. Therefore, the parameter $m$ can be interpreted as the mass of the scalar field.
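One may check symbolically that the two branches displayed above satisfy the screened Poisson equation~\eqref{eqn:phi2} with the homogeneous source~\eqref{eqn:homosource} and join continuously at \(r = R\); a short SymPy sketch of this check (ours, illustrative only) reads:
\begin{verbatim}
# Illustrative SymPy check of the second-order scalar solution: the interior and
# exterior branches satisfy (Lap - m^2) phi2 = k*rho with rho = 3M/(4*pi*R^3)
# inside and 0 outside, k = kappa^2 I1'/(4F), and are continuous at r = R.
import sympy as sp

r, R, m = sp.symbols('r R m', positive=True)
M, F, I1p, kappa = sp.symbols('M F I1p kappa')

def lap(f):
    return sp.diff(r**2*sp.diff(f, r), r)/r**2

S = 3*I1p*kappa**2*M/(16*sp.pi*F*R**3)          # source strength k*rho0
phi_in = -S/m**2 + S*sp.exp(-m*R)*(1 + m*R)/m**3 * sp.sinh(m*r)/r
phi_out = -S/(2*m**3) * (sp.exp(-m*R)*(1 + m*R)
                         + sp.exp(m*R)*(m*R - 1)) * sp.exp(-m*r)/r

print(sp.simplify(lap(phi_in) - m**2*phi_in - S))                        # -> 0
print(sp.simplify(lap(phi_out) - m**2*phi_out))                          # -> 0
print(sp.simplify(((phi_in - phi_out).subs(r, R)).rewrite(sp.exp)))      # -> 0
\end{verbatim}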
\subsection{Fourth order metric}
\label{ssec:solh4}
Since the only third order equations are trivially solved by \(\order{h}{3}^{\mathfrak{E}}_{0i} = 0\) in the case of a static, non-moving matter source, we continue directly with the metric field equation at the fourth velocity order. As it is rather lengthy, we give here only its generic form, while all appearing coefficients are stated explicitly in the appendix. This generic form reads
\ba
\label{BETA equ hE004 equation}
\triangle \order{h}{4}^{\mathfrak{E}}_{00}
= \begin{cases}
A_{h400}^{I1}
+\frac{A_{h400}^{I2}}{r^2}
+A_{h400}^{I3} r^2
+\frac{A_{h400}^{I4} e^{-m r}}{r}
+\frac{A_{h400}^{I5} e^{-2 m r}}{r^2}
+\frac{A_{h400}^{I6} e^{m r}}{r}
+\frac{A_{h400}^{I7} e^{2 m r}}{r^2}
& \text{if } r \leq R
\\
\frac{A_{h400}^{E1}}{r^4}
+\frac{A_{h400}^{E2} e^{-2 m r}}{r^2}
& \text{if } r > R
\\
\end{cases}\,.
\ea
Its solution is also lengthy, so we proceed in the same fashion and display only its generic form here, which is given by
\bsub
\ba
\bs
\label{BETA equ hE004 solution int}
\order{h}{4}^{\mathfrak{E}}_{00} (r \leq R)
&= B_{h400}^{I1}
+B_{h400}^{I2} r^2
+B_{h400}^{I3} r^4
+\frac{B_{h400}^{I4} e^{-m r}}{r}
+\frac{B_{h400}^{I5} e^{-2 m r}}{r}
+\frac{B_{h400}^{I6} e^{m r}}{r}
\\
&\phantom{=}+\frac{B_{h400}^{I7} e^{2 m r}}{r}
+B_{h400}^{I8} \mathrm{Ei}(-2 m r)
+B_{h400}^{I9} \mathrm{Ei}(2 m r)
+B_{h400}^{I10} \ln\left(\frac{r}{R}\right) \,,
\es
\\
\label{BETA equ hE004 solution ext}
\order{h}{4}^{\mathfrak{E}}_{00} (r > R)
&=\frac{B_{h400}^{E1}}{r}
+\frac{B_{h400}^{E2}}{r^2}
+\frac{B_{h400}^{E3} e^{-2 m r}}{r}
+B_{h400}^{E4} \mathrm{Ei}(-2 m r) \,,
\ea
\esub
where $\mathrm{Ei}$ is the exponential integral defined as
\ba
\mathrm{Ei}(x) = -\fint_{-x}^{\infty}\frac{e^{-t}}{t}dt\,,
\ea
with $\fint$ denoting the Cauchy principal value of the integral. The values of the coefficients can be found in the appendix \ref{app coefficients fourth order metric}.
Note that $B_{h400}^{I8}=B_{h400}^{I9}$ and thus the exponential integral terms can be written more compactly as
\ba
B_{h400}^{I8} \mathrm{Ei}(-2 m r)+B_{h400}^{I9} \mathrm{Ei}(2 m r)
= 2 B_{h400}^{I8} \mathrm{Chi}(2 m r) \,,
\ea
where we used $\mathrm{Chi}$ for the hyperbolic cosine integral
\ba
\mathrm{Chi}(x) = \frac{\mathrm{Ei}(x) + \mathrm{Ei}(-x)}{2} = \upgamma + \ln x + \int_0^x\frac{\cosh t - 1}{t}dt\,,
\ea
and $\upgamma$ is Euler's constant.
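As a quick numerical cross-check of these relations (ours, illustrative only), the different representations of \(\mathrm{Chi}\) can be compared using standard special-function routines:
\begin{verbatim}
# Illustrative numerical check: Chi(x) = (Ei(x) + Ei(-x))/2
# and Chi(x) = euler_gamma + ln(x) + integral_0^x (cosh t - 1)/t dt.
import numpy as np
from scipy.special import expi, shichi
from scipy.integrate import quad

x = 2.0                                        # arbitrary illustrative argument
chi_from_ei = 0.5*(expi(x) + expi(-x))
chi_scipy = shichi(x)[1]                       # shichi returns (Shi, Chi)
integral, _ = quad(lambda t: (np.cosh(t) - 1.0)/t, 0.0, x)
chi_from_integral = np.euler_gamma + np.log(x) + integral

print(chi_from_ei, chi_scipy, chi_from_integral)   # the three values agree
\end{verbatim}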
\subsection{Fourth order scalar}
\label{ssec:solp4}
The final equation we must solve is the scalar field equation at the fourth velocity order, since the fourth order scalar field \(\order{\phi}{4}\) enters the Jordan frame metric perturbation \(\order{h}{4}^{\mathfrak{J}}_{00}\), from which the PPN parameter \(\beta\) is read off. This equation is similarly lengthy, and so also here we restrict ourselves to displaying only the generic form, which reads
\bsub
\ba
\bs
\label{BETA equ phi4 equation int}
\left( \triangle - m^2 \right) \order{\phi}{4}(r \leq R) &=
A_{\phi 4}^{I1}
+\frac{A_{\phi 4}^{I2}}{r^2}
+\frac{A_{\phi 4}^{I3}}{r^4}
+\frac{A_{\phi 4}^{I4} e^{-m r}}{r}
+\frac{A_{\phi 4}^{I5} e^{-2 m r}}{r^2}
+\frac{A_{\phi 4}^{I6} e^{-2 m r}}{r^3}\\
&\phantom{=}+\frac{A_{\phi 4}^{I7} e^{-2 m r}}{r^4}
+\frac{A_{\phi 4}^{I8} e^{m r}}{r}
+\frac{A_{\phi 4}^{I9} e^{2 m r}}{r^2}
+\frac{A_{\phi 4}^{I10} e^{2 m r}}{r^3}
+\frac{A_{\phi 4}^{I11} e^{2 m r}}{r^4}\\
&\phantom{=}+A_{\phi 4}^{I12} e^{-m r} r
+A_{\phi 4}^{I13} e^{m r} r \,,
\es
\\
\bs
\label{BETA equ phi4 equation ext}
\left( \triangle - m^2 \right) \order{\phi}{4}(r > R) &=
\frac{A_{\phi 4}^{E1} e^{-m r}}{r^2}
+\frac{A_{\phi 4}^{E2} e^{-2 m r}}{r^2}
+\frac{A_{\phi 4}^{E3} e^{-2 m r}}{r^3}
+\frac{A_{\phi 4}^{E4} e^{-2 m r}}{r^4}\,.
\es
\ea
\esub
The generic form of the solution then follows as
\bsub
\ba
\bs
\label{BETA equ phi4 solution int}
\phi_4(r \leq R)
&= B_{\phi 4}^{I1}
+\frac{B_{\phi 4}^{I2}}{r^2}
+\frac{B_{\phi 4}^{I3} e^{-m r}}{r}
+\frac{B_{\phi 4}^{I4} e^{-2 m r}}{r^2}
+\frac{B_{\phi 4}^{I5} e^{m r}}{r}
+\frac{B_{\phi 4}^{I6} e^{2 m r}}{r^2}
+B_{\phi 4}^{I7} e^{-m r}\\
&\phantom{=}+B_{\phi 4}^{I8} e^{-m r} r
+B_{\phi 4}^{I9} e^{-m r} r^2
+B_{\phi 4}^{I10} e^{m r}
+B_{\phi 4}^{I11} e^{m r} r
+B_{\phi 4}^{I12} e^{m r} r^2\\
&\phantom{=}+\frac{B_{\phi 4}^{I13} e^{-m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{I14} e^{m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{I15} e^{m r} \mathrm{Ei}(-3 m r)}{r}\\
&\phantom{=}+\frac{B_{\phi 4}^{I16} e^{m r} \mathrm{Ei}(m r)}{r}
+\frac{B_{\phi 4}^{I17} e^{-m r} \mathrm{Ei}(m r)}{r}
+\frac{B_{\phi 4}^{I18} e^{-m r} \mathrm{Ei}(3 m r)}{r} \,,
\es
\\
\bs
\label{BETA equ phi4 solution ext}
\phi_4(r > R) &=
\frac{B_{\phi 4}^{E1} e^{-m r}}{r}
+\frac{B_{\phi 4}^{E2} e^{-2 m r}}{r^2}
+\frac{B_{\phi 4}^{E3} e^{-m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{E4} e^{m r} \mathrm{Ei}(-2 m r)}{r}\\
&\phantom{=}+\frac{B_{\phi 4}^{E5} e^{m r} \mathrm{Ei}(-3 m r)}{r}
+\frac{B_{\phi 4}^{E6} e^{-m r} \ln\left(\frac{r}{R}\right)}{r} \,.
\es
\ea
\esub
The coefficients can be found in the appendix \ref{app coefficients fourth order scalar}.
\subsection{PPN parameters}
\label{sec PPN parameters}
We now have solved the field equations which determine all terms that enter the Jordan frame metric, and hence contribute to the PPN parameters \(\gamma\) and \(\beta\). The Jordan frame metric is then obtained by inserting the solutions obtained before into the relation~\eqref{BETA.equ:metric E to J frame} between the different invariant metrics.
Using the metric ansatz~\eqref{BETA.equ:PPN metric ansatz h200} we find the effective gravitational `constant'
\ba
G_\text{eff}(r)
= \frac{\order{h}{2}^{\mathfrak{J}}_{00}}{2 U}
= \begin{cases}
G \left[ 1 + 3
\frac{\sinh(mr)(1+m R)e^{-mR} - 2 m r }{(2\omega + 3) m^3 r (r^2 - 3 R^2)} \right]
& \text{if } r \leq R
\\
G \left[ 1 + 3
\frac{ mR\cosh(mR) - \sinh(mR) }{(2\omega + 3) m^3 R^3}e^{-mr} \right]
& \text{if } r > R
\\
\end{cases}\,.
\ea
Here we have introduced the abbreviation
\ba
\omega = 2F\frac{I_1^2}{I_1'^2} - \frac{3}{2}\,,
\ea
which is invariant under reparametrizations of the scalar field and chosen such that it agrees with the parameter $\omega$ in case of the Jordan-Brans-Dicke theory~\cite{BETA.Jordan:1959eg,BETA.Brans:1961sx}. In the next step, the PPN parameter $\gamma$ is obtained from the metric ansatz~\eqref{BETA.equ:PPN metric ansatz h2ij} giving
\ba
\gamma(r) = \frac{\order{h}{2}^{\mathfrak{J}}_{ii}}{2 G_\text{eff} U} =
\begin{cases}
1+\frac{12 \left[2 e^{m (r+R)} m r-\left(-1+e^{2 m r}\right) (1+m R)\right]}{6 \left(-1+e^{2 m r}\right) (1+m R)+2 e^{m (r+R)} m r \left[-6 + (2\omega + 3) m^2 \left(r^2-3 R^2\right)\right]}
& \text{if } r \leq R
\\
1-\left(\frac{1}{2}+\frac{(2\omega + 3) m^3 R^3 e^{m r}}{6 [m R \cosh (m R)-\sinh (m R)]}\right)^{-1}
& \text{if } r > R
\\
\end{cases} \,.
\ea
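For illustration, the following minimal Python sketch evaluates the closed-form expressions given above for the effective gravitational constant \(G_\text{eff}(r)\) and the PPN parameter \(\gamma(r)\); the numerical values of \(m\), \(\omega\), and \(R\) are arbitrary placeholders and are not tied to any physical system.
\begin{verbatim}
# Illustrative evaluation of the closed-form expressions above for G_eff(r)
# and gamma(r).  The values of m, omega, R below are arbitrary placeholders.
import numpy as np

def G_eff(r, R, m, omega, G=1.0):
    if r <= R:
        corr = 3*(np.sinh(m*r)*(1 + m*R)*np.exp(-m*R) - 2*m*r) \
               / ((2*omega + 3)*m**3*r*(r**2 - 3*R**2))
    else:
        corr = 3*(m*R*np.cosh(m*R) - np.sinh(m*R)) \
               / ((2*omega + 3)*m**3*R**3)*np.exp(-m*r)
    return G*(1 + corr)

def gamma(r, R, m, omega):
    if r <= R:
        num = 12*(2*np.exp(m*(r + R))*m*r - (np.exp(2*m*r) - 1)*(1 + m*R))
        den = 6*(np.exp(2*m*r) - 1)*(1 + m*R) \
              + 2*np.exp(m*(r + R))*m*r*(-6 + (2*omega + 3)*m**2*(r**2 - 3*R**2))
        return 1 + num/den
    return 1 - 1.0/(0.5 + (2*omega + 3)*m**3*R**3*np.exp(m*r)
                    / (6*(m*R*np.cosh(m*R) - np.sinh(m*R))))

R, m, omega = 1.0, 2.0, 10.0      # placeholder values
for r in (0.5, 1.5, 5.0):
    print(r, G_eff(r, R, m, omega), gamma(r, R, m, omega))
\end{verbatim}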
Finally, the PPN parameter $\beta$ is obtained from the ansatz~\eqref{BETA.equ:PPN metric ansatz h400}, and hence can be obtained from
\ba
\beta(r)
= -\frac{\order{h}{4}^{\mathfrak{J}}_{00}
- 2 G_\text{eff}[ (1+3\gamma)G_\text{eff}\Phi_2 + \Phi_3 + 3 \gamma \Phi_4 ]}{2 G_\text{eff}^2 (U^2 + 2 \Phi_2)} \,.
\ea
Since the solution for \(\beta\) inside the source is even more lengthy and of little practical relevance, we omit this part of the solution. The solution outside the source, $r>R$, takes the generic form
\ba
\bs
\label{BETA.equ: beta}
\beta(r>R)
&= \left[ \left(\frac{1}{r^2} + \frac{1}{r} \frac{12}{5R} \right) \left(1 + e^{-m r} \frac{C_{\beta}^{E4}}{2} \right)^2 \right]^{-1}
\Bigg[\frac{C_{\beta}^{E1}}{r}
+ \frac{C_{\beta}^{E2}}{r^2}
+ C_{\beta}^{E3} \frac{e^{-m r}}{r}
+ C_{\beta}^{E4} \frac{e^{-m r}}{r^2} \\
&\qquad\qquad+ C_{\beta}^{E5} \frac{e^{-2 m r}}{r}
+ C_{\beta}^{E6} \frac{e^{-2 m r}}{r^2}
+ C_{\beta}^{E7} \frac{e^{- m r}}{r} \mathrm{Ei}{(-m r)}
+ C_{\beta}^{E8} \frac{e^{m r}}{r} \mathrm{Ei}{(-2 m r)}\\
&\qquad\qquad+ C_{\beta}^{E9} \frac{e^{m r}}{r} \mathrm{Ei}{(-3 m r)}
+ C_{\beta}^{E10} \mathrm{Ei}{(-2 m r)}
+ C_{\beta}^{E11} \frac{e^{-m r}}{r} \ln\left(\frac{r}{R}\right)
\Bigg] \,.
\es
\ea
The values of the coefficients can be found in Appendix~\ref{app PPN Beta}, where we further introduce the abbreviations
\ba
\sigma = \frac{2F(I_1I_1'' + I_1'^2) - F'I_1I_1'}{2F^2I_1^2}\,, \quad \mu = \kappa^2I_1'\frac{2FI_2''' - 3F'I_2''}{4F^3I_1^2}\,.
\ea
Both $\gamma$ and $\beta$ depend only on the parameters $m, \omega, \mu, \sigma$ of the theory, which are invariant both under conformal transformations and redefinitions of the scalar field, and on the radius of the sphere $R$. As expected, they are independent of $M$, $\Pi_0$, and $p_0$, which are absorbed into the metric potentials and characterize only the source.
\subsection{Limiting cases}
\label{ssec:limits}
We finally discuss a number of physically relevant limiting cases. We start this discussion with the massless limit, i.e., the case of a vanishing potential $\mathcal{U} \rightarrow 0$, corresponding to $\mathcal{I}_2 \rightarrow 0$. This limit is achieved by successively applying to our result the limits $\mu \rightarrow 0$ and $m \rightarrow 0$. For $\gamma$, which does not depend on $\mu$, we obtain the limit
\ba
\gamma(m \rightarrow 0) = \frac{\omega + 1}{\omega + 2} = \frac{4FI_1^2 - I_1'^2}{4FI_1^2 + I_1'^2}\,.
\ea
For $\beta$ we find the limit
\ba
\bs
\beta(\mu \rightarrow 0, m \rightarrow 0) &= \frac{(2\omega + 3)\sigma - 8}{16(\omega + 2)^2}\\
&= 1 - \left( 1 + \frac{1}{4 F} \frac{I_1'^2}{I_1^2} \right)^{-2}
\left( \frac{F'}{32 F^3} \frac{I_1'^3}{I_1^3} + \frac{1}{16 F^2} \frac{I_1'^4}{I_1^4}
- \frac{1}{16 F^2} \frac{I_1'^2 I_1''}{I_1^3} \right) \,.
\es
\ea
These limits agree with the result found in \cite{BETA.KuuskInvariantsMSTG2016}, when reduced to the case of a single scalar field.
Another interesting case is given by the large interaction distance limit. In the limit $r \rightarrow \infty$ we obtain
\ba
\gamma(r \rightarrow \infty) = 1 \,,
\ea
and
\ba
\bs
\beta(r \rightarrow \infty)
&= \frac{5 C_{\beta}^{E1} R}{12 I_1^2} \\
&= 1 + 5\frac{\left[39+m^2 R^2 (20 m R - 33)\right] -3 (1+m R) [13+m R (13+2 m R)]e^{-2 m R} }{16 (2\omega + 3) m^5 R^5} \,.\label{eqn:betainf}
\es
\ea
Note that it does not take the GR value $1$ as one might expect. This is due to the fact that the finite self-energy of the extended mass source influences $\beta$. If in addition we take the limit $R\rightarrow \infty$, we find that indeed $\beta$ goes to the GR value $1$,
\ba
\beta(r \rightarrow \infty, R \rightarrow \infty) = 1 \,.
\ea
Note that first we have to take the limit $r \rightarrow \infty$, since the solution we used here is valid only in the exterior region $r > R$, and the limit \(R \to \infty\) would otherwise be invalid. We finally remark that the same limit \(\beta \to 1\) is also obtained for \(m \to \infty\), which becomes clear from the fact that \(m\) always appears multiplied by either \(r\) or \(R\).
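As a quick numerical plausibility check of these limits (not a substitute for the analytic computation), one can evaluate the exterior expression for \(\gamma\) at small \(m\) and at large \(r\); the parameter values in the following sketch are placeholders, and the small-$m$ evaluation only approximately recovers the massless limit.
\begin{verbatim}
# Sketch checking gamma(m -> 0) = (omega+1)/(omega+2) and gamma(r -> inf) = 1,
# using the exterior expression for gamma(r > R).  Parameter values are
# placeholders; m = 1e-3 is "small" only approximately.
import numpy as np

def gamma_ext(r, R, m, omega):
    return 1 - 1.0/(0.5 + (2*omega + 3)*m**3*R**3*np.exp(m*r)
                    / (6*(m*R*np.cosh(m*R) - np.sinh(m*R))))

R, omega, r = 1.0, 4.0, 2.0
print(gamma_ext(r, R, 1e-3, omega), (omega + 1)/(omega + 2))  # massless limit
print(gamma_ext(50.0, R, 1.0, omega))                         # large-r limit -> 1
\end{verbatim}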
This concludes our discussion of the static, spherically symmetric solution. In the next section we compare our results for \(\beta\) and \(\gamma\) to Solar System measurements of these PPN parameters.
\section{Comparison to observations}
\label{sec Comparison to observations}
In the preceding sections we have derived expressions for the PPN parameters \(\beta\) and \(\gamma\). We have seen that they depend on the radius \(R\) of the gravitating source mass, the interaction distance \(r\) and constant parameters \(m, \omega, \mu, \sigma\), which characterize the particular scalar-tensor theory under consideration and are invariant under both conformal transformations of the metric and redefinitions of the scalar field. We now compare our results to observations of the PPN parameters in the Solar System, in order to obtain bounds on the theory parameters.
In the following we will not consider the parameters \(\mu\) and \(\sigma\), and set them to \(0\) in our calculations, as they correspond to higher derivatives of the invariant functions \(\mathcal{I}_1\) and \(\mathcal{I}_2\). Restricting our discussion to the parameters \(m\) and \(\omega\) will further allow us to plot exclusion regions which we can compare to previous results~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E,BETA.SchaererPPN2014}. To be compatible with the plots shown in these articles, we display the rescaled mass \(\tilde{m} = m/\sqrt{2\omega + 3}\) measured in inverse astronomical units \(m_{\mathrm{AU}} = 1\mathrm{AU}^{-1}\) on the horizontal axis. Regions that are excluded at a confidence level of \(2\sigma\) are shown in gray. In particular, we consider the following experiments:
\begin{itemize}
\item
The deflection of pulsar signals by the Sun has been measured using very long baseline interferometry (VLBI)~\cite{BETA.Fomalont:2009zg}. From this, \(\gamma\) has been determined to satisfy \(\gamma - 1 = (-2 \pm 3) \cdot 10^{-4}\). The radio signals passed by the Sun at an elongation angle of 3\textdegree, and so we will assume a gravitational interaction distance of \(r \approx 5.23 \cdot 10^{-2}\mathrm{AU}\). The region excluded by this measurement is shown in Fig.~\ref{fig:vlbi}.
\item
The most precise value for \(\gamma\) has been obtained from the time delay of radar signals sent between Earth and the Cassini spacecraft on its way to Saturn~\cite{BETA.Bertotti:2003rm}. The experiment yielded the value \(\gamma - 1 = (2.1 \pm 2.3) \cdot 10^{-5}\). The radio signals were passing by the Sun at a distance of \(1.6\) solar radii or \(r \approx 7.44 \cdot 10^{-3}\mathrm{AU}\). The excluded region, shown in Fig.~\ref{fig:cassini}, agrees with our previous findings~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E}.
\item
The classical test of the parameter \(\beta\) is the perihelion precession of Mercury~\cite{BETA.Will:2014kxa}. Its precision is limited by other contributions to the perihelion precession, most importantly the solar quadrupole moment \(J_2\). The current bound is \(\beta - 1 = (-4.1 \pm 7.8) \cdot 10^{-5}\). As the gravitational interaction distance we take the semi-major axis of Mercury, which is \(r \approx 0.387\mathrm{AU}\). We obtain the excluded region shown in Fig.~\ref{fig:mercury}. Note that for small values of \(\omega\) we obtain a tighter bound on the scalar field mass than from the Cassini tracking experiment, despite the larger interaction distance \(r\). This can be explained by the fact that the main contribution to \(\beta\) comes from a modification of the gravitational self-energy of the source mass, which is independent of the interaction distance, and depends only on the radius of the gravitating body.
\item
A combined bound on \(\beta\) and \(\gamma\) has been obtained from lunar laser ranging experiments searching for the Nordtvedt effect, which would cause a different acceleration of the Earth and the Moon in the solar gravitational field~\cite{BETA.Hofmann:2010}. In fully conservative theories with no preferred frame effects, such as scalar-tensor gravity, the Nordtvedt effect depends only on the PPN parameters \(\beta\) and \(\gamma\). The current bound is \(4\beta - \gamma - 3 = (0.6 \pm 5.2) \cdot 10^{-4}\). Since the effect is measured using the solar gravitational field, the interaction distance is \(r = 1\mathrm{AU}\). The excluded region is shown in Fig.~\ref{fig:llr}.
\item
A more recent measurement of both \(\beta\) and \(\gamma\) with higher precision has been obtained using combined ephemeris data and the Mercury flybys of the Messenger spacecraft in the INPOP13a data set~\cite{BETA.Verma:2013ata}. From these observations, combined bounds in the two-dimensional parameter space spanned by \(\beta\) and \(\gamma\) can be obtained, as well as bounds on the individual parameters by fixing one of them to its GR value. Since we have determined both parameters in our calculation, we do not perform such a fixing here, and use the full parameter space instead. From the 25\% residuals one finds a bounding region that can be approximated as
\begin{equation}
\left[(\beta - 1) - 0.2 \cdot 10^{-5}\right]^2 + \left[(\gamma - 1) + 0.3 \cdot 10^{-5}\right]^2 \leq \left(2.5 \cdot 10^{-5}\right)^2\,.
\end{equation}
Note that in this case one cannot easily define an interaction distance \(r\), since ephemerides of objects across the Solar System have been used. However, we may use the fact that for \(mr \gg 1\) the PPN parameters approach their limiting values \(\gamma \to 1\) and~\eqref{eqn:betainf}, so that the dominant effect is determined by the modified gravitational self-energy of the Sun. The excluded region under this assumption is shown in Fig.~\ref{fig:inpop}. One can see that for small values of \(\omega\) one obtains a bound on the scalar field mass which is approximately twice as large as the bound obtained from Cassini tracking and lunar laser ranging.
\end{itemize}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{vlbi.png}
\caption{Region excluded by VLBI measurements.}
\label{fig:vlbi}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{cassini.png}
\caption{Region excluded by Cassini tracking.}
\label{fig:cassini}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{mercury.png}
\caption{Region excluded by the perihelion shift of Mercury.}
\label{fig:mercury}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{llr.png}
\caption{Region excluded by lunar laser ranging.}
\label{fig:llr}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{inpop.png}
\caption{Region excluded by the ephemeris data set INPOP13a.}
\label{fig:inpop}
\end{figure}
Our results must be taken with care, since they are based on a number of assumptions and simplifications. Most importantly, we have calculated the PPN parameters under the assumption of a homogeneous, non-rotating, spherical gravitational source. This is only a very crude approximation for the Sun, whose density decreases with growing distance from its center. A full treatment of the post-Newtonian limit of a non-homogeneous body would be required to improve on this assumption. However, since a larger amount of matter is located closer to the center of the Sun, hence increasing its gravitational self-energy and decreasing the effective radius \(R\), one might expect that the effect on \(\beta\) will be even larger in such a full treatment.
As another simplification we have assumed that experiments based on electromagnetic waves passing by the Sun can be described by a single effective interaction distance. A rigorous treatment would involve an explicit calculation of the wave trajectory~\cite{BETA.Devi:2011zz,BETA.Deng:2016moh}. However, this affects only the VLBI and Cassini measurements of \(\gamma\), while the measurements of \(\beta\), which are less dependent on the interaction distance, are unaffected.
\section{Conclusion}
\label{sec Conclusion}
We have calculated the PPN parameters \(\gamma\) and \(\beta\) of scalar-tensor gravity with a general potential for a homogeneous, spherical mass source. For our calculation we have used a formalism which is manifestly invariant under both conformal transformations of the metric and redefinitions of the scalar field. The result we have obtained depends on four constant parameters of the theory under consideration, which are derived from the invariant functions that characterize the theory. Further, the result also depends on the radius \(R\) of the gravitating mass source and the interaction distance \(r\) at which the PPN parameters are measured. We have finally compared our results to a number of measurements in the Solar System and derived bounds on two of the four constant theory parameters.
Our results improve on previous work in which we assumed a point-like mass source~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E,BETA.SchaererPPN2014}. We have seen that \(\gamma\) receives a correction which depends on the source mass radius, but retains the large distance limit \(\gamma \to 1\) for \(r \to \infty\). In contrast, \(\beta\) receives a modification also in the large distance limit. This is explained by a modified gravitational self-energy of the source mass, which influences its gravitational effects also at large distances, and which has been neglected for the point mass. As a result, measurements of \(\beta\) at an interaction distance which is large compared to the radius of the source mass, \(r \gg R\), are significantly more sensitive to modifications of GR by a massive scalar field than measurements of \(\gamma\) at the same interaction distance. We have shown this in particular for measurements of \(\beta\) using lunar laser ranging and planetary ephemeris, where the interaction distance is of the order of astronomical units, and which yield bounds on the scalar field mass comparable to or even better than the bound obtained from the Cassini tracking experiment, with an interaction distance of the order of the solar radius. Our work suggests that measurements of \(\beta\) in the gravitational field of smaller, more compact objects could yield even stricter bounds.
Of course, our assumption of a spherically symmetric and homogeneous source mass is also still only an approximation. Further improving our results would require weakening this assumption and taking the density profile of the gravitating source mass into account. Such a calculation would have to be done numerically. While we have provided all necessary equations in this article, we leave performing such a calculation for future work.
Finally, it is also possible to extend our results to more general or related theories. A straightforward generalization is given by considering multiple scalar fields, for which \(\gamma\) has been calculated for a point mass source~\cite{BETA.HohmannGamma2016}, or by allowing derivative coupling as in Horndeski gravity, where it would lead to a similar improvement on previous calculations of \(\gamma\) and \(\beta\)~\cite{BETA.Hohmann:2015kra}. Another possibility is to consider massive bimetric gravity, where GR is augmented by a massive tensor degree of freedom instead of a scalar field, and a similar result on \(\gamma\) for a point mass can be improved and extended to \(\beta\)~\cite{BETA.Hohmann:2017uxe}.
\section*{Acknowledgments}
The authors thank Sofya Labazova for pointing out an error in a previous calculation.
MH gratefully acknowledges the full financial support of the Estonian Research Council through the Startup Research Grant PUT790 and the European Regional Development Fund through the Center of Excellence TK133 ``The Dark Side of the Universe''.
MH and AS acknowledge support from the Swiss National Science Foundation.
This article is based upon work from COST Action CANTATA, supported by COST (European Cooperation in Science and Technology).
\pagebreak
\section{Introduction}
Subgraph detection and graph partitioning are fundamental problems in network analysis, each typically framed in terms of identifying a group or groups of vertices of the graph so that the vertices in a shared group are well connected or ``similar'' to each other in their connection patterns while the vertices in different groups (or the complement group) are ``dissimilar''. The specific notion of connectedness or similarity is a modeling choice, but one often assumes that edges connect similar vertices, so that in general the detected subgraph is dense and the ``communities'' identified in graph partitioning are very often more connected within groups than between groups (``assortative communities'').
The identification of subgraphs with particular properties is a long-standing pursuit of network analysis with various applications. Dense subgraphs as assortative communities might represent coordinating regions of interest in the brain \cite{Meunier_2009,Bassett_2011} or social cliques in a social network \cite{Moody_2001}. In biology, subgraph detection plays a role in discovering DNA motifs and in gene annotation \cite{fratkin2006motifcut}. In cybersecurity, dense subgraphs might represent anomalous patterns to be highlighted and investigated (e.g., \cite{Yan_2021}). See \cite{ma2020efficient} for a recent survey and a discussion of alternative computational methods. As noted there, some of the existing algorithms apply to directed graphs, but most do not.
In the corresponding computer science literature, much of the focus has been on approximation algorithms, since the densest $k$-subgraph problem is NP-hard to solve exactly (a fact easily seen by a reduction from the $k$-clique problem). An algorithm that, on any input $(G, k)$, returns a subgraph of order $k$ (that is, $k$ vertices or ``nodes''; note that we will sometimes use the ``size'' of a graph or subgraph to mean its number of vertices, not its number of edges) whose average degree is within a factor of at most $n^{1/3-\delta}$ of the optimum, where $n$ is the order of the graph $G$ and $\delta\approx 1/60$, was proposed in \cite{feige2001dense}. This approximation ratio was the best known for almost a decade until a log-density based approach yielded $n^{1/4+\varepsilon}$
for any $\varepsilon > 0$ \cite{bhaskara2010}. This remains the state-of-the-art approximation algorithm. On the negative side it has been shown \cite{manurangsi2017}, assuming the exponential time hypothesis, that there is no polynomial-time algorithm that approximates to within an $n^{1/(\log\log n)^c}$ factor of the optimum. Variations of the problem where the target subgraph has size at most $k$ or at least $k$ have also been considered \cite{andersen2009}.
Depending on the application of interest, one might seek one or more dense subgraphs within the larger network, a collection of subgraphs to partition the network (i.e., assign a community label to each node), or a set of potentially overlapping subgraphs (see, e.g., \cite{Wilson_2014}). While the literature on ``community detection'' is enormous (see, e.g., \cite{fortunato_community_2010,fortunato_community_2016,fortunato_community_2022,porter_communities_2009,shai_case_2017} as reviews), a number of common thematic choices have emerged. Many variants of the graph partitioning problem can be formalized as a (possibly constrained) optimization problem. One popular choice minimizes the total weight of the cut edges while making the components roughly equal in size \cite{shi2000normalized}. Another common choice maximizes the total within-community weight relative to that expected at random in some model \cite{newman_finding_2004}. Other proposed objective functions include ratio cut weight \cite{bresson2013adaptive}, and approximate ``surprise'' (improbability) under a cumulative hypergeometric distribution \cite{Traag_Aldecoa_Delvenne_2015}.
However, most of these objectives are NP-hard to optimize, leading to the development of a variety of heuristic methods for approximate partitioning (see the reviews cited above for many different approaches). Some of the methods that have been studied are based on the Fielder eigenvector \cite{fiedler1973algebraic}, multicommunity flows \cite{leighton1989approximate}, semidefinite programming \cite{arora2004expander,arora2008geometry,arora2009expander}, expander flows \cite{arora2010logn}, single commodity flows \cite{khandekar2009graph}, or Dirichlet partitions \cite{osting2014minimal,osting2017consistency,wang2019diffusion}.
Whichever choice is made for the objective and heuristic, the identified communities can be used to describe the mesoscale structure of the graph and can be important in a variety of applications (see, e.g., the case studies considered in \cite{shai_case_2017}). Subgraphs and communities can also be important inputs to solving problems like graph traversal, finding paths, trees, and flows; while partitioning large networks is often an important sub-problem for complexity reduction or parallel processing in problems such as graph eigenvalue computations \cite{BDR13}, breadth-first search \cite{BM13}, triangle listing \cite{CC11}, PageRank \cite{SW13} and Personalized PageRank~\cite{andersen2007algorithms}.
In the present work, we consider a different formulation of the subgraph detection problem, wherein we aim to identify a subgraph with a long mean exit time---that is, the expected time for a random walker to escape the subgraph and hit its complement. Importantly, this formulation inherently respects the possibly directed nature of the edges. This formulation is distinct from either maximizing the total or average edge weight in a dense subgraph and minimizing the edge cut (as a count or suitably normalized) that is necessary to separate a subgraph from its complement. Furthermore, explicitly optimizing for the mean exit time to identify subgraphs may in some applications be preferred as a more natural quantity of interest. For example, in studying the spread of information or a disease on a network, working in terms of exit times is more immediately dynamically relevant than structural measurements of subgraph densities or cuts. Similarly, the development of respondent-driven sampling in the social survey context (see, e.g., \cite{Mouw_2012,Verdery_2015}) is primarily motivated by there being subpopulations that are difficult to reach (so we expect they often also have high exit times on the directed network with edges reversed). We thus argue that the identification of subgraphs with large exit times is at least as interesting---and typically related to---those subgraphs with large density and or small cut. Indeed, random walker diffusion on a network and assortative communities are directly related in that the modularity quality function used in many community detection algorithms can be recovered as a low-order truncation of a ``Markov stability'' auto-correlation measurement of random walks staying in communities \cite{Lambiotte_Delvenne_Barahona_2014}. However, the directed nature of the edges is fully respected in our escape time formulation of subgraph detection presented here (cf.\ random walkers moving either forward or backward along edges in the Markov stability calculation \cite{Mucha_Richardson_Macon_Porter_Onnela_2010} that rederives modularity for a directed network \cite{Leicht_Newman_2008}).
From an optimization point of view, the method presented here can be viewed as a rearrangement method or a Merriman-Bence-Osher (MBO) scheme \cite{MBO1993} as applied to Poisson solves on a graph. Convergence of MBO schemes is an active area of research in a variety of other scenarios: see \cite{chambolle2006convergence,ishii2005optimal} in the case of continuum mean curvature flows, \cite{budd2020graph,van_Gennip_2014} in a graph Allen-Cahn type problem, and \cite{jacobs2018auction} for a volume constrained MBO scheme on undirected networks.
Similarly, proving convergence rates for our algorithm by determining quantitative bounds on the number of interior iterations required for a given $\epsilon$ is an important question for the numerical method and its applications to large data sets. Importantly, the method for subgraph detection that we develop and explore, and then extend to a partitioner, is inherently capable of working on directed graphs without any modification.
Also, searching for related graph problems where this type of rearrangement algorithm for optimization can be applied will be an important endeavor.
\subsection{A New Formulation in Graphs}
Let $G = (V,E)$ be a (strongly) connected graph (undirected or directed; we use the term ``graph'' throughout to include graphs that are possibly directed), with adjacency matrix $A$ with element $A_{ij}$ indicating presence/absence (and possible weight) of an edge from $i$ to $j$. We define the (out-)degree matrix $D$ to be diagonal with values $D_{ii}=\sum_j A_{ij}$. For weighted edges in $A$ this weighted degree is typically referred to as ``strength'' but we will continue to use the word ``degree'' throughout to be this weighted quantity. Consider the discrete time Markov chain $M_n$ for the random walk described by the (row stochastic) \emph{probability transition matrix}, $P := D^{-1} A$. The \emph{exit time from $S\subset V$} is the stopping time $T_S = \inf\{n\geq 0: M_n\in S^c\}$.
The \emph{mean exit time from $S$ of a node $i$} is defined by $\mathbb{E}_i T_S$ (where $\mathbb{E}_i$ is the expectation if the walker starts at node $i$) and is given by $v_i$, where $v$ is the solution to the system of equations
\begin{subequations}
\label{ht_def}
\begin{align}
\label{ht_defa}
(I-P)_{SS} v_S &= 1_S \\
v_{S^c} &= 0\,,
\end{align}
\end{subequations}
where the subscript $S$ represents restriction of a vector or matrix to the indices in $S$.
The \emph{average mean escape time (MET) from $S$} is then
\begin{equation} \label{e:MET}
\tau (S) = \frac{1}{|V|} \sum_{i \in V} v_{i},
\end{equation}
representing the mean exit time from $S$ of a node chosen uniformly at random in the graph (noting that $v_i=0$ for $i\in S^c$).
We are interested in finding vertex sets (of fixed size) having large MET, as these correspond to sets that a random walker would remain in for a long time. Thus, for fixed $k\in \mathbb N$, we consider the \emph{subgraph detection problem},
\begin{equation}
\label{e:subgraphDetection}
\max_{\substack{S\subset V \\ |S| =k}} \tau(S).
\end{equation}
Multiplying~\eqref{ht_defa} on the left by $D$, we obtain the equivalent system,
\begin{subequations}
\label{e:Poisson}
\begin{align}
& L v = d
\textrm{ on } S , \\
& v = 0 \textrm{ on } S^c\,,
\end{align}
\end{subequations}
where $L = D-A$ is the (unnormalized, out-degree) graph Laplacian, and $d = D 1$ is the out-degree vector. We denote the solution to \eqref{e:Poisson} by $v = v(S)$.
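For concreteness, the following is a minimal NumPy sketch of how \(\tau(S)\) can be computed directly from this restricted Poisson system. The random directed graph and the choice of \(S\) are illustrative placeholders; a directed cycle is added only to guarantee strong connectivity.
\begin{verbatim}
# Minimal sketch: mean exit time tau(S) from the restricted Poisson system
# L v = d on S, v = 0 on S^c.  The graph and S below are placeholders.
import numpy as np

def mean_exit_time(A, S):
    """A: (possibly directed) weighted adjacency matrix; S: vertex indices."""
    n = A.shape[0]
    d = A.sum(axis=1)                    # out-degrees
    L = np.diag(d) - A                   # out-degree graph Laplacian
    S = np.asarray(S)
    v = np.zeros(n)
    v[S] = np.linalg.solve(L[np.ix_(S, S)], d[S])   # solve L_SS v_S = d_S
    return v.sum() / n                   # tau(S); v = 0 on the complement

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.2).astype(float)        # random directed graph
np.fill_diagonal(A, 0)
A = A + np.roll(np.eye(n), 1, axis=1)               # cycle: strong connectivity
print(mean_exit_time(A, S=range(10)))
\end{verbatim}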
For $\varepsilon> 0$, we will also consider the approximation to \eqref{e:Poisson},
\begin{equation}
\label{e:relax}
\left[ L + \varepsilon^{-1} (1-\phi) \right] u = d
\end{equation}
where $\phi$ is a vector and action by $(1-\phi)$ on the left is interpreted as multiplication by the diagonal matrix $I - {\rm diag} (\phi)$. We denote the solution $u = u_\varepsilon$.
Formally, for $\phi = \chi_S$, the characteristic function of $S$, as $\varepsilon \to 0$, the vector $u_\varepsilon \to v_S$ where $v_S$ satisfies \eqref{e:Poisson}.
We can also define an associated approximate MET
\begin{equation}
\label{e:l1energy}
E_\varepsilon (\phi) := \frac{1}{|V|} \| u_\varepsilon \|_{
\ell^1(V)} = \frac{1}{|V|} \left\| \left[ L + \varepsilon^{-1} (1-\phi) \right]^{-1} d \right\|_{\ell^1 (V)},
\end{equation}
where as $\varepsilon \to 0$, we have that $E_\varepsilon(\chi_S) \to \frac{1}{|V|} \| v_S \|_{\ell^1(V)} = \tau(S)$. We then arrive at the following \emph{relaxed subgraph detection problem}
\begin{equation}
\label{exp:l1_opt}
\max_{\substack{0 \leq \phi \leq 1 \\ \langle \phi, 1 \rangle = k}} E_\epsilon (\phi),
\end{equation}
which we solve and study in this paper.
For small $\varepsilon>0$, we will study the relationship between the subgraph detection problem \eqref{e:subgraphDetection} and its relaxation
\eqref{exp:l1_opt}.
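Similarly, the relaxed energy \(E_\varepsilon(\phi)\) can be evaluated with a single linear solve. The following self-contained sketch (same placeholder graph construction as above) illustrates numerically that \(E_\varepsilon(\chi_S) \to \tau(S)\) as \(\varepsilon \to 0\).
\begin{verbatim}
# Sketch of the relaxed energy E_eps(phi) and its convergence to tau(S)
# as eps -> 0 when phi = chi_S.  Graph and S are placeholders.
import numpy as np

def relaxed_energy(A, phi, eps):
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A
    u = np.linalg.solve(L + np.diag((1.0 - phi) / eps), d)
    return np.abs(u).sum() / n           # (1/|V|) * l1 norm of u_eps

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.2).astype(float)
np.fill_diagonal(A, 0)
A = A + np.roll(np.eye(n), 1, axis=1)    # ensure strong connectivity
chi_S = np.zeros(n); chi_S[:10] = 1.0
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, relaxed_energy(A, chi_S, eps))   # approaches tau(S)
\end{verbatim}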
We are also interested in finding node partitions with high MET in the following sense: Given a vertex subset $S \subset V$, a random walker that starts in $S$ should have difficulty escaping to $S^c$ and a random walker that starts in $S^c$ should have difficulty escaping to $S$. This leads to the problem $\max_{V = S \amalg S^c} \,\tau(S) + \tau(S^c)$.
More generally, for a vertex partition, $V = \amalg_{\ell \in [K]} S_\ell$, we can consider
\begin{equation} \label{e:minEscape}
\max_{V = \amalg_{\ell \in [K]} S_\ell} \ \sum_{\ell \in [K]} \ \tau(S_\ell).
\end{equation}
The solution embodies the idea that in a good partition a random walker will transition between partition components very infrequently.
An approximation to \eqref{e:minEscape} is
\begin{equation}
\label{e:Opt}
\max_{V = \amalg_{\ell \in [K]} S_\ell} \ \sum_{\ell \in [K]} \ E_\varepsilon (\chi_{S_\ell}).
\end{equation}
We can make an additional approximation by relaxing the constraint set.
Define the admissible class
$$
\mathcal A_K = \left\{ \{\phi_\ell\}_{\ell\in [K]} \colon
\phi_\ell \in \mathbb R^{|V|}_{+}
\text{ and }
\sum_{\ell \in [K] } \phi_\ell = 1 \right\}.
$$
Observe that the collection of indicator functions for any $K$-partition of the vertices is a member of $\mathcal A_K$. Furthermore, we can see that $\mathcal A_K \cong (\Delta_K)^{|V|}$, where $\Delta_K$ is the unit simplex in $K$ dimensions. Thus, the extremal points of $\mathcal A_K$ are precisely the collection of indicator functions for a $K$-partition of the vertices.
For $\varepsilon >0$, a modified relaxed version of the graph partitioning problem \eqref{e:minEscape} can be formulated as
\begin{equation}
\label{e:minEscapeRelax_alt}
\min_{ \{\phi_\ell\}_{\ell \in [K]} \in \mathcal A_K } \tilde{E}_{\epsilon} \left( \{\phi_\ell\}_{\ell \in [K]} \right), \quad \textrm{where} \quad \tilde{E}_{\epsilon} \left( \{\phi_\ell\}_{\ell \in [K]} \right) = \sum_{i = 1}^K [1+ \epsilon |V| E_\epsilon(\phi_i)]^{-1}.
\end{equation}
For small $\varepsilon>0$, we will study the relationship between the graph partitioning problem \eqref{e:minEscape} and its relaxation
\eqref{e:minEscapeRelax_alt}. An important feature of \eqref{e:minEscapeRelax_alt} is that it can be optimized using fast rearrangement methods that effectively introduce a volume normalization for the partition sets, whereas direct optimization of \eqref{e:minEscape} results in favoring one partition element taking up the full volume. We will discuss this further in Section \ref{sec:gp} below.
\subsection{Outline of the Paper}
In \Cref{s:Analysis}, we lay the analytic foundation for rearrangement methods for both the subgraph detection and partitioning problems.
We prove the convergence of the methods to local optimizers of our energy functionals in both cases and establish the fact that our fast numerical methods increase the energy.
To begin, we establish properties of the gradient and Hessian of the functionals $E_\epsilon (\phi)$ for vectors $0 \leq \phi \leq 1$. Then, using those properties, we introduce rearrangement methods for finding optimizers and prove that our optimization schemes reduce the energy.
Then, we discuss how to adapt these results to the partitioning problem.
Lastly, we demonstrate how one can easily add a semi-supervised component to our algorithm.
In \Cref{s:NumRes}, we apply our methods to a variety of model graphs, as well as some empirical data sets to assess their performance. In the subgraph setting, we consider how well we do detecting communities in a family of model graphs related to stochastic block models, made up of a number of random Erd\H{o}s-R\'enyi (ER) communities of various sizes and on various scales. The model graphs are designed such that the overall degree distribution is relatively similar throughout. We demonstrate community detectability and algorithm efficacy thresholds by varying a number of parameters in the graph models. We also consider directed graph models of cycles connected to Erd\H{o}s-R{\'e}nyi graphs, on which our methods perform quite well. For the partitioners, we also consider related performance studies over our model graph families, as well as on a large variety of clustering data sets.
We conclude in \Cref{s:disc} with a discussion including possible future directions and applications of these methods.
\section{Analysis of our proposed methods}
\label{s:Analysis}
In this section, we first analyze the relaxed subgraph detection problem \cref{exp:l1_opt} and the relaxed graph partitioning problem \Cref{e:minEscapeRelax_alt}. Then, we propose and analyze computational methods for these problems. As noted above, we assume throughout that the graph is (strongly) connected.
\subsection{Analysis of the relaxed subgraph detection problem and the relaxed graph partitioning problem}
For fixed $\epsilon > 0$ and
$\phi \in [0,1]^{|V|}$, denote the operator on the left-hand side of \Cref{e:relax} by
$L_\phi := D-A + \frac{1}{\epsilon} (1-\phi)$.
\begin{lemma}[Discrete maximum principle]
\label{lem:max}
Given the regularized operator $L_\phi$ and a vector $f > 0$, we have $(L_{\phi}^{-1} f)_v > 0$ for all $v \in V$. Without strong connectivity, this result still holds (with $>$ replaced by $\ge$) as long as there are no leaf nodes.
\end{lemma}
\begin{proof}
Writing $L_\phi = \left( D + \frac{1}{\epsilon} (1-\phi) \right) - A$, we observe that
\begin{align*}
L_\phi^{-1} & = \left( \left( D + \frac{1}{\epsilon} (1-\phi) \right) \left( I - \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right) \right)^{-1} \\
& = \left( I - \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right)^{-1} \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} \\
& = \sum_{n=0}^\infty \left[ \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right]^n \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1}.
\end{align*}
Since all entries of the summands are nonnegative and, by strong connectivity, every entry of the resulting sum is positive, the result holds.
\end{proof}
For simplicity, in the following we consider simply setting the potential $X : = \epsilon^{-1} ( 1- \phi)$ and we use $X$ and ${\rm diag}\ X$ interchangeably for graph Schr\"odinger operators of the form $L_X := D-A + X$ and solutions of the Poisson equation $L_X u = d$. We can then consider the related energy functional
\begin{equation}
\label{e:l1energy_alt}
E (X) : = \left\| \left[ L + X \right]^{-1} d \right\|_{\ell^1 (V)} = \| u\|_{
\ell^1(V)} .
\end{equation}
\begin{lemma}
\label{diff:lem}
The gradient of $E(X)$ with respect to $X$ is given by
\begin{equation}
\label{Jgrad}
\nabla E = - u \odot v
\end{equation}
where $\odot$ denotes the Hadamard product and
\begin{equation}
\label{uvdef}
u = (L+X)^{-1} d, \ \ v = (L+X)^{-T} e.
\end{equation}
Here $e$ is the all-ones vector. The Hessian of $E(X)$ with respect to $X$ is then given by
\begin{equation}
\label{Jhess}
H = \nabla^2 E = (L + X)^{-1} \odot W + (L + X)^{-T} \odot W^T
\end{equation}
where
\begin{equation*}
W := u \otimes v
\end{equation*}
where $\otimes$ is the Kronecker (or outer) product.
\end{lemma}
\begin{proof}
Write $e_j$ as the indicator vector for the $j$th entry. First, differentiating~\cref{uvdef} with respect to $X_j$, we compute
$$
(L + X) \frac{\partial u}{ \partial X_j} = - e_j \odot u
\qquad \implies \qquad
\frac{\partial u}{ \partial X_j} = - \langle e_j, u \rangle (L + X)^{-1} e_j.
$$
Taking the second derivative, we obtain
\begin{align*}
(L + X) \frac{\partial^2 u}{ \partial X_j \partial X_k}
&= - e_j \left\langle e_j, \frac{\partial u}{ \partial X_k} \right\rangle
- e_k \left\langle e_k , \frac{\partial u}{ \partial X_j} \right\rangle \\
&= e_j \langle e_k, u \rangle \left \langle e_j, (L + X)^{-1} e_k \right\rangle
+ e_k \langle e_j, u \rangle \left \langle e_k, (L + X)^{-1} e_j \right\rangle,
\end{align*}
which implies that
$$
\frac{\partial^2 u}{ \partial X_j \partial X_k} =
\left \langle e_j, (L + X)^{-1} e_k \right\rangle \langle e_k, u \rangle (L+X)^{-1}e_j + \left \langle e_k, (L + X)^{-1} e_j \right\rangle
\langle e_j, u \rangle (L+X)^{-1}e_k.
$$
By the maximum principle (Lemma~\ref{lem:max}), $u$ is positive and we can write
$ E(X) = \| u \|_1 = \langle e, u \rangle$. Thus, the gradient is
\begin{align*}
\frac{\partial E}{ \partial X_j}
&= \left \langle e, \frac{\partial u}{ \partial X_j} \right \rangle \\
& = - \langle (L+X)^{-T} e, e_j \rangle \langle u, e_j \rangle,
\end{align*}
or in other words
\[
\nabla_X E = - u \odot v
\]
for $u$ and $v$ as in \eqref{uvdef}.
For the Hessian, we have
\begin{align*}
&\frac{\partial^2 E}{ \partial X_j \partial X_k}
= \left \langle e, \frac{\partial^2 u}{ \partial X_j \partial X_k} \right \rangle \\
& \hspace{.2cm} = \left \langle e_k, (L + X)^{-1} e_j \right\rangle
\langle u, e_j \rangle \left \langle e_k , v\right \rangle + \left \langle e_j, (L + X)^{-1} e_k \right\rangle \langle v, e_j \rangle \left \langle e_k,u \right \rangle.
\end{align*}
Thus, the Hessian can be written
$$
H = \nabla^2 E = (L + X)^{-1} \odot W + (L + X)^{-T} \odot W^T
$$
where
\begin{equation*}
W := u \otimes v.
\end{equation*}
as claimed.
\end{proof}
\begin{remark}
If $L$ is symmetric, the above statements can be simplified greatly to give
$$
H = \nabla^2 E = (L + X)^{-1} \odot (W+W^T)
$$
where
\begin{equation*}
W + W^T := u \otimes v + v \otimes u = \frac{1}{2} (u + v) \otimes (u+v) - \frac{1}{2} (u - v) \otimes (u- v) .
\end{equation*}
\end{remark}
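As a sanity check on Lemma~\ref{diff:lem}, the following sketch compares the closed-form gradient \(-u \odot v\) with a central finite-difference approximation on a small random directed graph; the graph, the potential \(X\), and the step size are placeholders.
\begin{verbatim}
# Sketch: verify the gradient formula grad_X E = -u (hadamard) v against
# finite differences on a small random directed graph (placeholder setup).
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(A, 0)
A = A + np.roll(np.eye(n), 1, axis=1)      # cycle: strong connectivity
d = A.sum(axis=1)
L = np.diag(d) - A
X = rng.random(n) + 0.5                    # a positive potential

def E(X):
    return np.linalg.solve(L + np.diag(X), d).sum()   # ||u||_1, since u > 0

u = np.linalg.solve(L + np.diag(X), d)
v = np.linalg.solve((L + np.diag(X)).T, np.ones(n))
grad_closed = -u * v

h = 1e-6
grad_fd = np.array([(E(X + h*np.eye(n)[j]) - E(X - h*np.eye(n)[j])) / (2*h)
                    for j in range(n)])
print(np.max(np.abs(grad_closed - grad_fd)))  # small: finite-difference error only
\end{verbatim}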
\begin{proposition}
\label{p:Covexity}
For $f > 0$ fixed, let $u$ satisfy $
(L+X) u = f$.
The mapping
$X \mapsto E(X) = \| u \|_1$ is strongly convex on $\{X \geq 0, \ X \neq 0 \}$.
\end{proposition}
\begin{proof}
We wish to show that
\[E(X) = e^T (L +X)^{-1} d \]
is convex on $[0,X_{\infty}]^n$ for fixed constant $X_{\infty}$.
Replacing $D+X$ with $X$, this is equivalent to
\[e^T (X-A)^{-1} d\]
being convex on $\{X : d_i + X_{\infty} \ge X_i \ge d_i\}$.
Expanding, we have
\[e^T \left( I - X^{-1} A \right)^{-1} X^{-1} d = e^T \sum_{k=0}^{\infty} \left(X^{-1} A \right)^k X^{-1} d.\]
So it is enough to show that
\[e^T \left(X^{-1} A \right)^k X^{-1} d\]
is convex for each $ k > 0 $.
This is true as long as
\[ f(X) = \prod_i X_i^{-\alpha_i}\]
is convex for any $\alpha = (\alpha_1,\cdots,\alpha_n)$.
Computing second derivatives gives
\[f_{X_iX_i}(X) = f(X) \alpha_i (\alpha_i + 1) X_i^{-2}\]
and
\[f_{X_i X_j}(X) = f(X) \alpha_i \alpha_j X_i^{-1} X_j^{-1}.\]
So the Hessian of $f$ is
\[f(X) \left[ (\alpha X^{-1})^T (\alpha X^{-1}) + \textrm{diag}(\alpha X^{-2})\right],\] which is clearly positive semi-definite, being the sum of positive semi-definite matrices.
To observe strong convexity, recognize that the $k = 0$ term contributes a term to the Hessian of the form $DX^{-2}$, which is positive definite on the domain in question.
\end{proof}
Proposition \ref{p:Covexity} gives that $\phi \to E_\varepsilon(\phi)$ is strongly convex on $\mathbb R^{|V|}_+$, so $\{\phi_\ell\}_{\ell \in [K]} \mapsto \sum_{\ell \in [K]} E_\varepsilon(\phi_\ell)$ is also convex on $\mathcal{A}_K$. The following corollary is then immediate.
\begin{corollary}[Bang-bang solutions] \label{c:BangBang}
Every maximizer of \Cref{exp:l1_opt} is an extreme point of $\{ \phi \in [0,1]^{|V|} \colon \langle \phi,1\rangle =k \}$, {\it i.e.}, an indicator function for some vertex set $S \subset V$ with $|S| =k$.
\end{corollary}
Thus, in the language of control theory,
\Cref{c:BangBang} shows that
\Cref{exp:l1_opt}
is a bang-bang relaxation of
\eqref{e:subgraphDetection}
and that
\eqref{e:minEscapeRelax_alt}
is a bang-bang relaxation of
\eqref{e:minEscape}.
\bigskip
\begin{corollary}
\label{Hess:cor}
Since the set of values $(X_1,\dots,X_n) \in \mathbb{R}^n_+$ with which we are concerned is convex and $E$ is $C^2$ and strongly convex in $X$ (Proposition~\ref{p:Covexity}), the resulting Hessian matrix $H$ is positive definite.
\end{corollary}
\begin{remark}
Note that though the Hadamard product of two positive definite matrices is positive definite, Corollary \ref{Hess:cor} is not obvious from the structure of the Hessian, given that the matrix $W$ is indefinite when $u$ and $v$ are linearly independent. As a result, this positive definiteness is strongly related to the structure of the $L+X$ matrix and its eigenvectors.
\end{remark}
\subsection{Optimization scheme}
\subsubsection{Subgraph detector}
We solve~\Cref{exp:l1_opt} using rearrangement ideas as follows. After initializing $S$ (randomly in our experiments), we use the gradient~\Cref{Jgrad} to find the locally optimal next choice of $S$, and then iterate until convergence (typically $<10$ iterations in our experiments). More explicitly, we follow these steps:
\begin{align}
L u + \epsilon^{-1} (1- \chi_{S^0}) u & = d, \label{eq:grad_comp1} \\
L^T v + \epsilon^{-1} (1- \chi_{S^0}) v & = 1.
\label{eq:grad_comp}
\end{align}
The update, $S^1$, then consists of the $k$ nodes $\ell$ with the largest values of $u_{\ell}v_{\ell}$.
\begin{algorithm}[t]
\caption{Subgraph detector}
\label{alg:subgraph}
\begin{algorithmic}
\State Input $S^0 \subset V$.
\While{$S^t \ne S^{t-1}$}
\State Solve~\Cref{eq:grad_comp1} and~\Cref{eq:grad_comp} for $u$ and $v$.
\State Assign vertex $\ell$ to subgraph $S^1$ if $\nabla_\phi E$ is optimized. That is, solve the following sub-problem.
\begin{equation}
\max_{|S| = k} \ \sum_{\ell \in S} u(\ell) \cdot v (\ell) .
\label{subgraph_inner}
\end{equation}
(Note that~\Cref{subgraph_inner} is easily solved by taking the $k$ indices corresponding to the largest values of $u(\ell) \cdot v(\ell)$, breaking ties randomly if needed.)
\State Set $S^0 \leftarrow S^1$ and repeat until $S^t = S^{t-1}$.
\EndWhile
\end{algorithmic}
\end{algorithm}
Pseudocode for this approach is given in~\Cref{alg:subgraph}, which has the following ascent guarantee:
\begin{proposition}
\label{prop:sgascent}
Every nonstationary iteration of~\cref{alg:subgraph} strictly increases the energy $E_\epsilon$. \Cref{alg:subgraph} terminates in a finite number of iterations.
\end{proposition}
\begin{proof}
Let $S^0$ and $S^1$ be the vertex subsets for successive iterations of the method.
Define $W = \chi_{S^1} - \chi_{S^0}$. Assuming $W \neq 0$, by strong convexity (Proposition~\ref{p:Covexity}) and the formula for the gradient \eqref{Jgrad}, we compute
\begin{subequations}
\begin{align}
E_\epsilon (\chi_{S^1}) &> E_\epsilon (\chi_{S^0}) + \frac{1}{\epsilon} \langle W, uv \rangle \\
&= E_\epsilon (\chi_{S^0}) + \frac{1}{\epsilon} \left( \sum_{i \in S^1} u_i v_i - \sum_{i \in S^0} u_i v_i \right) \\
& \geq E_\epsilon (\chi_{S^0}).
\end{align}
\end{subequations}
Thus, the energy is strictly increasing on non-stationary iterates.
Since we assume that $V$ is a finite size vertex set and the rearrangement method increases the energy, it cannot cycle and hence must terminate in a finite number of iterations.
\end{proof}
To avoid hand-selection of $\epsilon$, we always set $\epsilon = C/\lambda_F$, where $\lambda_F$ is the Frobenius norm of the graph Laplacian and $C >1$ is typically set at $C=50$ to make sure $\epsilon$ allows communication between graph vertices. If $C$ is chosen to take a different value below, we will highlight those cases.
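For concreteness, the following is a minimal dense NumPy sketch of~\Cref{alg:subgraph}, including the above choice of \(\epsilon = C/\lambda_F\). A practical implementation for large graphs would replace the dense solves by sparse factorizations, and the test graph with a planted dense block is only a placeholder.
\begin{verbatim}
# Minimal dense NumPy sketch of the subgraph detector (Algorithm 1).
# For large graphs one would use sparse solvers instead of np.linalg.solve.
import numpy as np

def subgraph_detector(A, k, C=50.0, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A
    eps = C / np.linalg.norm(L, 'fro')          # eps = C / lambda_F
    S = rng.choice(n, size=k, replace=False)    # random initialization
    for _ in range(max_iter):
        chi = np.zeros(n); chi[S] = 1.0
        M = L + np.diag((1.0 - chi) / eps)
        u = np.linalg.solve(M, d)
        v = np.linalg.solve(M.T, np.ones(n))
        S_new = np.argsort(-(u * v))[:k]        # k largest values of u_l * v_l
        if set(S_new) == set(S):
            break
        S = S_new
    return np.sort(S)

rng = np.random.default_rng(0)
n, k = 200, 30
A = (rng.random((n, n)) < 0.02).astype(float)            # sparse background
A[:k, :k] = (rng.random((k, k)) < 0.3).astype(float)     # planted dense block
np.fill_diagonal(A, 0)
print(subgraph_detector(A, k))
\end{verbatim}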
\subsubsection{Graph partitioner}
\label{sec:gp}
Given the success of the energy \eqref{e:l1energy}, one might naively consider partitioning the graph by maximizing an energy of the form
\begin{equation}
\label{part:energy_bad}
(S_1, S_2, \dots, S_K) \mapsto \sum_{i = 1}^K [E_\epsilon(\chi_{S_i})].
\end{equation}
However, it can be shown that this energy does not properly constrain the volumes of the partition elements in a reasonable fashion, and the optimizer of this problem merely puts all the vertices into a single partition element.
The partition energy we initially worked to minimize instead is of the form
\begin{equation}
\label{part:energy}
(S_1, S_2, \dots, S_K) \mapsto \sum_{i = 1}^K [ |V| E_\epsilon( \chi_{S_i} )]^{-1},
\end{equation}
since the inverses penalize putting all nodes into the same partition by making the resulting empty classes highly costly. Intuitively, this energy functional provides an effective volume normalization of the relative gradients (similar to a K-means type scheme). However, while in practice this functional appears to work reasonably well on all graph models considered here, we were unable to prove, upon analysis of the Hessian, that rearrangements based on such an algorithm are bang-bang like the subgraph detector.
As an alternative, we instead consider
\begin{equation}
\label{part:energyalt}
\tilde{E}_{\delta, \epsilon} (S_1, S_2, \dots, S_K) = \sum_{i = 1}^K [1+ \delta |V| E_\epsilon( \chi_{S_i})]^{-1}.
\end{equation}
Applied to functions, $0 \leq \phi_j \leq 1$, instead of indicator functions, we consider
\begin{equation}
\label{part:energyalt_phi}
\tilde{E}_{\delta,\epsilon} (\phi_1, \phi_2, \dots, \phi_K) = \sum_{i = 1}^K [1+ \delta |V| E_\epsilon(\phi_i)]^{-1}.
\end{equation}
We then have that
\begin{equation}
\label{Grad:pe}
\nabla_{\phi_j} \tilde{E} = - \frac{\delta}{[1+ \delta |V| E_\epsilon(\phi_j)]^{2}} \nabla_{\phi_j} ( |V| E_\epsilon (\phi_j))
\end{equation}
making the Hessian consist of blocks of the form
\begin{align}
\label{Hess:pe}
\nabla^2_{\phi_j} \tilde{E} & = - \frac{\delta}{[1+ \delta |V| E_\epsilon(\phi_j)]^{2}} \nabla^2_{\phi_j} (|V| E_\epsilon (\phi_j)) \\
& \hspace{.5cm} + 2 \frac{\delta^2}{[1+ \delta |V| E_\epsilon(\phi_j)]^{3}} (\nabla_{\phi_j} ( |V| E_\epsilon (\phi_j)) ) ( \nabla_{\phi_j} ( |V| E_\epsilon (\phi_j)) )^T. \notag
\end{align}
Note that this is the sum of a negative definite operator and a rank one matrix, meaning that for $\delta$ sufficiently small the Hessian is negative definite, so that $\tilde E$ is concave with respect to each component. In practice, we find that taking $\delta=\epsilon$ is sufficient both for having a negative definite Hessian and for generating good results with our rearrangement scheme. As such, we will generically take $\delta = \epsilon$ henceforward.
Our approach to the node partitioner is largely analogous to that of the subgraph detector, with the exception that we use class-wise $\ell^1$ normalization when comparing the values of $u_j \cdot v_j$ at each node. In detail, the algorithm is presented in Algorithm \ref{alg:partitioner}. It is a relatively straightforward exercise, applying the gradient computation for $E_\epsilon (\chi_{S_i})$ together with the convexity from Proposition \ref{p:Covexity}, to prove that the energy functional \eqref{part:energy} decreases with each iteration of our algorithm, as in Proposition \ref{prop:sgascent}.
\begin{algorithm}[t]
\caption{Graph Partitioner}
\label{alg:partitioner}
\begin{algorithmic}
\State Input $\vec{S} = \{ S_1^0, \dots, S_K^0 \}$ a $K$ partition of $V$.
\While{${\vec S}^t \ne {\vec S}^{t-1}$}
\State For $j = 1, \dots, K$, solve the equations
\begin{align*}
L u_j + \epsilon^{-1} (1- \chi_{S_j^0}) u_j & = d, \\
L^T v_j + \epsilon^{-1} (1- \chi_{S_j^0}) v_j & = 1.
\end{align*}
\State Normalize $u_j \leftarrow \frac{u_j}{(1 + \epsilon \| u_j \|_{\ell^1})^2}$ and leave $v_j$ unchanged.
\State Assign vertex $v$ to ${\vec S}^{t+1}_j$ where
\[
j=\mathrm{argmax} \{ u_1 \cdot v_1 (v) , \dots, u_K\cdot v_K (v) \}
\]
(that is, optimize $\nabla_\phi E$)
breaking ties randomly if needed.
\State Set $t = t+1$.
\EndWhile
\end{algorithmic}
\end{algorithm}
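A minimal dense NumPy sketch of~\Cref{alg:partitioner}, analogous to the subgraph detector sketch above, reads as follows; again, sparse solvers would be used in practice, and the choice of $\epsilon$ follows the same placeholder convention $\epsilon = C/\lambda_F$.
\begin{verbatim}
# Minimal dense NumPy sketch of the graph partitioner (Algorithm 2).
import numpy as np

def graph_partitioner(A, K, C=50.0, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A
    eps = C / np.linalg.norm(L, 'fro')
    labels = rng.integers(0, K, size=n)           # random initial K-partition
    for _ in range(max_iter):
        scores = np.zeros((K, n))
        for j in range(K):
            chi = (labels == j).astype(float)
            M = L + np.diag((1.0 - chi) / eps)
            u = np.linalg.solve(M, d)
            v = np.linalg.solve(M.T, np.ones(n))
            u = u / (1.0 + eps * np.abs(u).sum())**2   # class-wise normalization
            scores[j] = u * v
        new_labels = scores.argmax(axis=0)        # assign each node to argmax
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
\end{verbatim}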
\subsubsection{Semi-supervised learning}
In cases where we have a labeled set of nodes $T$ with labels $\hat\phi_v \in \{ 0,1 \}$ indicating whether we want node $v$ to be in the subgraph ($\hat\phi_v = 1$) or its complement ($\hat\phi_v = 0$), we can incorporate this information into our approach as follows.
For the subgraph detector, we use
$E_{\epsilon,\lambda,T}(\phi) = E_{\epsilon}(\phi) + \lambda \sum_{v \in T} \left( \phi_v - (1-\hat\phi_v) \right)^2$. Then the rearrangement algorithm needs to be modified at step 3 to become: Assign vertex $\ell$ to subgraph $S^1$ if $\nabla_\phi E$ is optimized
\[
\max_{|S| = k} \ \frac1\epsilon \sum_{\ell \in S} u(\ell) \cdot v (\ell) + 2 \lambda \sum_{v\in T}[\chi_S(v) - (1-\hat\phi_v)],
\]
where $\chi$ is the binary-valued indicator function. This again is solved by picking the largest elements
(we break ties by picking the lowest-index maximizers if needed).
Since the energy is still convex, the energy still increases at each iteration.
For the $K$-partitioner, we have a labeled set of nodes $T_i$ with labels $\hat\phi_{i,v} \in \{ 0,1 \}$, for $i = 1,\dots, K$ indicating whether we want node $v$ to be in partition element $i$, with $\sum_i \hat\phi_{i,v} = 1$ for $v\in \cup_i T_i$. We can incorporate this information into our approach by modifying the energy to be the concave functional
\begin{equation}
\label{eqn:parEssl}
\tilde{E}_{\epsilon,\lambda}(\phi_1,\dots,\phi_K) = \tilde{E}_{\epsilon}(\phi_1,\dots,\phi_K) - \lambda \sum_{v \in T} \sum_{j=1}^K (\phi_{j,v} - (1 - \hat \phi_{j,v}))^2
\end{equation}
with the gradient rearrangement being appropriately modified.
\section{Numerical Results}
\label{s:NumRes}
We test the performance of these algorithms both on synthetic graphs and on an assortment of ``real-world'' graphs.
For the synthetic tests, we use a particular set of undirected stochastic block models which we call the MultIsCale $K$-block Escape Ensemble (MICKEE), designed to illustrate some of the data features which our algorithms handle.
A MICKEE graph consists of $N$ nodes partitioned into $K+1$ groups of sizes $N_1$, $\ldots$, $N_K$, and $N_{K+1} = N-\sum_{j=1}^K N_j$, where $N_1<N_2<\ldots<N_K<N_{K+1}$ (see the 2-MICKEE schematic in~\cref{fig:lopsided}).
The nodes in the first $K$ groups induce densely connected Erd\H{o}s--R\'enyi (ER) subgraphs (from which we will study escape times) while the last group forms a sparsely connected ER background graph.
Each of the $K$ dense subgraphs is sparsely connected to the larger background graph.
The goal is to recover one of the planted subgraphs, generally the smallest.
A na\"ive spectral approach will often find one of the planted graphs, but we know of no way to control which subgraph is recovered. Our subgraph detector method, in contrast, can be directed to look at the correct scale to recover a specific subgraph, as we will demonstrate in the 2-MICKEE example (i.e., with two planted subgraphs).
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{lopsided.pdf}
\caption{Schematic of a 2-MICKEE graph, with three dense subgraphs that are randomly connected to each other. Our subgraph detectors can identify the target subgraph, ignoring other planted subgraphs at different scales. Our partitioner correctly identifies each subgraph as a partition, regardless of the scale.}
\label{fig:lopsided}
\end{figure}
We explore a number of variations on the basic MICKEE theme, including (1) making the large subgraph have a power law degree distribution (with edges drawn using a loopy, multi-edged configuration model), (2) adding more planted subgraphs with sizes ranging across several scales, (3) adding uniformly random noise edges across the entire graph or specifically between subgraphs, and (4) varying the edge weights of the various types of connections. For brevity, we refer to a MICKEE graph with $K$ planted subgraphs (not including the largest one) as a $K$-MICKEE graph.
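The following sketch generates a basic unweighted, undirected MICKEE-style graph as a stochastic block model. The densities and sizes are placeholders, and for simplicity all cross-group node pairs share the same sparse connection probability, whereas in our experiments the planted subgraphs are attached only to the background graph.
\begin{verbatim}
# Illustrative generator for a basic (unweighted, undirected) K-MICKEE graph:
# K dense ER blocks plus a large sparse ER background, with sparse
# connections between groups.  All densities below are placeholders.
import numpy as np

def mickee(sizes, N, p_in=0.3, p_bg=0.01, p_out=0.005, seed=0):
    """sizes: [N_1, ..., N_K]; N: total number of nodes."""
    rng = np.random.default_rng(seed)
    groups = np.repeat(np.arange(len(sizes) + 1),
                       sizes + [N - sum(sizes)])       # group label per node
    P = np.full((N, N), p_out)                         # sparse inter-group edges
    for g in range(len(sizes)):
        idx = groups == g
        P[np.ix_(idx, idx)] = p_in                     # dense planted blocks
    bg = groups == len(sizes)
    P[np.ix_(bg, bg)] = p_bg                           # sparse background block
    U = np.triu(rng.random((N, N)) < P, 1).astype(float)
    return U + U.T                                     # undirected adjacency

A = mickee([80, 160], N=1000)                          # a 2-MICKEE-style graph
\end{verbatim}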
\subsection{Subgraph Detection}
We explore the performance of~\Cref{alg:subgraph} using four benchmarks, which emphasize (1) noise tolerance, (2) multiscale detection,
(3) robustness to heavy-tailed degree distributions, and
(4) effective use of directed edges, respectively. In each of these tests, the target subgraph is the smallest planted subgraph.
\subsubsection*{Robustness to noise.} In~\cref{fig:3earnonlocal_sg} we visualize results from~\Cref{alg:subgraph} on $3$-MICKEE graphs, varying the amount and type of noise. While it is possible to get a bad initialization and thus find a bad local optimum, the subgraph detector usually finds the target exactly, except in the noisiest regime (which occurs roughly at the point where the number of noise edges is equal to the number of signal edges).
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_sg_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_sg_max_alt-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Accuracy of~\Cref{alg:subgraph} as a function of mean inter-subgraph degree (the mean taken over the nodes of the target subgraph) and mean weight of the inter-component edges (not including non-edges) for $3$-MICKEE graphs with planted subgraphs of sizes $80$, $160$, and $240$ nodes, with a total of $1,000$ nodes in the entire graph. The expected in-subgraph-degree is fixed at $20.8$ (with intra-component edge weights given by $1$). Inter-group edge weights are drawn from a normal distribution with maximum ranging from $.01$-$.25$. As long as the noise level is not too high, the subgraph detector finds the smallest planted subgraph despite the presence of ``decoy'' subgraphs at larger scales. This may be contrasted with spectral clustering, which is attracted to the larger scales.}
\label{fig:3earnonlocal_sg}
\end{figure}
\subsubsection*{Range of scales.}
We generated $2$-MICKEE graphs with varying sizes of the subgraphs relative to each other and to the total mass. We take $1500<N<2500$ for the total size and vary the fraction of the smallest planted subgraph as $.02N\leq N_1 \leq .15N$ with $N_2 = 2 N_1$. Here, the inter-edge density was set to $.01$ (in-subgraph-degree values between $(1-3p)*N*.01$ for $.02<p<.15$) with mean inter-edge weight $.05$, compared to intra-group edge weights of $1$. We used this framework to assess the detectability limits on the size of the smallest component, and numerically we observe that small communities are quite detectable using our algorithm. Using the best result over $5$ initializations, we were able to detect the smallest planted subgraph over the entire range, and we did so reliably on average as well. Since the resulting figure would thus not be terribly informative, we forgo including a similar heat plot over this range of parameters.
\subsubsection*{Heavy-tailed degree distributions.} For the results in \cref{fig:3earpowerlaw_sg}, we use a power law degree distribution in the largest component of $3$-MICKEE graphs with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$. Surprisingly (at least to us), smaller power-law exponents (corresponding to more skewed degree distributions) actually make the problem much easier (whereas adding noise edges had little effect). We conjecture that this is because, in the presence of very high-degree nodes, it is difficult to have a randomly occurring subgraph with high mean escape time, since connections into and out of the hubs are difficult to avoid.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_sg-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_sg_max_alt1-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Accuracy of~\Cref{alg:subgraph} on a $3$-MICKEE graph with a power law distribution as a function of the power law exponent and inter-cluster edge density. We observe a robustness to both the exponent and density (especially in the right panel) up to a sharp cutoff around 3.4. Note the low exponents (typically considered to be the harder cases) are actually easier in this problem.}
\label{fig:3earpowerlaw_sg}
\end{figure}
\subsubsection*{Directed edge utilization.} In~\cref{fig:ERcycle_sg} we consider the problem of detecting a directed cycle appended to an ER graph. The graph weights have been arranged so that the expected degree of all nodes is roughly equal. There are many edges leading from the ER graph into the cycle, with only one edge leading back into the ER graph. This makes the directed cycle a very salient dynamical feature, but not readily detectable by undirected (e.g.\ spectral) methods. We considered a large number of cycle sizes relative to the ER graph and with a proper choice of $\epsilon$, we were able to detect the cycle in all cases. Thus, this detector finds directed components very robustly due to the nature of the escape time.
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{er_plus_cycle.pdf}
\caption{A directed ER graph with a directed cycle appended. Note that there is only one edge (in the upper left) leading from the cycle to the ER graph, with many edges going the other direction from the ER graph to the cycle. The cycle nodes have the same expected degree as the ER nodes, yet a random walker would naturally get stuck in the cycle for a long time. Detecting such a dynamical trap is a challenge for undirected algorithms, but~\Cref{alg:subgraph} detects it consistently over a wide range of cycle lengths and ER graph sizes.}
\label{fig:ERcycle_sg}
\end{figure}
\subsubsection*{Variation over choice of $N_1$.} In \cref{fig:EK}, we consider how the mean exit time and the regularized energy in \eqref{e:l1energy} behave as we vary the volume constraint in our algorithm. We considered a $2$-MICKEE graph with $N_1 = 50$, $N_2 = 100$ and $N = 1000$. We took the baseline ER density to be $.03$, and the inter-edge density was set to $.025$ with mean inter-edge weight $.1$.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{EvsKsweep_MET-eps-converted-to.pdf}
\caption{True mean exit time}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{EvsKsweep-eps-converted-to.pdf}
\caption{Regularized energy}
\end{subfigure}
\caption{The score of the optimal sub-graph found with~\cref{alg:subgraph}. Both plots have clear shifts near $k = 50$ corresponding to the smallest component and $k = 100$ corresponding to the second smallest component. This suggests that the size of natural subgraphs within a given graph can be detected from breaks in the subgraph scores as the size of the target in~\cref{alg:subgraph} varies.}
\label{fig:EK}
\end{figure}
In summary, we find that the subgraph detector reliably recovers planted communities in synthetic graphs and is robust to a range of application-relevant factors.
\subsection{\texorpdfstring{$K$}{K}-partition method}
We now consider the performance of \cref{alg:partitioner} in a variety of settings. Throughout, we give heat plots over the varied parameters to visualize the purity of the detected communities relative to the ground-truth components of the graph, over $5$ runs of the algorithm. The purity measure is
\[
\frac{1}{N} \sum_{k=1}^K \max_{1 \leq l \leq K} N_k^l
\]
for $N_k^l$ the number of data samples in cluster $k$ that are in ground truth class $l$.
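For concreteness, a minimal computation of this purity score (an illustrative sketch in Python, assuming integer cluster and class labels) is:
\begin{verbatim}
import numpy as np

def purity(labels_pred, labels_true):
    """Purity of a predicted clustering against reference classes.

    labels_pred[i] is the cluster index of node i and labels_true[i] its
    reference class; both are assumed to be non-negative integers.  Each
    cluster is credited with its most common reference class, and the total
    credit is normalized by the number of nodes N.
    """
    labels_pred = np.asarray(labels_pred)
    labels_true = np.asarray(labels_true)
    score = 0
    for k in np.unique(labels_pred):
        members = labels_true[labels_pred == k]
        score += np.bincount(members).max()   # max over classes l of N_k^l
    return score / len(labels_pred)

# Example: two clusters, one mislabeled node -> purity 5/6.
print(purity([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
\end{verbatim}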
In \cref{fig:3earnonlocal} we consider a $\rho$--$\Delta$ heat plot of the purity measure for a $4$-partition of a $3$-MICKEE graph using delocalized connections with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$, varying the density of the inter-community edge connections ($0 < \rho < .1$) and the mean weight of the inter-component edges ($0<\Delta<.125$). In other words, we vary the number and strength of the connecting edges between components and report the resulting purity measure.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_max-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{The purity measure for~\cref{alg:partitioner} on $3$-MICKEE graphs. We vary the density of the inter-region edges and their edge weights. We observe robust (usually perfect) detection over a range of these parameters, with a sharp cutoff (especially in the left panel) when the noise levels grow too high. The gap between the average and best-of-$5$ panels suggests that detection is still possible beyond this cutoff, but that the energy landscape has more bad local optima beyond this point.}
\label{fig:3earnonlocal}
\end{figure}
In addition, we have tested \cref{alg:partitioner} on MICKEE graphs in which the sizes of the components vary relative to each other and to the total mass, and in which the connections between the ER components include additional random edges with weak connection weights. \Cref{fig:2earnonlocal} shows results for $2$-MICKEE graphs of this type. We take $1500<N<2500$ for the total size and vary the fraction of nodes in the smallest planted subgraph as $.02N\leq N_1 \leq .15N$ with $N_2 = 2 N_1$. Here, the inter-edge density was set to $.025$ with mean inter-edge weight $.05$. The question addressed in this experiment is how small the components can be while still being detectable. We show a heat map of the average purity measure, varying the number of vertices in the graph and the relative size of the smallest sub-graph (i.e., $N_1/N$).
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{2ear_paperfig_N_percentsmallestear_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{2ear_paperfig_N_percentsmallestear_max-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{The purity measure for the partitioner acting on a $2$-MICKEE graph with the fraction of nodes in the smaller planted subgraph varying, along with the size of the graph. We observe a generally robust partitioning.}
\label{fig:2earnonlocal}
\end{figure}
We similarly consider the partitioning problem on a version of the $3$-MICKEE graph with a power-law degree distribution in the largest component, using delocalized connections with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$. \Cref{fig:3earpowerlaw} provides a $\rho$--$q$ plot of the results, varying the edge density ($.001<\rho<.03$) of the connections between the components of the graph and the exponent ($2.1 \leq q \leq 4$) of the power-law degree distribution in the largest component.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_ave_alt-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_max_alt-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Purity achieved by~\cref{alg:partitioner} on $3$-MICKEE graphs with a power law degree distribution, varying the exponent of the power law and inter-subgraph edge density. We observe generally robust partitioning (especially in the right panel).}
\label{fig:3earpowerlaw}
\end{figure}
\subsubsection{Graph clustering examples}
We consider the family of examples from \cite{yang2012clustering} and compare the best purity measures presented in that paper to a number of settings using our algorithms. Since some of these examples are inherently directed data sets, we computed both the directed and undirected adjacency matrix representations, as appropriate, to test against, and we ran the $K$-partitioner over a variety of scenarios for both cases. In all these runs, we chose the value of $K$ to agree with the metadata (we avoid the term ``ground truth'', as the node labels themselves may be noisy or may not be the only good interpretation of the data). However, we note that our algorithm also does a good job, in a variety of settings, of selecting the number of partitions to fill even when this correct number is not provided \emph{a priori}.
For our study, we consider a number of options for the algorithm. First, the initial seeding sets were chosen either uniformly at random or using $K$-means on the first $K$ eigenvectors of the graph Laplacian, and we report the best result over $10$ outcomes. In addition, we considered a range of values of $\epsilon$, each a multiple of the inverse of the Frobenius norm of the graph Laplacian, denoted $\| L \|_{{\rm Fro}}$, which sets a natural scaling for separation in the underlying graph; see for instance the related choice in \cite{osting2014minimal}. We computed a family of partitions for $\epsilon = {50 \nu}/{\| L \|_{{\rm Fro}}}$, where $\nu = e^{.2 \ell}$ with $-50 < \ell < 50$. Finally, we also considered the impact of semi-supervised learning by toggling between $\lambda = 0$ and $\lambda = 10^6$ in \cref{eqn:parEssl} with $10$\% of the nodes included in the learning set. Clearly, there are many ways we might improve the outcomes, for instance by increasing the number of initializations, varying the initialization method, or refining our choices of $\epsilon$ or $\lambda$; nevertheless, under our current choices our fast algorithm performs well over a range of such parameters, as reported in Table \ref{tab:purmeas}.
For each data set in Table \ref{tab:purmeas}, we report: the best outcome using directed adjacency matrices to build the graph Laplacian, over both the $K$-means and random initializations but with no semi-supervised learning (Directed); the best outcome using symmetrized adjacency matrices to build the graph Laplacian, again over both initializations and without semi-supervised learning (Undirected); the best outcome with semi-supervised learning turned on, over any configuration (Semi-supervision); the $K$-means-only outcome ($K$-means only); and the best result from all the experiments reported in \cite{yang2012clustering} (Best from \cite{yang2012clustering}). Our results demonstrate that in many cases our fast algorithm discovers substantial community structure that agrees with the metadata in these data sets. Given that our communities are all built around random walks on the graph, it is not clear that all metadata-designated communities should align well with our methods; for example, our results do not align well with the metadata in the {\rm POLBLOGS} data set. A major takeaway from the table, however, is that in several examples using the directed nature of the data provides better agreement with the metadata (as indicated by the green cells). Perhaps most striking is that the best run of our fast algorithm, even without semi-supervised learning, agrees better with the metadata than \cite{yang2012clustering} for many of the data sets.
As a statistical summary of our findings: in total we considered $39$ directed data sets and $6$ undirected data sets drawn from a variety of domains (image, social, biological, physical, etc.). The networks range in size from $35$ to $98,528$ nodes, with $2$--$65$ classes per network. Among the directed networks, $21$ data sets achieved their highest purity against the metadata with semi-supervised learning turned on, while for $13$ the best result is that of \cite{yang2012clustering} and for $3$ the $K$-means-only baseline is best. For $9$ data sets (green in the table), the directed version of our algorithm is better than the symmetrized undirected version, while $5$ are tied (yellow) and for $25$ the undirected method is better (orange). When \cite{yang2012clustering} is best, the median gap from our result with semi-supervised learning is $.05$; when our algorithm with semi-supervised learning is best, the median gap from \cite{yang2012clustering} is likewise $.05$. There is no clear relationship between data domain and performance, or between node count and performance. However, semi-supervision improved the results most on data sets with a smaller class count (median $3$), compared to those where \cite{yang2012clustering} remained best (median $20$). When the directed algorithm is better than the undirected version, the median gap is $0.03$; interestingly, $5$ of the data sets where directed was better are image or sensor data, with the two largest gaps ($.07$ and $.11$) occurring on digit data sets. When undirected was better, the median gap was $0.06$, with the largest gap, $.29$, occurring for the 20NEWS data set. When semi-supervision improves over our method (taking the maximum of the directed and undirected performance), the median improvement is $.06$, and the largest improvements were $.22$ and $.20$. There is no obvious relationship between edge density and algorithm performance.
\begin{footnotesize}
\begin{table}
\centering
\sisetup{detect-weight,mode=text}
\renewrobustcmd{\bfseries}{\fontseries{b}\selectfont}
\begin{tabular}{llr>{\raggedleft}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}}
\toprule
\rot{Network} & \rot{Domain} & \rot{Vertices} & \rot{Density} & \rot{Classes} & \rot{Directed} & \rot{Undirected} & \rot{Semi-supervision} & \rot{$K$-means only} & \rot{Best from \cite{yang2012clustering}} \\
\midrule
\multicolumn{10}{l}{\bf Directed data}\\
MNIST & Digit & 70,000 & 0.00 & 10& \colorbox{green}{0.85} & \colorbox{green}{0.78} & \textbf{0.98} & 0.84 & 0.97\\
VOWEL & Audio & 990 & 0.01 & 11& \colorbox{green}{0.35} & \colorbox{green}{0.32} & \textbf{0.44} & 0.34 & 0.37 \\
FAULTS & Materials & 1,941 & 0.00 & 7 & \colorbox{green}{0.44} & \colorbox{green}{0.42} & \textbf{0.49} & 0.39 & 0.41 \\
SEISMIC & Sensor & 98,528 & 0.00 & 3 & \colorbox{green}{0.60} & \colorbox{green}{0.59} & \textbf{0.66} & 0.58 & 0.59 \\
7Sectors & Text & 4,556 & 0.00 & 7 & \colorbox{green}{0.27} & \colorbox{green}{0.26} & \textbf{0.39} & 0.26 & 0.34 \\
PROTEIN & Protein & 17,766 & 0.00 & 3 & \colorbox{green}{0.47} & \colorbox{green}{0.46} & \textbf{0.51} & 0.46 & 0.50 \\
KHAN & Gene & 83 & 0.06 & 4 & \colorbox{yellow}{0.59} & \colorbox{yellow}{0.59} & \textbf{0.61} & 0.59 & 0.60 \\
ROSETTA & Gene & 300 & 0.02 & 5 & \colorbox{yellow}{0.78} & \colorbox{yellow}{0.78} & \textbf{0.81} & 0.77 & 0.77 \\
WDBC & Medical & 683 & 0.01 & 2 & \colorbox{yellow}{0.65} & \colorbox{yellow}{0.65} & \textbf{0.70} & 0.65 & 0.65 \\
POLBLOGS & Social & 1,224 & 0.01 & 2 & \colorbox{yellow}{0.55} & \colorbox{yellow}{0.55} & \textbf{0.59} & 0.51 & NA \\
CITESEER & Citation & 3,312 & 0.00 & 6 & \colorbox{orange}{0.28} & \colorbox{orange}{0.29} & \textbf{0.49} & 0.25 & 0.44\\
SPECT & Astronomy & 267 & 0.02 & 3 & \colorbox{orange}{0.79} & \colorbox{orange}{0.80} & \textbf{0.84} & 0.79 & 0.79 \\
DIABETES & Medical & 768 & 0.01 & 2 & \colorbox{orange}{0.65} & \colorbox{orange}{0.67} & \textbf{0.74} & 0.65 & 0.65 \\
DUKE & Medical & 44 & 0.11 & 2 & \colorbox{orange}{0.64} & \colorbox{orange}{0.68} & \textbf{0.73} & 0.52 & 0.70 \\
IRIS & Biology & 150 & 0.03 & 3 & \colorbox{orange}{0.87} & \colorbox{orange}{0.90} & \textbf{0.97} & 0.67 & 0.93 \\
RCV1 & Text & 9,625 & 0.00 & 4 & \colorbox{orange}{0.35} & \colorbox{orange}{0.40} & \textbf{0.62} & 0.32 & 0.54 \\
CORA & Citation & 2,708 & 0.00 & 7 & \colorbox{orange}{0.33} & \colorbox{orange}{0.39} & \textbf{0.50} & 0.32 & 0.47 \\
CURETGREY & Image & 5,612 & 0.00 & 61& \colorbox{orange}{0.23} & \colorbox{orange}{0.29} & \textbf{0.33} & 0.22 & 0.28\\
SPAM & Email & 4,601 & 0.00 & 2 & \colorbox{orange}{0.64} & \colorbox{orange}{0.70} & \textbf{0.73} & 0.61 & 0.69 \\
GISETTE & Digit & 7,000 & 0.00 & 2 & \colorbox{orange}{0.87} & \colorbox{orange}{0.94} & \textbf{0.97} & 0.81 & 0.94 \\
WEBKB4 & Text & 4,196 & 0.00 & 4 & \colorbox{orange}{0.42} & \colorbox{orange}{0.53} & \textbf{0.66} & 0.40 & 0.63 \\
CANCER & Medical & 198 & 0.03 & 14& \colorbox{orange}{0.49} & \colorbox{orange}{\textbf{0.55}} & 0.54 & 0.45 & 0.54 \\
YALEB & Image & 1,292 & 0.00 & 38& \colorbox{orange}{0.44} & \colorbox{orange}{\textbf{0.54}} & 0.52 & 0.41 & 0.51 \\
COIL-20 & Image & 1,440 & 0.00 & 20& \colorbox{orange}{0.74} & \colorbox{orange}{\textbf{0.85}} & 0.78 & 0.82 & 0.81 \\
ECOLI & Protein & 327 & 0.02 & 5 & \colorbox{orange}{0.79} & \colorbox{orange}{\textbf{0.83}} & 0.81 & 0.81 & \textbf{0.83} \\
YEAST & Biology & 1,484 & 0.00 & 10& \colorbox{orange}{0.46} & \colorbox{orange}{0.53} & 0.54 & 0.47 & \textbf{0.55} \\
20NEWS & Text & 19,938 & 0.00 & 20& \colorbox{orange}{0.20} & \colorbox{orange}{0.49} & 0.62 & 0.16 & \textbf{0.63} \\
MED & Text & 1,033 & 0.00 & 31& \colorbox{orange}{0.50} & \colorbox{orange}{0.54} & 0.54 & 0.48 & \textbf{0.56} \\
REUTERS & Text & 8,293 & 0.00 & 65& \colorbox{orange}{0.60} & \colorbox{orange}{0.69} & 0.75 & 0.60 & \textbf{0.77} \\
ALPHADIGS & Digit & 1,404 & 0.00 & 6 & \colorbox{orange}{0.42} & \colorbox{orange}{0.48} & 0.48 & 0.46 & \textbf{0.51} \\
ORL & Face & 400 & 0.01 & 40& \colorbox{orange}{0.76} & \colorbox{orange}{0.82} & 0.76 & 0.78 & \textbf{0.83} \\
OPTDIGIT & Digit & 5,620 & 0.00 & 10& \colorbox{orange}{0.90} & \colorbox{orange}{0.93} & 0.91 & 0.90 & \textbf{0.98} \\
PIE & Face & 1,166 & 0.00 & 53& \colorbox{orange}{0.53} & \colorbox{orange}{0.66} & 0.62 & 0.51 & \textbf{0.74}\\
SEG & Image & 2,310 & 0.00 & 7 & \colorbox{orange}{0.54} & \colorbox{orange}{0.64} & 0.59 & 0.51 & \textbf{0.73} \\
UMIST & Face & 575 & 0.01 & 20& \colorbox{green}{0.74} & \colorbox{green}{0.71} & 0.67 & 0.67 & \textbf{0.74} \\
PENDIGITS & Digit & 10,992 & 0.00 & 10& \colorbox{green}{0.82} & \colorbox{green}{0.73} & 0.82 & 0.83 & \textbf{0.87}\\
SEMEION & Digit & 1,593 & 0.00 & 10& \colorbox{green}{0.86} & \colorbox{green}{0.82} & 0.77 & 0.81 & \textbf{0.94} \\
AMLALL & Medical & 38 & 0.13 & 2 & \colorbox{orange}{0.92} & \colorbox{orange}{\textbf{0.95}} & 0.94 & \textbf{0.95} & 0.92 \\
IONOSPHERE & Radar & 351 & 0.01 & 2 & \colorbox{yellow}{0.77} & \colorbox{yellow}{0.77} & \textbf{0.85} & \textbf{0.85} & 0.70 \\
\multicolumn{10}{l}{\bf Undirected data} \\
POLBOOKS & Social & 105 & 0.08 & 3 & 0.83 & \textbf{0.85} & \textbf{0.85} & 0.82 & 0.83 \\
KOREA & Social & 35 & 0.11 & 2 & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} & 0.71 & \textbf{1.00} \\
FOOTBALL & Sports & 115 & 0.09 & 12 & \textbf{0.94} & 0.93 & 0.90 & 0.93 & 0.93 \\
MIREX & Music & 3,090 & 0.00 & 10 & 0.21 & 0.24 & 0.27 & 0.12 & \textbf{0.43} \\
HIGHSCHOOL & Social & 60 & 0.10 & 5 & 0.82 & 0.85 & 0.83 & 0.82 & \textbf{0.95} \\
\bottomrule
\end{tabular}
\caption{Purity measures on the benchmark data sets of \cite{yang2012clustering} for the directed and undirected variants of our $K$-partition method, with and without semi-supervision, the $K$-means-only baseline, and the best result reported in \cite{yang2012clustering}. Cell colors indicate whether the directed variant is better than (green), tied with (yellow), or worse than (orange) the undirected variant; bold marks the best value in each row.}
\label{tab:purmeas}
\end{table}
\end{footnotesize}
We have discussed the output of a variety of experiments on a large number of data sets, but we also want to examine the dependence of the results on the $\epsilon$ parameter and on the percentage of nodes whose labels are provided in the energy \eqref{eqn:parEssl}. To that end, we consider the output purity measure for some representative data sets over a range of $\epsilon$ values and supervision percentages. In this case, we considered only the $K$-means initialization, for consistency and simplicity of comparison. For the $\epsilon$ sweep, we recall that we considered the range $\epsilon = {50 \nu}/{\| L \|_{{\rm Fro}}}$, where $\nu = e^{.2 \ell}$ with $-50 < \ell < 50$. In \cref{fig:epsweep} we show the variation in the purity measure with $\epsilon$ for a small graph ({\rm FOOTBALL}), a medium-sized graph ({\rm OPTDIGITS}), and a large graph ({\rm SEISMIC}). Similarly, in \cref{fig:persweep} we visualize how the results vary with the fraction of supervision (nodes with labels provided) under semi-supervised learning, for the same graphs, with $\nu = .6,.8,1.0,1.2,1.4,1.6,1.8$.
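For reference, the parameter grid just described can be generated directly; the following Python sketch uses a toy Laplacian as a stand-in for the data sets above:
\begin{verbatim}
import numpy as np

# Toy example: Laplacian of a small random graph (stands in for the data).
A = (np.random.rand(50, 50) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A

fro = np.linalg.norm(L, 'fro')          # ||L||_Fro, the natural scale
nus = np.exp(0.2 * np.arange(-49, 50))  # nu = e^{0.2 l} for -50 < l < 50
epsilons = 50.0 * nus / fro             # the family of epsilon values
lambdas = [0.0, 1e6]                    # semi-supervision off / on
\end{verbatim}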
\begin{figure}[t!]
\centering
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Football_EpsSweep-eps-converted-to.pdf}
\caption{Football}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Optdigits_EpsSweep-eps-converted-to.pdf}
\caption{Optdigits}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Seismic_EpsSweep-eps-converted-to.pdf}
\caption{Seismic}
\end{subfigure}
\caption{Purity measures for three selected data sets as a function of the scale parameter $\nu$. In all three panels, we observe a range of scales (on a log scale) over which the purity is stably nontrivial; in the left panel, there are two such ranges.}
\label{fig:epsweep}
\end{figure}
\begin{figure}[t!]
\centering
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Football_persweep-eps-converted-to.pdf}
\caption{Football}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Optdigits_persweep-eps-converted-to.pdf}
\caption{Optdigits}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Seismic_persweep-eps-converted-to.pdf}
\caption{Seismic}
\end{subfigure}
\caption{Purity measures on three selected data sets as a function of the fraction of supervision (nodes with labels provided) under semi-supervised learning. We observe that supervision can either consistently help (as in the right panel) or have inconsistent effects (as in the left and middle panels). One possible explanation for this is that there may be multiple clustering structures present in the data, and it takes a lot of supervision to force the partitioner to switch to a partition aligned with the metadata indicated by the supervision, rather than a different clustering structure that is better from the perspective of the optimizer.}
\label{fig:persweep}
\end{figure}
\section{Discussion}
\label{s:disc}
Throughout our study we emphasize that our methodology operates fundamentally on the possibly directed nature of the underlying graph data. Considering the Index of Complex Networks \cite{ICON} as a representative collection of widely studied networks, we note that (as of our writing here) 327 of the 698 entries in the Index contain directed data. Whereas there are undoubtedly settings where one can ignore edge direction, there are inevitably others where respecting direction is essential. By formulating a strategy for subgraph detection and graph partitioning inherently built on processes running on the directed graph, we avoid the need for any \emph{post hoc} modifications to try to respect directed edges. In particular, our method nowhere relies on a corresponding undirected version of the graph, avoiding the possible loss of information incurred by symmetrizing.
While we expect that our formulation of escape times can be useful in general, including for undirected graphs, our proper treatment of directed graph data should prove especially useful. For example, the directed follower-versus-following nature of some online social networks (e.g., Twitter) is undoubtedly important for understanding the processes involved in the viral spread of (mis)information. As shown by \cite{Weng_Menczer_Ahn_2013} (and extended by \cite{li2019infectivity}), community structure is particularly important for identifying the virality of memes, specifically because a meme that ``escapes'' (in our present language) its subgraph of origin is typically more likely to continue to propagate. Another application where directed escape times could be relevant is in detecting the (hidden) circulation of information, currency, and resources that is part of coordinated adversarial activity, as explored for example in~\cite{jin2019noisy,moorman2018filtering,sussman2020matched}.
To close, we highlight two related thematic areas for possible future work that we believe would lead to important extensions on the methods presented here.
\subsection{Connection to distances on directed graphs}
In previous work of the present authors with Jonathan Weare \cite{boyd2020metric}, we constructed a symmetrized distance function on the vertices of a directed graph. We briefly recall the details here; the construction is based in part on the hitting probability matrix used in umbrella sampling \cite{dinner2017stratification,Thiede_2015}. For a general probability transition matrix $P$, we denote by $\phi$ the Perron (invariant) eigenvector, satisfying
\[
P^\top \phi = \phi.
\]
Let us define a matrix $M$ such that $M_{ij} = \prob_i [\tau_j < \tau_i]$, where $\prob_i [\tau_j < \tau_i]$ is the probability that, starting from site $i$, the hitting time of $j$ is less than the time it takes to return to $i$. Let $X(t)$ be the Markov chain with transition matrix $P$. Then it can be observed \cite{dinner2017stratification,Thiede_2015} that
\[
\prob_i [\tau_j < \tau_i] \phi_i = \prob_j [\tau_i < \tau_j] \phi_j,
\]
where $\prob_i$ denotes probability conditioned on $X(0) = i$.
This means that, from these hitting probabilities, one can construct a symmetric adjacency matrix,
\begin{equation}
\label{Aht}
A^{(hp)}_{ij} = \frac{ \sqrt{\phi_i} }{ \sqrt{\phi_j} } \prob_i [\tau_j < \tau_i] = A^{(hp)}_{ji}\,.
\end{equation}
This adjacency matrix has built-in edge weights based upon hitting probabilities, and we can then easily partition it using our symmetric algorithms, in particular the fast mean exit time algorithm developed here.
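To make this construction concrete, the following is a minimal (and deliberately unoptimized) Python/NumPy sketch of assembling $A^{(hp)}$ from a transition matrix $P$; it is an illustration of the formulas above rather than the reference implementation of \cite{boyd2020metric}:
\begin{verbatim}
import numpy as np

def hitting_prob_adjacency(P):
    """Symmetrized hitting-probability adjacency A^(hp) for a transition matrix P.

    P is assumed to be the row-stochastic transition matrix of an irreducible
    chain on a small graph.  M[i, j] = Prob_i[tau_j < tau_i] is computed by a
    first-step decomposition and one linear solve per pair (i, j).
    """
    n = P.shape[0]
    # Perron (invariant) eigenvector: left eigenvector of P for eigenvalue 1.
    w, V = np.linalg.eig(P.T)
    phi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    phi = np.abs(phi) / np.abs(phi).sum()

    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # h[k] = Prob_k[tau_j < tau_i], with boundary values h[j]=1, h[i]=0.
            free = [k for k in range(n) if k not in (i, j)]
            h = np.zeros(n)
            h[j] = 1.0
            if free:
                A = np.eye(len(free)) - P[np.ix_(free, free)]
                h[free] = np.linalg.solve(A, P[free, j])
            M[i, j] = P[i] @ h   # one step from i, then hit j before returning to i
    A_hp = np.sqrt(np.outer(phi, 1.0 / phi)) * M
    return 0.5 * (A_hp + A_hp.T)   # exactly symmetric up to round-off
\end{verbatim}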
The distance function of \cite{boyd2020metric}, which we refer to as the \emph{hitting probability pseudo-metric}, is the map $d \colon [n] \times [n] \to \mathbb R$ given by
\begin{equation} \label{e:Dist}
d (i,j) = - \log \left( A^{(hp)}_{ij} \right).
\end{equation}
This is in general only a pseudo-metric, as distinct nodes can be at distance $0$ from one another; however, there exists a quotient graph on which $d$ is a genuine metric.
Indeed, a family of such (pseudo-)metrics is given in \cite{boyd2020metric}, corresponding to different choices of the normalization in \cref{Aht} by powers of the invariant measure.
A natural question to pursue is whether first processing the directed network with this approach to create the symmetrized matrix $A^{(hp)}$, and then applying our clustering scheme, detects graph structure more robustly. In particular, comparing our clustering scheme to $K$-means applied directly to the distance structure is an important direction for future study.
\subsection{Continuum Limits}
The methods presented here have a clear analog in the continuum setting, related to the motivating continuum problems discussed in the introduction. The primary continuum problem is related to the landscape function, or torsion function, on a sub-domain prescribed with Dirichlet boundary conditions,
\begin{align}
-\Delta u_S = 1_S, \ \ u_S |_{\partial S} = 0.
\end{align}
The solution $u_S$ is known as the mean exit time from the set $S$ of a standard Brownian motion; see \cite{pavliotis2014stochastic}, Chapter $7$. Correspondingly, for a domain $\Omega$ with Neumann boundary conditions (to more closely parallel the graph setting) and some $0 < \alpha < 1$, we propose the following optimization
\begin{align}
\max_{S \subset \Omega, |S| = \alpha |\Omega|} \int_S u_S\, dx\,,
\end{align}
meaning that we wish to maximize the exit time of a random walker from a given sub-domain. Through the Poisson formula for the mean exit time (equivalently, integrating $-\Delta u_S = 1_S$ against $u_S$), we have that $\int_S u_S\,dx = (- \Delta u_S, u_S)$, allowing us to frame the problem similarly via a Ginzburg--Landau-like penalty term for membership in the set $S$,
$$
\min_{ \substack{0\leq \phi \leq 1 \\ \int \phi = \alpha |\Omega| }} \ \min_{ \int u = 1 } \frac12 (- \Delta u, u) + \frac{1}{2 \epsilon} \langle u, (1-\phi) u \rangle.
$$
Analysis of optimizers of such a continuum problem, and their use in finding sub-domains and domain partitions, is an important direction for future study.
Related results in a continuum setting have been studied for instance in \cite{briancon2004regularity,buttazzo1993existence}, but the regularization of this problem seems to be new and connects the problem through the inverse of the Laplacian to the full domain and its boundary conditions. Following works such as \cite{osting2017consistency,singer2017spectral,trillos2016continuum,trillos2018variational,trillos2016consistency,YUAN_2021}, an interesting future direction would be to prove consistency of our algorithm to these well-posed continuum optimization problems.
\bibliographystyle{amsplain}
\section{Introduction}
Feature matching is a key component in many 3D vision applications such as structure from motion (SfM) or simultaneous localization and mapping (SLAM).
Conventional pose estimation is a multi-step process: feature detection finds interest points, for which local descriptors are computed. Based on the descriptors, pairs of keypoints from different images are matched,
which defines constraints in the pose optimization.
A major challenge lies in the ambiguity of matching local descriptors by nearest-neighbor search, which is error-prone, particularly in texture-less areas or in the presence of repetitive patterns.
Hand-crafted heuristics or outlier filters become necessary to circumvent this problem to some degree.
Recent learning-based approaches~\cite{Sarlin2020SuperGlueLF,Sun2021LoFTRDL,Jiang2021COTRCT}
instead leverage the greater image context to address the matching difficulty, e.g.,
SuperGlue~\cite{Sarlin2020SuperGlueLF} introduces a graph neural network (GNN) for descriptor matching on an image pair.
Graph edges connect keypoints from arbitrary locations
and enable reasoning in a broad context, leading to globally well
informed solutions compared to convolutional neural networks (CNN) with limited receptive field.
The receptive field in SuperGlue, however, remains limited by the two-view setup, even though more images are typically available in pose estimation tasks.
Our idea is to further facilitate information flow by joining
multiple views into the matching process. This way, we allow multi-view correlation to strengthen geometric reasoning and confidence prediction.
Joint matching of multiple images integrates well into pose estimation pipelines, as they
typically solve for more than two cameras.
Additionally, we note that accurate feature matching, in and of itself, does not necessarily give rise to accurate pose estimation, as the spatial distribution of feature matches is essential for robust pose optimization.
For instance, perfectly precise matches may form a degenerate case (e.g., lying on a line) and thus have no value for pose optimization.
In addition, confidence scores predicted by matching networks do not necessarily reflect the value of matches towards pose optimization.
Feature matching and pose estimation are thus tightly coupled problems, for which we propose a joint solution:
We encode
keypoints and descriptors from multiple images to construct a graph, where self-attention provides context awareness within the same image and cross-attention enables reasoning with respect to all other images. A GNN predicts matches along with confidence weights, which define constraints on the camera poses that we optimize with a differentiable Gauss-Newton solver. The GNN is trained end-to-end using gradients from the pose optimization. From this feedback, the network learns to produce valuable matches for pose estimation and thereby
learns effective outlier rejection.
We evaluate our method on the ScanNet~\cite{Dai2017ScanNetR3}, Matterport3D~\cite{Chang2017Matterport3DLF} and MegaDepth~\cite{Li2018MegaDepthLS} datasets and show that it improves over prior work on learned feature matching.
In summary, we demonstrate that a joint approach to feature matching and pose estimation benefits both matching and pose accuracy, enabled by the following contributions:
\begin{itemize}
\item We propose a multi-view graph attention network to learn feature matches simultaneously across multiple frames.
\item We introduce an end-to-end trainable pose estimation that both guides confidence weights of feature matches in an unsupervised fashion and backpropagates gradients to inform the graph-matching network.
\end{itemize}
\section{Related Work}
\subsubsection{Conventional Feature Matching.}
The classical feature matching pipeline comprises the following steps: 1) interest point detection, 2) feature description, 3) matching through nearest neighbor search in descriptor space, and 4) outlier filtering. In this pipeline, hand-crafted features like SIFT~\cite{LoweDavid2004DistinctiveIF} and ORB~\cite{Rublee2011ORBAE} are very successful and have been widely used for many years. However, they tend to struggle with appearance or viewpoint changes.
Starting with LIFT~\cite{Yi2016LIFTLI}, learning-based descriptors have been developed to tackle these challenges~\cite{Ono2018LFNetLL,Dusmanu2019D2NetAT,Revaud2019R2D2RA,Bhowmik2020ReinforcedFP,Tyszkiewicz2020DISKLL}. They often combine interest point detection and description, such as SuperPoint \cite{DeTone2018SuperPointSI}, which we use for our method.
Nearest neighbor feature matching is prone to outliers, making post-processing methods indispensable. This includes mutual check, ratio test \cite{LoweDavid2004DistinctiveIF}, neighborhood consensus \cite{Tuytelaars2000WideBS,Cech2008EfficientSC,Cavalli2020HandcraftedOD,Bian2017GMSGM,Ma2018LocalityPM} and sampling based outlier rejection~\cite{Fischler1981RandomSC,Barth2019MAGSACMS,Raguram2008ACA}.
Learning-based approaches have also addressed outlier detection as a classification task~\cite{Yi2018LearningTF,Ranftl2018DeepFM,Brachmann2019NeuralGuidedRL,Zhang2019LearningTC}. These methods rely on reasonable matching proposals and lack visual information in their decision process.
\subsubsection{Learning-based Feature Matching.}
Recent approaches employ neural networks for feature matching on image pairs. There are methods that determine dense, pixel-wise correspondences
with confidence estimates for filtering~\cite{Rocco2018NeighbourhoodCN,Rocco2020EfficientNC,Li2020DualResolutionCN}. This effectively combines steps (1)-(3) from the classical matching pipeline. However, these methods suffer from the limited receptive field of CNNs and fail to distinguish regions of little texture or repetitive structure, due to missing global context.
In contrast, SuperGlue \cite{Sarlin2020SuperGlueLF} represents a sparse matching network that operates on keypoints with descriptors. Using an attention-based GNN~\cite{Vaswani2017AttentionIA}, all keypoints interact; hence the receptive field spans both images, leading to accurate matches in wide-baseline settings. Inspired by the success of GNN-based feature matching, we build upon SuperGlue by further extending its receptive field through multi-view matching and by improving
outlier filtering through end-to-end training with pose optimization.
LoFTR \cite{Sun2021LoFTRDL} recently proposed a detector-free approach that processes CNN features in a coarse-to-fine manner. Combined with attention, it likewise achieves a receptive field spanning the image pair and high-quality matches.
COTR \cite{Jiang2021COTRCT}, like LoFTR, operates on images directly
in a coarse-to-fine fashion. It is a transformer network that predicts for a query point in one image the correspondence in a second image. This way, it considers the global context; however, inference for a complete image pair takes tens of seconds.
We show that our multi-view, end-to-end approach performs better than
SuperGlue and the detector-free methods LoFTR and COTR.
\subsubsection{Pose Optimization.}
Once matches between a set of images are found, poses can be optimized using a bundle adjustment formulation~\cite{triggs1999bundle}.
The optimization can be applied to a set of RGB images~\cite{agarwal2011building} or lifted to the RGB-D case, if depth data is available from range sensors~\cite{dai2017bundlefusion}.
The resulting optimization problems typically lead to non-linear least squares formulations which are optimized using non-linear solvers such as Gauss-Newton or Levenberg-Marquardt.
The pipeline in these methods usually performs feature matching as a pre-process; i.e., correspondences are established first and then filtered with a combination of RANSAC and robust optimization techniques~\cite{zach2014robust,Choi_2015_CVPR}.
However, feature matching and pose optimization largely remain separate steps and cannot inform each other.
To this end, differentiable optimization techniques have been proposed for pose estimation, such as DeMoN~\cite{ummenhofer2017demon}, BA-Net~\cite{tang2018ba}, RegNet~\cite{han2018regnet}, or 3DRegNet~\cite{pais20203dregnet}.
The core idea of these methods is to obtain gradients through the pose optimizations that in turn guide the construction of learned feature descriptors.
In comparison to treating feature extraction as a separate step, feature descriptors are now learned with the objective to obtain well-aligned global poses instead of just trying to get good pair-wise matches.
In our work, we go a step further and focus on learning how to match features rather than using a pre-defined matching method.
As a result, we can leverage differentiable pose optimization to provide gradients for our newly-proposed multi-view graph attention network for feature matching, and achieve significantly improved pose estimation results.
\section{Method}
Our method associates keypoints from $N$ images $\{I_n\}^{N}_{n=1}$, such that resulting matches and confidence weights are particularly valuable for estimating the corresponding camera poses $\{\mathbf{p}_n\}^{N}_{n=1}$, $\mathbf{p}_n \in \mathbb{R}^6$.
Keypoints are represented by their image coordinates $\mathbf{x} \in \mathbb{R}^2$, visual descriptors $\mathbf{d} \in \mathbb{R}^D$ and a confidence score $c \in [0, 1]$.
We use the SuperPoint network for feature detection and description, as it has been shown to perform well in combination with learned feature matching \cite{DeTone2018SuperPointSI,Sarlin2020SuperGlueLF}. The source of input descriptors, however, is flexible; for instance, the use of conventional descriptors, such as SIFT \cite{LoweDavid2004DistinctiveIF}, is also possible.
Our pipeline, as shown in \cref{fig:pipeline}, ties together feature matching and pose optimization: we employ a GNN to associate keypoints across multiple images (\cref{ssec:multi_view_matching}). The resulting matches and confidence weights define constraints in the subsequent pose optimization (\cref{ssec:pose_optimization}), which is differentiable, thus enabling end-to-end training (\cref{ssec:end2end_training}).
\subsection{Graph Attention Network for Multi-View Matching}
\label{ssec:multi_view_matching}
\subsubsection{Motivation.}
In the multi-view matching problem of $N$ images, each keypoint matches to at most $N - 1$ other keypoints, where each of the matching keypoints has to come from a different input image.
Without knowing the transformations between images, one keypoint can match to any keypoint location in the other images. Hence, all keypoints in the other images need to be considered as matching candidates. Although keypoints from the same image are not matching candidates, they contribute valuable constraints in the assignment problem, e.g., their projection into other images must follow consistent transformations. The matching problem can be represented as a graph, where nodes model keypoints and edges their relationships.
A GNN architecture reflects this structure and enables learning the complex relations between keypoints to determine feature matches. The iterative message passing process enables the search for globally optimal matches as opposed to a greedy local assignment.
On top of that, attention-based message aggregation allows each keypoint to focus on information from the keypoints that provide the most insight for its assignment.
We build upon the SuperGlue architecture, which introduces an attention-based GNN for descriptor matching between image pairs \cite{Sarlin2020SuperGlueLF}. Our extension to multi-image matching is motivated by the following considerations: first, graph-based reasoning can benefit from tracks that are longer than two keypoints---i.e., a match becomes more confident if multiple views agree on the keypoint similarity and its coherent location with respect to the other keypoints in each frame.
In particular, with regards to robust pose optimization, it is crucial to facilitate this information flow and boost the confidence prediction.
Second, pose estimation or SLAM systems generally consider multiple input views.
With the described graph structure, jointly
matching $N$ images is more efficient in terms of GNN messages than matching the corresponding image pairs individually, as detailed in the following paragraph.
\subsubsection{Graph Construction.}
Each keypoint represents a graph node. The initial node embedding ${}^{(1)}\mathbf{f}_i$ of keypoint $i$ is computed from its image coordinates $\mathbf{x}_i$, confidence $c_i$ and descriptor $\mathbf{d}_i$ (\cref{eq:node_embedding}). This allows the GNN to consider spatial location, certainty and visual appearance in the matching process:
\begin{equation}
{}^{(1)}\mathbf{f}_i = \mathbf{d}_i + F_\mathrm{encode}\left(\left[\mathbf{x}_i \mathbin\Vert c_i\right]\right),
\label{eq:node_embedding}
\end{equation}
where $\mathbin\Vert$ denotes concatenation and $F_{\mathrm{encode}}$ is a multilayer perceptron (MLP) that lifts the image point and its confidence into the high-dimensional space of the descriptor. Such positional encoding helps the spatial learning \cite{Sarlin2020SuperGlueLF,Gehring2017ConvolutionalST,Vaswani2017AttentionIA}.
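A minimal PyTorch sketch of this encoder (\cref{eq:node_embedding}) is given below; the hidden width and number of layers are illustrative assumptions rather than the exact architecture:
\begin{verbatim}
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    """Lift keypoint position and confidence into descriptor space."""
    def __init__(self, descriptor_dim=256, hidden_dim=32):
        super().__init__()
        # F_encode: MLP from [x ; c] (3 values) to the descriptor dimension.
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, descriptor_dim))

    def forward(self, x, c, d):
        # x: (K, 2) image coordinates, c: (K,) confidences, d: (K, D) descriptors
        f = d + self.mlp(torch.cat([x, c.unsqueeze(-1)], dim=-1))
        return f   # initial node embeddings ^(1)f_i
\end{verbatim}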
The graph nodes are connected by two kinds of edges: self-edges connect keypoints within the same image. Cross-edges connect keypoints from different images (\cref{fig:graph_edges}). The edges are undirected, i.e., information flows in both directions. \cref{tab:gnn_messages} shows that jointly matching $N$ images reduces the number of GNN messages compared to separately matching the corresponding $P=\sum_{n=1}^{N-1}n$ pairs. The savings result from fewer intra-frame messages between keypoints of the same image, e.g., for five images with $K$ keypoints each, pairwise matching involves $20K^2$ messages on a self-layer and $20K^2$ on a cross-layer---joint matching requires only $5K^2$ and $20K^2$, respectively.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth,trim={4.8cm 9.9cm 4.3cm 5.7cm},clip]{figures/graph_edges.pdf}
\caption{Self- and cross-edges connected to a node $i$.}
\vspace{-0.4cm}
\label{fig:graph_edges}
\end{figure}
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\begin{center}
\caption{Number of GNN messages per layer for matching $N$ images, each with $K$ keypoints, as $P$ individual image pairs versus joint matching in a single graph.}
\label{tab:gnn_messages}
\begin{tabular}{lccc}
\toprule
& Messages along self-edges & Messages along cross-edges \\
\midrule
Pairwise matching & $2PK^2$ & $N(N-1)K^2$ \\
Joint matching & $NK^2$ & $N(N-1)K^2$ \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.6cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsubsection{Message Passing.}
Interaction between keypoints---the graph nodes---is realized through message passing \cite{Duvenaud2015ConvolutionalNO,Gilmer2017NeuralMP}. The goal is to achieve a state where node descriptors of matching keypoints are close in descriptor space, whereas those of unrelated keypoints are far apart. The GNN has $L$ layers, where each layer $\ell$ corresponds to a message exchange between keypoints. The layers alternate between updates along self-edges $\mathcal{E}_{\mathrm{self}}$ and cross-edges $\mathcal{E}_{\mathrm{cross}}$---starting with an exchange along self-edges in layer $\ell=1$ \cite{Sarlin2020SuperGlueLF}. \cref{eq:node_update} describes the iterative node descriptor update, where ${}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}$ is the aggregated message from all keypoints that are connected to keypoint $i$ by an edge in $\mathcal{E} \in \{\mathcal{E}_{\mathrm{self}}, \mathcal{E}_{\mathrm{cross}}\}$. ${}^{(\ell)}F_{\mathrm{update}}$ is an MLP; each GNN layer $\ell$ has a separate set of network weights.
\begin{equation}
{}^{(\ell+1)}\mathbf{f}_i = {}^{(\ell)}\mathbf{f}_i + {}^{(\ell)}F_{\mathrm{update}}\left(\left[{}^{(\ell)}\mathbf{f}_i \mathbin\Vert {}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}\right]\right)
\label{eq:node_update}
\end{equation}
Multi-head attention \cite{Vaswani2017AttentionIA} is used to merge all incoming information for keypoint $i$ into a single message ${}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}$ \cite{Sarlin2020SuperGlueLF}.
Messages along self-edges are combined by self-attention between the keypoints of the same image, messages along cross-edges by cross-attention between the keypoints from all other images.
Linear projection of node descriptors is used to compute the query ${}^{(\ell)}\mathbf{q}_i$ of query keypoint $i$, as well as the keys ${}^{(\ell)}\mathbf{k}_j$ and values ${}^{(\ell)}\mathbf{v}_j$ of its source keypoints $j$:
\begin{align}
{}^{(\ell)}\mathbf{q}_i &= {}^{(\ell)}\mathbf{W}_1 {}^{(\ell)}\mathbf{f}_i + {}^{(\ell)}\mathbf{b}_1 ,
\label{eq:query} \\
\begin{bmatrix}
{}^{(\ell)}\mathbf{k}_j \\ {}^{(\ell)}\mathbf{v}_j
\end{bmatrix}
&=
\begin{bmatrix}
{}^{(\ell)}\mathbf{W}_2 \\ {}^{(\ell)}\mathbf{W}_3
\end{bmatrix}
{}^{(\ell)}\mathbf{f}_j +
\begin{bmatrix}
{}^{(\ell)}\mathbf{b}_2 \\ {}^{(\ell)}\mathbf{b}_3
\end{bmatrix}.
\label{eq:key_val}
\end{align}
The set of source keypoints $\{j : (i, j) \in \mathcal{E}\}$ comprises all keypoints connected to $i$ by an edge of the type that is relevant to the current layer. $\mathbf{W}$ and $\mathbf{b}$ are per-layer weight matrices and bias vectors, respectively.
For each source keypoint the similarity to the query is computed by the dot product ${}^{(\ell)}\mathbf{q}_i\cdot{}^{(\ell)}\mathbf{k}_j$. The softmax over the similarity scores determines the attention weight $\alpha_{ij}$ of each source keypoint $j$ in the aggregated message to $i$:
\begin{equation}
{}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i} = \sum_{j : (i, j) \in \mathcal{E}} {}^{(\ell)}\alpha_{ij}{}^{(\ell)}\mathbf{v}_j .
\end{equation}
It is important to note that in layers that update along cross-edges, the source keypoints $j$ for a query keypoint $i$ come from multiple images. The softmax-based weighting is robust to a variable number of input views and hence to a variable number of keypoints.
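For illustration, a single-head version of this attention-based aggregation and node update (\cref{eq:node_update,eq:query,eq:key_val}) can be sketched as follows; we omit multiple heads and batching for readability, so this is an illustrative simplification rather than the exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionalPropagation(nn.Module):
    """Single-head sketch of the message m_{E->i} and the node update."""
    def __init__(self, dim=256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # W_1, b_1
        self.k_proj = nn.Linear(dim, dim)   # W_2, b_2
        self.v_proj = nn.Linear(dim, dim)   # W_3, b_3
        self.update = nn.Sequential(        # F_update
            nn.Linear(2 * dim, 2 * dim), nn.ReLU(),
            nn.Linear(2 * dim, dim))

    def forward(self, f_query, f_source):
        # f_query: (Kq, dim) nodes being updated; f_source: (Ks, dim) nodes
        # connected to them by the edge type (self or cross) of this layer.
        q = self.q_proj(f_query)
        k = self.k_proj(f_source)
        v = self.v_proj(f_source)
        alpha = torch.softmax(q @ k.T, dim=-1)   # attention weights alpha_ij
        m = alpha @ v                            # aggregated messages m_{E->i}
        return f_query + self.update(torch.cat([f_query, m], dim=-1))
\end{verbatim}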
After $L$ message passing iterations the node descriptors for subsequent assignment are retrieved by linear projection:
\begin{equation}
\mathbf{f}_i = \mathbf{W}_4 {}^{(L+1)}\mathbf{f}_i + \mathbf{b}_4 .
\label{eq:final_proj}
\end{equation}
\subsubsection{Partial Assignment.}
SuperGlue \cite{Sarlin2020SuperGlueLF} addresses the partial assignment problem between keypoints of two images, $I_1$ and $I_2$, where each keypoint either obtains a match in the other image or remains unmatched. A score matrix $\mathbf{S} \in \mathbb{R}^{(K_1+1)\times (K_2+1)}$ is defined, where $K_1$ and $K_2$ are the number of keypoints in the images, hence all potential matches and the unmatched option are represented. The elements $\mathbf{S}_{i,j}$ are filled with the dot-product similarity of the final node descriptors $\mathbf{f}_{1,i} \cdot \mathbf{f}_{2,j}$, where $\mathbf{f}_{1,i}$ is from $I_1$ and $\mathbf{f}_{2,j}$ from $I_2$.
The last row and column of $\mathbf{S}$, representing unmatched, are initialized with a trainable parameter $q \in \mathbb{R}$. The differentiable Sinkhorn algorithm \cite{Sinkhorn1967ConcerningNM,Cuturi2013SinkhornDL} optimizes for a soft assignment matrix $\mathbf{P} \in [0, 1]^{(K_1+1)\times (K_2+1)}$ that maximizes the sum of scores $\sum_{r,c}\mathbf{S}_{r,c}\mathbf{P}_{r,c}$ while obeying constraints on the number of matches:
\begin{equation}
\mathbf{P}\mathbf{1}_{K_2+1} =
\begin{bmatrix}
\mathbf{1}_{K_1}^{\top} & K_2
\end{bmatrix}^{\top} \quad \text{and} \quad
\mathbf{P}^\top\mathbf{1}_{K_1+1} =
\begin{bmatrix}
\mathbf{1}_{K_2}^{\top} & K_1
\end{bmatrix}^{\top}.
\label{eq:sinkhorn_constraints}
\end{equation}
We adopt this approach and apply it pairwise to the images in the multi-view setting. $\mathcal{P}$ is the set of all possible image pairs from $\{I_n\}_{n=1}^N$, excluding pairs between identical images, as well as pairs that are a permutation of an existing pair. For each pair $(a,b) \in \mathcal{P}$, where $a,b \in \{1,2,\dots,N\}$, a score matrix $\mathbf{S}_{ab}$ is created and the assignment $\mathbf{P}_{ab}$ is computed by means of Sinkhorn algorithm. From $\mathbf{P}_{ab}$ the set of matches $\mathcal{M}_{ab}$ is derived: first, a candidate match for each keypoint in $I_a$ and $I_b$ is determined by the row-wise and column-wise maximal elements of $\mathbf{P}_{ab}$. Second, mutual agreement of matching keypoints is enforced.
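For illustration, the assignment for one image pair can be sketched as follows; this plain Sinkhorn iteration over the augmented score matrix is a simplification of the numerically more stable log-domain variant used in practice \cite{Sarlin2020SuperGlueLF}, and the iteration count is an assumption:
\begin{verbatim}
import torch

def sinkhorn_assignment(S, iters=100):
    """Soft partial assignment for an augmented score matrix S.

    S: (K1+1, K2+1) scores whose last row/column represent 'unmatched'.
    Returns P = diag(u) exp(S) diag(v) obeying the marginal constraints
    above: each real keypoint carries total mass 1 and the dustbin
    row/column absorb the remaining mass.
    """
    K1, K2 = S.shape[0] - 1, S.shape[1] - 1
    a = torch.cat([torch.ones(K1), torch.tensor([float(K2)])])  # target row sums
    b = torch.cat([torch.ones(K2), torch.tensor([float(K1)])])  # target col sums
    K = torch.exp(S)
    u = torch.ones(K1 + 1)
    v = torch.ones(K2 + 1)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)
\end{verbatim}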
\subsection{Differentiable Pose Optimization}
\label{ssec:pose_optimization}
We introduce a differentiable optimizer $\Omega$ that jointly estimates all camera poses from the matches determined by the partial assignment:
\begin{equation}
\{\mathbf{p}_n\}_{n=1}^{N} = \Omega(\{\mathcal{M}_{ab}:(a,b) \in \mathcal{P}\}, \{Z_n\}_{n=1}^{N}) .
\label{eq:optimizer}
\end{equation}
To stabilize the optimization, we use the depth maps $\{Z_n\}_{n=1}^{N}$ as additional input. Without depth measurements or good pose initialization, the optimization of bundle adjustment formulations is prone to fall into local minima.
We define the energy as weighted sum of squared errors between matches in world coordinates (\cref{eq:energy}). A match consists of the image coordinates, $\mathbf{x}_a$ in $I_a$ and $\mathbf{x}_b$ in $I_b$, as well as the matching confidence $w$, i.e., the corresponding element from the assignment $\mathbf{P}_{ab}$. The function $\pi_n^{-1}(\mathbf{x}_n, Z_n)$ unprojects an image point $\mathbf{x}_n$ in $I_n$ to homogeneous camera coordinates using its depth from $Z_n$. ${\mathbf{T}_{\mathbf{p}_n} \in \mathbb{R}^{3\times4}}$ defines the transformation from camera pose $\mathbf{p}_n$ to world coordinates. $\mathbf{p} \in \mathbb{R}^{6N}$ refers to the concatenation of all pose vectors, which are in $\mathfrak{se}(3)$ coordinates, i.e., three translation elements followed by three rotation elements.
\begin{align}
E(\mathbf{p}) = \sum_{(a,b) \in \mathcal{P}} \; \sum_{(\mathbf{x}_a,\mathbf{x}_b,w) \in \mathcal{M}_{ab}} w^2 \left\Vert \mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a - \mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b \right\Vert_2^2&,
\label{eq:energy} \\
\text{where} \quad \mathbf{y}_a=\pi_a^{-1}(\mathbf{x}_a, Z_a) \quad \text{and} \quad \mathbf{y}_b=\pi_b^{-1}(\mathbf{x}_b, Z_b)&.
\end{align}
Gauss-Newton is used to minimize the energy with respect to the camera poses. For this purpose, a residual vector $\mathbf{r} \in \mathbb{R}^{3M}$ is created from the energy terms, where $M$ is the total number of matches between all images. Each match $m$ fills its corresponding subvector $\mathbf{r}_m \in \mathbb{R}^3$:
\begin{equation}
\mathbf{r}_m = w(\mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a - \mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b).
\end{equation}
All poses are initialized to $\mathbf{0}$. We keep one pose fixed, which defines the world frame, and optimize for the remaining poses $\bar{\mathbf{p}}\in \mathbb{R}^{6(N-1)}$. The Jacobian matrix $\mathbf{J}\in \mathbb{R}^{3M\times6(N-1)}$ is initialized to $\mathbf{0}$ and filled with the partial derivatives with respect to the pose parameters: for each match $m$ the corresponding blocks $\mathbf{J}_{ma}, \mathbf{J}_{mb} \in \mathbb{R}^{3\times6}$ are assigned \cite{Blanco}:
\begin{equation}
\mathbf{J}_{ma}=\frac{\partial \mathbf{r}_m}{\partial \mathbf{p}_a}=w\begin{bmatrix}
\mathbf{I}_3 & \: & -\left(\mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a\right)^\wedge
\end{bmatrix} \enspace , \enspace
\mathbf{J}_{mb}=\frac{\partial \mathbf{r}_m}{\partial \mathbf{p}_b}=w\begin{bmatrix}
-\mathbf{I}_3 & \: & \left(\mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b\right)^\wedge
\end{bmatrix}.
\label{eq:jac}
\end{equation}
$\mathbf{I}_3$ is a $3\times3$ identity matrix and $(\cdot)^\wedge$ maps a vector $\in \mathbb{R}^3$ to its skew-symmetric matrix: $\scriptsize \begin{bmatrix}
x \\ y \\ z
\end{bmatrix}\rightarrow\begin{bmatrix}
0&-z&y\\
z&0&-x\\
-y&x&0
\end{bmatrix}$.
If $a$ or $b$ identify the fixed pose, the corresponding assignment to $\mathbf{J}$ is skipped.
Using the current state of the camera poses, each Gauss-Newton iteration establishes a linear system that is solved for the pose update $\mathrm{\Delta} \bar{\mathbf{p}}$ using LU decomposition:
\begin{equation}
\mathbf{J}^\top\mathbf{J} \mathrm{\Delta} \bar{\mathbf{p}}=-\mathbf{J}^\top\mathbf{r}.
\label{eq:gn_update}
\end{equation}
We update the poses in $T=10$ Gauss-Newton iterations, from which the set of poses with minimal energy is used for end-to-end training in \cref{ssec:end2end_training}.
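For illustration, a single update of \cref{eq:gn_update} can be written as follows; the sketch assumes that $\mathbf{J}$ and $\mathbf{r}$ have been assembled as described above, and the helper \texttt{assemble\_system} in the commented outer loop is hypothetical:
\begin{verbatim}
import torch

def gauss_newton_step(J, r, poses_free):
    """One Gauss-Newton update for the free poses.

    J: (3M, 6(N-1)) Jacobian, r: (3M,) residual vector,
    poses_free: (6(N-1),) current parameters of the non-fixed poses.
    Solves J^T J delta = -J^T r and applies the update.
    """
    delta = torch.linalg.solve(J.T @ J, -(J.T @ r))   # LU-based dense solve
    return poses_free + delta

# Outer loop sketch (T = 10), keeping the iterate with minimal energy:
# for t in range(10):
#     J, r = assemble_system(poses_free, matches, weights)  # hypothetical helper
#     poses_free = gauss_newton_step(J, r, poses_free)
\end{verbatim}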
\subsection{End-to-End Training}
\label{ssec:end2end_training}
The learnable parameters include the GNN parameters and the parameter $q$ of the partial assignment module.
The whole pipeline, from the matching network to the pose optimization, is differentiable, which allows for a pose loss that guides the matching network to produce valuable matches and accurate confidences for robust pose optimization. The training objective $\mathcal{L}$ consists of a matching term $\mathcal{L}_{\mathrm{match}}$ \cite{Sarlin2020SuperGlueLF} and a pose term $\mathcal{L}_{\mathrm{pose}}$, which are balanced by the factor $\lambda$:
\begin{align}
\mathcal{L}&=\sum_{(a,b)\in \mathcal{P}}\mathcal{L}_{\mathrm{match}}(a,b)+\lambda \mathcal{L}_{\mathrm{pose}}(a,b), \quad \text{where}
\label{eq:total_loss} \\
\mathcal{L}_{\mathrm{match}}(a,b)&=-\sum_{(i,j)\in \mathcal{T}_{ab}}\log \mathbf{P}_{ab,i,j}-\sum_{i\in \mathcal{U}_{ab}}\log \mathbf{P}_{ab,i,K_b+1}-\sum_{j\in \mathcal{V}_{ab}}\log \mathbf{P}_{ab,K_a+1,j}, \nonumber \\
\mathcal{L}_{\mathrm{pose}}(a,b)&=\left\Vert\hat{\mathbf{t}}_{a\rightarrow b}-\mathbf{t}_{a\rightarrow b}\right\Vert_2+\lambda_{\mathrm{rot}}\cos^{-1}\left(\frac{\mathrm{tr}(\mathbf{R}_{a\rightarrow b}^\top\hat{\mathbf{R}}_{a\rightarrow b})-1}{2}\right). \nonumber
\end{align}
$\mathcal{L}_{\mathrm{match}}$ computes the negative log-likelihood of the assignment between an image pair. The labels are computed using the ground truth depth maps, camera poses and intrinsic parameters: $\mathcal{T}_{ab}$ is the set of matching keypoints, $\mathcal{U}_{ab}$ and $\mathcal{V}_{ab}$ identify unmatched keypoints from $I_a$ and $I_b$, respectively.
$\mathcal{L}_{\mathrm{pose}}$ computes a transformation error between a pair of camera poses, where the translational and rotational components are balanced by $\lambda_{\mathrm{rot}}$. $\hat{\mathbf{R}}_{a\rightarrow b}$ and $\hat{\mathbf{t}}_{a\rightarrow b}$ are a rotation matrix and translation vector computed from the pose optimization result (\cref{ssec:pose_optimization}). Rodrigues' formula is used to convert from axis-angle representation to rotation matrix. $\mathbf{R}_{a\rightarrow b}$ and $\mathbf{t}_{a\rightarrow b}$ define the ground truth transformation.
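For illustration, the per-pair pose error in $\mathcal{L}_{\mathrm{pose}}$ can be computed as in the following sketch, where \texttt{R\_hat}, \texttt{t\_hat} are the relative rotation and translation recovered from the optimized poses, \texttt{R\_gt}, \texttt{t\_gt} the ground truth, and the clamp guards the arccosine against round-off:
\begin{verbatim}
import torch

def pose_error(R_hat, t_hat, R_gt, t_gt, lambda_rot=1.0):
    """Translation L2 error plus geodesic rotation error between two poses."""
    t_err = torch.linalg.norm(t_hat - t_gt)
    cos_angle = (torch.trace(R_gt.T @ R_hat) - 1.0) / 2.0
    r_err = torch.acos(torch.clamp(cos_angle, -1.0, 1.0))
    return t_err + lambda_rot * r_err
\end{verbatim}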
We use the Adam optimizer \cite{Kingma2015AdamAM}. Further details on the network architecture and training setup are provided in the supplementary material.
\section{Results}
We compare our method to baselines by evaluating indoor and outdoor pose estimation (\cref{ssec:pose_estimation}) and matching accuracy (\cref{ssec:matching}). \cref{ssec:ablation} shows the effectiveness of the added components in an ablation study. Runtime considerations are part of the supplementary material.
\subsection{Datasets}
\label{ssec:datasets}
\subsubsection{ScanNet \cite{Dai2017ScanNetR3}.}
Following the data generation in previous works \cite{Sarlin2020SuperGlueLF,Sun2021LoFTRDL}, we sample images from the video sequence, such that the overlap with the previous image lies in $[0.4, 0.8]$.
Instead of sampling a pair, we append three more images according to this overlap criterion. The resulting 5-tuples enable multi-view evaluation and provide a more realistic pose estimation scenario. The overlap is computed from ground truth poses, depth maps and intrinsic parameters.
\subsubsection{Matterport3D \cite{Chang2017Matterport3DLF}.}
Compared to ScanNet, the Matterport3D view captures are much sparser, i.e., neighboring images are $60\degree$ apart horizontally and $30\degree$ vertically. Hence, Matterport3D is a challenging dataset for the matching task.
To obtain a sufficient dataset size, we relax the overlap criterion to $[0.25, 0.8]$.
This challenging dataset serves to measure robustness on the pose estimation task.
\subsubsection{MegaDepth \cite{Li2018MegaDepthLS}.}
As in prior work \cite{Sarlin2020SuperGlueLF,Dusmanu2019D2NetAT}, the overlap between images is the portion of co-visible 3D points of the sparse reconstruction, thus the overlap definition is different from the indoor datasets and not comparable. Overlap ranges $[0.1, 0.7]$ and $[0.1, 0.4]$ are used at train and test time, respectively \cite{Sarlin2020SuperGlueLF}.
\subsection{Pose Estimation}
\label{ssec:pose_estimation}
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\centering
\caption{Baseline comparison and ablation study on wide-baseline indoor pose estimation on ScanNet; ``cross-dataset'' indicates that COTR was trained on MegaDepth.}
\label{tab:pose_scannet}
\resizebox{\linewidth}{!}{
\begin{tabular}{l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
& @5\degree & @10\degree & @20\degree & @5cm & @10cm & @20cm \\
\midrule
Mutual nearest neighbor & 14.5 & 25.9 & 40.7 & 3.7 & 8.9 & 17.9 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 63.4 & 78.9 & 88.2 & 28.9 & 49.0 & 67.8 \\
LoFTR \cite{Sun2021LoFTRDL} & 72.2 & 83.9 & 90.4 & 40.2 & 59.7 & 75.4 \\
COTR \cite{Jiang2021COTRCT} cross-dataset & 46.2 & 60.5 & 72.0 & 20.9 & 36.1 & 51.8 \\
Ours w/o multi-view & 68.7 & 81.9 & 89.6 & 35.8 & 56.7 & 73.6 \\
Ours w/o end-to-end & 66.0 & 80.6 & 89.2 & 31.0 & 51.8 & 70.3 \\
Ours & \textbf{72.5} & \textbf{84.6} & \textbf{91.5} & \textbf{41.5} & \textbf{61.8} & \textbf{77.5} \\
\bottomrule
\end{tabular}
}
\vspace{0.4cm}
\caption{Baseline comparison and ablation study on wide-baseline indoor pose estimation on Matterport3D.}
\label{tab:pose_matterport}
\resizebox{\linewidth}{!}{
\begin{tabular}{l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
& @5\degree & @10\degree & @20\degree & @5cm & @10cm & @20cm \\
\midrule
Mutual nearest neighbor & 0.6 & 2.0 & 5.4 & 0.0 & 0.1 & 0.3 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 18.5 & 29.6 & 41.7 & 3.4 & 8.5 & 16.9 \\
Ours w/o multi-view & 27.2 & 38.0 & 49.1 & 6.5 & 14.2 & 24.6 \\
Ours w/o end-to-end & 30.5 & 42.3 & 53.5 & 7.1 & 16.3 & 28.2 \\
Ours & \textbf{42.4} & \textbf{55.4} & \textbf{66.2} & \textbf{12.2} & \textbf{24.9} & \textbf{39.8} \\
\bottomrule
\end{tabular}
}
\vspace{0.4cm}
\caption{Baseline comparison and ablation study on wide-baseline outdoor pose estimation on MegaDepth. For comparison to LoFTR, we retrain and test our model on the LoFTR data split (bottom section of the table).}
\label{tab:pose_megadepth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[very thick,squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\begin{tabular}{p{0.3cm}l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& & \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){3-5}
\cmidrule(lr){6-8}
& & @5\degree & @10\degree & @20\degree & @5\degree & @10\degree & @20\degree \\
\midrule
& Mutual nearest neighbor & 14.3 & 27.8 & 44.2 & 6.6 & 14.6 & 26.5 \\
& SuperGlue \cite{Sarlin2020SuperGlueLF} & 70.3 & 77.8 & 83.7 & 53.3 & 64.1 & 73.6 \\
& COTR \cite{Jiang2021COTRCT} & 61.4 & 69.7 & 77.5 & 45.7 & 56.7 & 66.9 \\
& Ours w/o multi-view & 74.4 & 80.8 & 86.1 & 58.5 & 68.8 & 77.4 \\
& Ours w/o end-to-end & 74.5 & 81.6 & 87.0 & 57.8 & 68.9 & 77.8 \\
& Ours & \textbf{81.1} & \textbf{86.8} & \textbf{91.2} & \textbf{67.7} & \textbf{76.6} & \textbf{83.6} \\
\cmidrule(l){2-8}
& LoFTR \cite{Sun2021LoFTRDL} & 75.2 & 83.0 & 88.6 & 60.5 & 71.3 & 79.7 \\
& Ours & \textbf{89.6} & \textbf{93.6} & \textbf{95.9} & \textbf{74.1} & \textbf{82.3} & \textbf{88.3} \\
\bottomrule
\end{tabular}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode,rotate=90] at (0.02, 0.5) (a) {\scriptsize Split A};
\draw [decorate, decoration = {calligraphic brace}] (0.045,0.23) -- (0.045,0.74);
\node[squarednode,rotate=90] at (0.02, 0.12) (a) {\scriptsize Split B};
\draw [decorate, decoration = {calligraphic brace}] (0.045,0.02) -- (0.045,0.2);
\end{scope}
\end{tikzpicture}
}
\vspace{-0.4cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure*}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0.}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/scannet_results.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.05, 69/72) (a) {\bf 1};
\node[squarednode] at (0.05, 57/72) (b) {\bf 2};
\node[squarednode] at (0.05, 45/72) (c) {\bf 3};
\node[squarednode] at (0.05, 33/72) (d) {\bf 4};
\node[squarednode] at (0.05, 21/72) (e) {\bf 5};
\node[squarednode] at (0.05, 9/72) (f) {\bf 6};
\node[squarednode] at (0.165, -0.025) (g) {Input 5-tuples};
\node[squarednode] at (0.4215, -0.025) (g) {SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.59, -0.025) (g) {LoFTR \cite{Sun2021LoFTRDL}};
\node[squarednode] at (0.75, -0.025) (g) {COTR \cite{Jiang2021COTRCT}};
\node[squarednode] at (0.915, -0.025) (g) {Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on ScanNet 5-tuples (left). With multi-view matching and end-to-end training, our method successfully handles challenging pose estimation scenarios, while baselines show severe camera pose errors.}
\label{fig:scannet_results}
\vspace{-0.4cm}
\end{figure*}
\begin{figure*}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/matterport_results_short.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.05, 19/20) (a) {\bf 1};
\node[squarednode] at (0.05, 15/20) (b) {\bf 2};
\node[squarednode] at (0.05, 11/20) (c) {\bf 3};
\node[squarednode] at (0.05, 7/20) (d) {\bf 4};
\node[squarednode] at (0.05, 3/20) (e) {\bf 5};
\node[squarednode] at (0.16, -0.026) (g) {Input 5-tuples};
\node[squarednode] at (0.41, -0.026) (g) {SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.58, -0.026) (g) {Ours w/o};
\node[squarednode] at (0.58, -0.057) (g) {multi-view};
\node[squarednode] at (0.745, -0.026) (g) {Ours w/o};
\node[squarednode] at (0.745, -0.057) (g) {end-to-end};
\node[squarednode] at (0.915, -0.026) (g) {Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on Matterport3D 5-tuples (left). Our complete method improves camera alignment over the ablated versions and SuperGlue, showing the importance of multi-view matching and end-to-end training.}
\label{fig:matterport_results}
\vspace{-0.4cm}
\end{figure*}
Prior work, in particular SuperGlue~\cite{Sarlin2020SuperGlueLF}, has extensively demonstrated the superiority of the GNN approach over conventional matching. Hence, we focus on comparisons to recent feature matching networks: SuperGlue~\cite{Sarlin2020SuperGlueLF}, LoFTR~\cite{Sun2021LoFTRDL} and COTR~\cite{Jiang2021COTRCT}. We additionally compare to a non-learning-based matcher, i.e., mutual nearest neighbor search on the SuperPoint \cite{DeTone2018SuperPointSI} descriptors. This serves to confirm the effectiveness of SuperGlue and our method, which both use SuperPoint descriptors.
For each method, the matches and confidences are used to optimize for the camera poses according to \cref{ssec:pose_optimization}. As the baselines are designed for matching image pairs, we run them repeatedly on all 10 possible pairs of the 5-tuples and use all resulting matches in the pose optimization.
The pose accuracy is evaluated based on the area under the curve (AUC) in \% at the thresholds $[5\degree, 10\degree, 20\degree]$ for rotation error and $[5\mathrm{cm}, 10\mathrm{cm}, 20\mathrm{cm}]$ for translation error on ScanNet and Matterport3D. As MegaDepth reconstructions are up to an unknown scale factor, the translation error is measured by the angle between translation vectors using thresholds $[5\degree, 10\degree, 20\degree]$ for the AUC. For qualitative comparison we use the computed poses to fuse the 5 depth maps in a truncated signed distance field (TSDF), which is then converted into a mesh using marching cubes \cite{Lorensen1987MarchingCA}.
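For reference, a minimal sketch of one common way to compute the error AUC at given thresholds is shown below (Python/NumPy); the exact integration scheme used for the reported numbers is not specified here, so this should be read as an illustrative approximation rather than the evaluation code of the paper.
\begin{verbatim}
import numpy as np

def error_auc(errors, thresholds=(5.0, 10.0, 20.0)):
    """Area under the recall-vs-error curve, in percent, for each threshold."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = np.arange(1, len(errors) + 1) / len(errors)
    aucs = []
    for th in thresholds:
        mask = errors <= th
        last = recall[mask][-1] if mask.any() else 0.0
        x = np.concatenate(([0.0], errors[mask], [th]))
        y = np.concatenate(([0.0], recall[mask], [last]))
        aucs.append(100.0 * np.trapz(y, x) / th)
    return aucs
\end{verbatim}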
Quantitative results on ScanNet are shown in \cref{tab:pose_scannet}, demonstrating that our method achieves higher accuracy than baselines.
The misalignments in the reconstructions (\cref{fig:scannet_results}) reveal that the baselines struggle in the presence of repetitive patterns such as the washing machines (sample 1), the pictures on the wall (sample 5) or the patterned couches (sample 6). With multi-view reasoning during matching and learned outlier rejection through end-to-end training, our approach is more robust in these situations.
\cref{tab:pose_matterport} evaluates pose estimation on Matterport3D. The pose accuracy on Matterport3D is overall lower than on ScanNet, due to the smaller overlap between images and possibly amplified by the smaller training dataset.
In this scenario, our method outperforms SuperGlue with a larger gap than on ScanNet, which shows that our approach copes better with the more challenging setting in Matterport3D.
We show additional analysis in the ablation study.
Quantitative results on MegaDepth demonstrate the gain from multi-view matching and end-to-end training in the outdoor setting, leading to higher accuracy than baselines (\cref{tab:pose_megadepth}). Qualitative results are provided in the supplement.
\paragraphNoSpace{Implementation Details.}
COTR does not predict confidences; hence, we use equal weights for all matches. For SuperGlue and LoFTR, the predicted confidences are used in the pose optimization, which we empirically found to perform better than thresholding. Further implementation details are available in the supplementary material.
\subsection{Matching}
\label{ssec:matching}
To avoid manually setting confidence thresholds, the matching accuracy is evaluated by computing the weighted mean of the epipolar error $e$ on image pairs:
\begin{equation}
e = \frac{\sum_{m=1}^{M} w_m e_m}{\sum_{m=1}^{M} w_m},
\label{eq:epipolar_error}
\end{equation}
where $M$ is the number of matches between an image pair, $e_m$ the symmetric epipolar error of a match and $w_m$ its confidence.
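A direct transcription of \cref{eq:epipolar_error} in Python is shown below; computing the per-match symmetric epipolar errors themselves from the ground-truth relative pose is omitted here and assumed to be done beforehand.
\begin{verbatim}
import numpy as np

def weighted_epipolar_error(errors, confidences):
    """Confidence-weighted mean of per-match symmetric epipolar errors."""
    e = np.asarray(errors, dtype=float)
    w = np.asarray(confidences, dtype=float)
    return float((w * e).sum() / w.sum())
\end{verbatim}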
The epipolar error and the average number of detected matches on ScanNet are listed in \cref{tab:match_scannet}.
As SuperGlue explicitly proposes a confidence threshold at 0.2 to determine valid matches, we also report this version of the baseline.
While our method achieves the lowest epipolar error, LoFTR produces a much higher number of matches.
This shows that the number of matches is not a reliable indicator for pose accuracy, but rather accurate matches and confidences are beneficial.
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\centering
\caption{Baseline comparison on wide-baseline matching accuracy on ScanNet.}
\label{tab:match_scannet}
\begin{tabular}{lcc}
\toprule
& Number of matches & Epipolar error [m] $\downarrow$ \\
\midrule
Mutual nearest neighbor & 192 & 0.373 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 207 & 0.158 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} w/ threshold 0.2 & 189 & 0.032 \\
LoFTR \cite{Sun2021LoFTRDL} & \textbf{1304} & 0.034 \\
COTR \cite{Jiang2021COTRCT} & 96 & 0.069 \\
Ours & 186 & \textbf{0.020} \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Ablation Study}
\label{ssec:ablation}
The quantitative results on ScanNet, Matterport3D and MegaDepth (\cref{tab:pose_scannet,tab:pose_matterport,tab:pose_megadepth}), show that the full version of our method achieves the best performance. This is consistent with the qualitative results in \cref{fig:matterport_results}.
\paragraphNoSpace{Without Multi-View.}
Omitting multi-view in the GNN causes an average performance drop of 3.9\% on ScanNet and 13.6\% on Matterport3D.
This suggests that the importance of multi-view increases with decreasing overlap between images. Intuitively, the multi-view receptive field supports information flow from other views to bridge gaps where the overlap is small. \cref{fig:matterport_results} shows the notably improved camera alignment through multi-view input.
\paragraphNoSpace{Without End-to-End.}
Omitting end-to-end training drops the average performance by 6.8\% on ScanNet and 10.5\% on Matterport3D.
This shows that end-to-end training enables the learning of reliable outlier down-weighting, which is even more beneficial in the difficult Matterport3D scenarios.
Lack of end-to-end training is visible in the reconstructions (\cref{fig:matterport_results}), e.g., the misaligned pattern on the floor (sample 3) or the failure to reconstruct thin objects (sample 5).
\paragraphNoSpace{Variable Number of Input Views.} In \cref{fig:number_images}, we investigate the impact of the number of images used for matching, both in pairwise (ours w/o multi-view) and joint (our full version) fashion.
The experiment is conducted on sequences of 9 images which are generated on ScanNet as described in \cref{ssec:datasets}.
The results show that pose accuracy improves when matching across a larger span of neighboring images. The curves, however, plateau when a larger window size does not bring any more relevant images into the matching.
Additionally, the results show the benefit of joint matching in a single graph as opposed to matching all possible image pairs individually.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=white, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0,draw opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=0.85\textwidth,trim={0.2cm 0.3cm -3.cm 0.2cm},clip]{figures/number_images.pdf}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\scriptsize
\node[squarednode] at (0.21, -0.06) (a) {Number of images};
\node[squarednode] at (0.66, -0.06) (a) {Number of images};
\node[squarednode] at (0.97, 0.63) (a) {Pairwise matching};
\node[squarednode] at (0.95, 0.47) (a) {Joint matching};
\node[squarednode] at (0.215, 1.07) (b) {Rotation error AUC @10\degree [\%] $\uparrow$};
\node[squarednode] at (0.665, 1.07) (b) {Translation error AUC @10cm [\%] $\uparrow$};
\end{scope}
\end{tikzpicture}
\vspace{-.2cm}
\caption{Pose error AUC on sequences of 9 images on ScanNet using variable number of images in pairwise or joint matching.}
\label{fig:number_images}
\vspace{-.4cm}
\end{figure}
\subsection{Limitations}
Our method builds on SuperGlue~\cite{Sarlin2020SuperGlueLF} and improves pose estimation accuracy and robustness to small image overlap.
Here, one of our contributions is the end-to-end differentiability of the pose optimization that guides the matching network.
While this significantly improves matching quality, we currently only backpropagate gradients to the matching network but do not update keypoint descriptors; i.e., we use the existing SuperPoint~\cite{DeTone2018SuperPointSI} descriptors.
However, we believe that jointly training feature descriptors is a promising avenue to even further improve performance.
\section{Conclusion}
We have presented a method that couples multi-view feature matching and pose optimization into an end-to-end trainable pipeline.
Using a graph neural network, we match features across multiple views in a joint fashion, which increases global awareness of the matching process.
Combined with differentiable pose optimization, gradients inform the matching network, which learns to produce valuable, outlier-free matches for pose estimation.
The experiments show that our method improves both pose and matching accuracy compared to prior work. In particular, we observe increased robustness in challenging settings, such as in the presence of repetitive structure or small image overlap.
Overall, we believe that our end-to-end approach is an important stepping stone towards an end-to-end trained SLAM method.
\section*{Acknowledgements}
This project is funded by a TUM-IAS Rudolf Mößbauer Fellowship, the ERC Starting Grant Scan2CAD (804724), and the German Research Foundation (DFG) Grant Making Machine Learning on Static and Dynamic 3D Data Practical.
We thank Angela Dai for the video voice-over.
\section{Qualitative Results on MegaDepth}
\cref{fig:megadepth_results} shows qualitative results from the ablation study and baseline comparison on MegaDepth dataset. The full version of our method accurately estimates camera poses even across large viewpoint changes (e.g., sample 4), and strong appearance variations (e.g., samples 1 and 2).
\begin{figure*}[b]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0.}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/megadepth_results.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.037, 0.935) (a) {\bf 1};
\node[squarednode] at (0.037, 0.685) (b) {\bf 2};
\node[squarednode] at (0.037, 0.435) (c) {\bf 3};
\node[squarednode] at (0.037, 0.185) (d) {\bf 4};
\node[squarednode] at (0.115, -0.035) (g) {\footnotesize Input 5-tuples};
\node[squarednode] at (0.31, -0.035) (g) {\footnotesize SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.47, -0.035) (g) {\footnotesize COTR \cite{Jiang2021COTRCT}};
\node[squarednode] at (0.615, -0.035) (g) {\footnotesize Ours w/o};
\node[squarednode] at (0.615, -0.075) (g) {\footnotesize multi-view};
\node[squarednode] at (0.77, -0.035) (g) {\footnotesize Ours w/o};
\node[squarednode] at (0.77, -0.075) (g) {\footnotesize end-to-end};
\node[squarednode] at (0.925, -0.035) (g) {\footnotesize Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on MegaDepth 5-tuples (left). With multi-view matching and end-to-end training, our method successfully estimates camera poses in challenging outdoor scenarios, while baselines show misalignment.}
\label{fig:megadepth_results}
\vspace{-0.4cm}
\end{figure*}
\section{Runtime}
Our method takes on average 207ms for matching a 5-tuple, which corresponds to matching 10 image pairs. SuperGlue requires on average 40ms for matching one pair, which shows that inference time correlates well with the number of GNN messages (\cref{tab:gnn_messages}). LoFTR takes on average 89ms for a pair and COTR is much slower with 35s.
Although we did not optimize our implementation for speed, the measurement shows that it is suited for real-time applications.
We believe that the coupling of multi-view matching with pose optimization fits particularly well for keyframe alignment in reconstruction or SLAM. An alternative pose optimization module using relative poses, e.g., from inertial sensors, instead of depth measurements can also be realized.
All runtimes are measured on an Nvidia GeForce RTX 2080.
\section{Architecture Details}
Our multi-view matching network builds on the SuperGlue \cite{Sarlin2020SuperGlueLF} architecture and uses the following parameters:
\paragraphNoSpace{Keypoint Encoder.}
The input visual descriptors from SuperPoint \cite{DeTone2018SuperPointSI} have size $D = 256$. The graph nodes likewise have an embedding size of $D$. Hence, the keypoint encoder $F_{\mathrm{encode}}$ maps a keypoint's image coordinates and confidence score to $D$ dimensions. It is an MLP, composed of five layers with 32, 64, 128, 256 and $D$ channels. Each layer, except the last, uses batch normalization and ReLU activation.
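A minimal PyTorch sketch of such an encoder is given below; the use of $1\times 1$ convolutions over the keypoint dimension and the absence of any input normalization are our assumptions for illustration.
\begin{verbatim}
import torch.nn as nn

def keypoint_encoder(D=256, channels=(32, 64, 128, 256)):
    """MLP mapping (x, y, score) per keypoint to a D-dimensional embedding.

    Operates on tensors of shape (batch, 3, num_keypoints); batch norm and
    ReLU follow every layer except the last one.
    """
    dims = (3,) + tuple(channels) + (D,)
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Conv1d(dims[i], dims[i + 1], kernel_size=1))
        if i < len(dims) - 2:
            layers += [nn.BatchNorm1d(dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers)
\end{verbatim}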
\paragraphNoSpace{Graph Attention Network.}
The GNN has $L=9$ layers. The layers alternate between message exchange along self-edges and message exchange along cross-edges, such that the first and last layers perform updates along self-edges.
The attentional aggregation of incoming messages from other nodes uses multi-head attention with four heads. The resulting messages have size $D$, like the node embeddings.
The MLP $F_{\mathrm{update}}$, which computes the update to the receiving node, operates on the concatenation of the current node embedding with the incoming message. It has two layers with $2D$ and $D$ channels. Batch normalization and ReLU activation are employed between the two layers.
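The following sketch illustrates one such message-passing layer in PyTorch; the residual form of the update and the use of \texttt{nn.MultiheadAttention} are our simplifications and may differ from the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionalUpdate(nn.Module):
    """One GNN layer: multi-head attention aggregates messages from the
    sending nodes, and an MLP on [node embedding ; message] gives the update."""

    def __init__(self, D=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(D, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Conv1d(2 * D, 2 * D, kernel_size=1),
            nn.BatchNorm1d(2 * D), nn.ReLU(),
            nn.Conv1d(2 * D, D, kernel_size=1))

    def forward(self, x, source):
        # x: (B, K, D) receiving nodes; source: (B, S, D) sending nodes
        msg, _ = self.attn(x, source, source)            # attentional aggregation
        h = torch.cat([x, msg], dim=-1).transpose(1, 2)  # (B, 2D, K) for Conv1d
        return x + self.mlp(h).transpose(1, 2)           # node update
\end{verbatim}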
\paragraphNoSpace{Partial Assignment.}
We use 100 iterations of the Sinkhorn algorithm to determine the partial assignment matrices.
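As a reference, a minimal log-domain Sinkhorn normalization for a square score matrix is sketched below; the actual partial assignment module additionally augments the scores with an extra row and column (cf.\ the indices $K_a+1$, $K_b+1$ in the matching loss), which is omitted here for brevity.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(scores, n_iters=100):
    """Alternating row/column normalization in log space.

    scores: (K, K) matching score matrix; returns the log of an
    (approximately) doubly-stochastic assignment matrix.
    """
    log_p = np.asarray(scores, dtype=float).copy()
    for _ in range(n_iters):
        log_p -= logsumexp(log_p, axis=1, keepdims=True)  # rows sum to one
        log_p -= logsumexp(log_p, axis=0, keepdims=True)  # columns sum to one
    return log_p
\end{verbatim}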
\paragraphNoSpace{Pose Optimization.}
The camera poses are optimized by conducting \mbox{$T=10$} Gauss-Newton updates.
\section{Training Details}
\subsubsection{Two-Stage Training.}
Our end-to-end pipeline is trained in two stages. The first stage uses the loss term on the matching result $\mathcal{L}_{\mathrm{match}}$. The second stage additionally employs the pose loss $\mathcal{L}_{\mathrm{pose}}$. Stage 1 is trained until the validation match loss converges, stage 2 until the validation pose loss converges. On ScanNet~\cite{Dai2017ScanNetR3}/ Matterport3D~\cite{Chang2017Matterport3DLF}/ MegaDepth~\cite{Li2018MegaDepthLS} the training takes 25/ 228/ 71 epochs for stage 1 and 4/ 7/ 11 epochs for stage 2. We found that the training on MegaDepth benefits from initializing the network parameters to the network parameters after the first ScanNet training stage. During stage 2 we linearly increase the weight of $\mathcal{L}_{\mathrm{pose}}$ from 0 to 2000 on the indoor datasets, and from 0 to 685 on MegaDepth, while linearly decreasing the weight of $\mathcal{L}_{\mathrm{match}}$ from 1 to 0, over a course of 40000 iterations. The balancing factor of the rotation term in $\mathcal{L}_{\mathrm{pose}}$ is set to $\lambda_{\mathrm{rot}}=2$ on the indoor datasets and $\lambda_{\mathrm{rot}}=6.75$ on MegaDepth.
We use the Adam optimizer with learning rate 0.0001.
The learning rate is exponentially decayed with a factor of 0.999992 starting after 100k iterations.
\subsubsection{Ground Truth Generation.}
The ground truth matches $\mathcal{T}_{ab}$ and sets of unmatched keypoints $\mathcal{U}_{ab}$, $\mathcal{V}_{ab}$ of an image pair are computed by projecting the detected keypoints from each image to the other, resulting in a reprojection error matrix. Keypoint pairs where the reprojection error is both minimal and smaller than 5 pixels in both directions are considered matches. Unmatched keypoints must have a minimum reprojection error greater than 15 pixels on the indoor datasets and greater than 10 pixels on MegaDepth.
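The labeling rule can be sketched as follows (Python/NumPy); the reprojection-error matrices are assumed to be precomputed from the depth maps, poses and intrinsics, and the helper name is hypothetical.
\begin{verbatim}
import numpy as np

def label_matches(err_ab, err_ba, match_px=5.0, unmatched_px=15.0):
    """err_ab[i, j]: reprojection error of keypoint i (image a) to keypoint j
    (image b); err_ba is the opposite direction.  Returns ground-truth matches
    and the sets of unmatched keypoints."""
    err = np.maximum(err_ab, err_ba)          # must be small in both directions
    matches, in_a, in_b = [], set(), set()
    for i in range(err.shape[0]):
        j = int(np.argmin(err[i]))
        if err[i, j] < match_px and int(np.argmin(err[:, j])) == i:  # mutual minimum
            matches.append((i, j)); in_a.add(i); in_b.add(j)
    unmatched_a = [i for i in range(err.shape[0])
                   if i not in in_a and err[i].min() > unmatched_px]
    unmatched_b = [j for j in range(err.shape[1])
                   if j not in in_b and err[:, j].min() > unmatched_px]
    return matches, unmatched_a, unmatched_b
\end{verbatim}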
\subsubsection{Input Data.}
We train on 5-tuples with a batch size of 24 on indoor data and with a batch size of 4 on outdoor data.
The image size is 480$\times$640 on ScanNet, 512$\times$640 on Matterport3D and 640$\times$640 on MegaDepth.
The SuperPoint network is configured to detect keypoints with a non-maximum suppression radius of 4/ 3 on indoor/ outdoor data.
On the indoor datasets we use 400 keypoints per image during training time: first, keypoints above a confidence threshold of 0.001 are sampled, second, if there are fewer than 400, the remainder is filled with random image points and confidence 0 as a data augmentation. On MegaDepth the same procedure is applied to sample 1024 keypoints using confidence threshold 0.005. At test time on indoor/ outdoor data, we use up to 1024/ 2048 keypoints above the mentioned confidence thresholds.
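A sketch of this training-time sampling is shown below; the image size defaults and the random number handling are illustrative choices, not details taken from the paper.
\begin{verbatim}
import numpy as np

def sample_keypoints(kpts, scores, n=400, conf_thresh=0.001,
                     h=480, w=640, rng=None):
    """Keep keypoints above the confidence threshold, then pad with random
    image points of confidence 0 until n keypoints are available."""
    rng = np.random.default_rng() if rng is None else rng
    keep = scores > conf_thresh
    kpts, scores = kpts[keep][:n], scores[keep][:n]
    n_pad = n - len(kpts)
    if n_pad > 0:
        pad = np.stack([rng.uniform(0, w, n_pad), rng.uniform(0, h, n_pad)], axis=1)
        kpts = np.concatenate([kpts, pad])
        scores = np.concatenate([scores, np.zeros(n_pad)])
    return kpts, scores
\end{verbatim}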
\subsubsection{Dataset Split.}
On ScanNet and Matterport3D we use the official dataset split. On Mega\-Depth we use scenes 0016, 0047, 0058, 0064, 0121, 0129, 0133, 0168, 0175, 0178, 0181, 0185, 0186, 0204, 0205, 0212, 0217, 0223, 0229 for validation, 0271, 0285, 0286, 0294, 0303, 0349, 0387, 0412, 0443, 0482, 0768, 1001, 3346, 5014, 5015, 5016, 5018 for testing and the remaining scenes for training.
This way on ScanNet/ Matterport3D/ MegaDepth we obtain 240k/ 20k/ 15k 5-tuples for training, 62k/ 2200/ 1200 for validation and 1500/ 1500/ 1000 for testing.
\section{Baseline Comparison}
In the baseline comparison we use the network weights trained by the authors of SuperGlue~\cite{Sarlin2020SuperGlueLF}, LoFTR~\cite{Sun2021LoFTRDL} and COTR~\cite{Jiang2021COTRCT}.
There are SuperGlue and LoFTR models trained on ScanNet and on MegaDepth, as well as a COTR model trained on MegaDepth. We additionally train a SuperGlue model on Matterport3D~\cite{Chang2017Matterport3DLF}.
\section{0pt}{12pt plus 4pt minus 2pt}{8pt plus 2pt minus 2pt}
\usepackage{amsthm,amsmath}
\RequirePackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{nameref}
\def{}
\def{}
\usepackage{graphicx}
\graphicspath{{./figures/}}
\usepackage{subcaption}
\captionsetup{compatibility=false}
\usepackage{amsfonts}
\usepackage{dsfont}
\usepackage{xcolor}
\usepackage{lipsum}
\usepackage[nameinlink]{cleveref}
\crefname{equation}{Eq.}{Eqs.}
\Crefname{equation}{Equation}{Equations}
\crefname{paragraph}{Paragraph}{Paragraphs}
\usepackage[multidot]{grffile}
\newcommand{\vv} {{\bm v}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\mathrm{e}}{\mathrm{e}}
\renewcommand{\L}{\mathcal{L}}
\renewcommand{\vec}[1]{#1}
\newcommand{\mbeq}{\stackrel{!}{=}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\newcommand{\grad}{{\nabla}}
\let\vaccent=\v
\renewcommand{\v}[1]{\mathbf{#1}}
\renewcommand{\bf}[1]{\textbf{#1}}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\f}[2]{\frac{#1}{#2}}
\newcommand{\avg}[1]{\left\langle #1 \right\rangle}
\let\setunion=\ccup
\newcommand{\ccup}[1]{\left\{#1\right\}}
\let\setunion=\bup
\let\setunion=\rup
\newcommand{\bup}[1]{\left(#1\right)}
\newcommand{\rup}[1]{\left[#1\right]}
\let\oldt=\t
\renewcommand{\t}[1]{\tilde{#1}}
\newcommand{\omega}{\omega}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\sigma}{\sigma}
\newcommand{\times}{\times}
\newcommand{\alpha}{\alpha}
\renewcommand{\b}[1]{\bar{#1}}
\newcommand{\h}[1]{\hat{#1}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\usepackage[normalem]{ulem}
\newcommand{\CDB}[1]{\textcolor{blue}{#1}}
\newcommand{\CDBcom}[1]{[\textcolor{blue}{CDB: #1}]}
\newcommand{\DBT}[1]{\textcolor{red}{#1}}
\newcommand{\DBTcom}[1]{[\textcolor{red}{DT: #1}]}
\newcommand{\nix}[1]{\sout{#1}}
\newcommand{\mbox{{\small \textit{DMK-Solver}}}}{\mbox{{\small \textit{DMK-Solver}}}}
\newcommand{\mbox{{\small DMK}}}{\mbox{{\small DMK}}}
\newcommand{\mbox{{\small \textit{discrete-DMK-Solver}}}}{\mbox{{\small \textit{discrete-DMK-Solver}}}}
\begin{document}
\title{Convergence properties of optimal transport-based temporal hypernetworks}
\author{Diego Baptista}
\affiliation{ Max Planck Institute for Intelligent Systems, Cyber Valley, Tuebingen, 72076, Germany}
\author{Caterina De Bacco}
\affiliation{ Max Planck Institute for Intelligent Systems, Cyber Valley, Tuebingen, 72076, Germany}
\begin{abstract}
We present a method to extract temporal hypergraphs from sequences of 2-dimensional functions obtained as solutions to Optimal Transport problems. We investigate optimality principles exhibited by these solutions from the point of view of hypergraph structures. Discrete properties follow patterns that differ from those characterizing their continuous counterparts. Analyzing these patterns can bring new insights into the studied transportation principles. We also compare these higher-order structures to their network counterparts in terms of standard graph properties. We give evidence that some transportation schemes might benefit from hypernetwork representations. We demonstrate our method on real data by analyzing the properties of hypernetworks extracted from images of real systems.
\end{abstract}
\maketitle
\section*{Introduction}
Optimal Transport (OT) is a principled theory to compare probability distributions \cite{kantorovich1942transfer, villaniot, santambrogio2015optimal, peyre2019computational}. Although this task is usually framed as an optimization problem, recent studies have mapped it within the framework of dynamic partial differential equations \cite{evans1999differential, facca2016towards, facca2019numerics, facca2021branching,tero2007mathematical, tero2010rules}. In this context, solutions to a transportation problem are often found as the convergent state of evolving families of functions.
\\
In some scenarios, the steady states of these evolving families are supported in network-shaped structures \cite{xia2003optimal, xia2014landscape, Xia2015}. Recently, this fact has attracted the attention of network scientists and graph theorists, leading to the development of methods that convert the solutions of OT problems into actual graph structures \cite{baptista2020network,leite2022revealing}. This has broadened the available set of tools to understand and solve these transportation problems. Recent studies have shown that common patterns can be unveiled in both the original mathematical setting and in the converted graph structures \cite{baptista2021temporal}.
Representations of these functions as sets of dyadic relations have proven meaningful in various applications \cite{baptista2020principlednet,facca2021branching}. Nonetheless, traditional dyadic representations may be limited in representing flows of quantities like mass or information as observed in real systems. Various examples of systems where interactions happen among three or more individuals are observed in applications such as social contagion \cite{PhysRevResearch.2.023032,chowdhary2021simplicial}, random walks \cite{PhysRevE.101.022308,schaub2020random} or non-linear consensus \cite{PhysRevE.101.032310}. Understanding the relation between the structure and
dynamics taking place on higher-order structures is an active field of research \cite{taylor2015topological,patania2017topological}. For instance, key elements controlling dynamics are linked to the heterogeneity of hyperedges' sizes present in their higher-order representations \cite{patania2017topological}. These systems are hence best described by hypergraphs, generalizations of networks that encode structured relations among any number of individuals. With this in mind, a natural question to ask is how OT-based structures perform in terms of higher-order representations.
\\
To help bridge this knowledge gap about higher-order properties of structures derived from OT solutions, we elaborate on the results observed in \cite{baptista2021temporal}. Specifically, we propose a method to convert the families of 2-dimensional functions into temporal hypernetworks. We enrich the existing network structures associated with these functions by encoding the observed interactions into hyperedges. We study classic hypergraph properties and compare them to the predefined cost functional linked to the transportation problems. Finally, we extend this method and the analysis to study systems coming from real data. We build hypergraph representations of \textit{P. polycephalum} \cite{westendorf2016quantitative} and analyze their topological features.
\section*{Methods}\label{section:methods}
\subsection*{The Dynamical Monge-Kantorovich Method}
\paragraph*{The Dynamical Monge-Kantorovich set of equations.} We start by reviewing the basic elements of the mechanism chosen to solve the OT problems. As opposed to other standard optimization methods used to solve this problem \cite{cuturi2013sinkhorn}, we use an approach that turns it into a dynamical set of partial differential equations. In this way, initial conditions are updated until a convergent state is reached. The dynamical system of equations, as proposed by Facca et al. \cite{facca2016towards,facca2019numerics,facca2021branching}, is presented as follows. We assume that the OT problem is set on a continuous 2-dimensional space $\Omega \subset \mathbb{R}^{2}$, and that at the beginning no underlying network structure is observed. This gives us the freedom of exploring the whole space to design an optimal network topology as the solution of the transportation problem. The main quantities that need to be specified as input are the \textit{source} and \textit{target} distributions. We refer to them as sources and sinks, where a certain mass (e.g. passengers in a transportation network, water in a water distribution network) is injected and then extracted. We denote these with a ``forcing'' function $f(x)=f^+(x)-f^-(x)\in \mathbb{R}$, describing the flow-generating sources $f^+(x)$ and sinks $f^-(x)$. To ensure mass balance, it is imposed that $\int_\Omega f(x)dx = 0$. We assume that the flow is governed by a transient Fick-Poiseuille flux $q=- \mu \grad u$, where $\mu,u$ and $q$ are called \textit{conductivity} (or \textit{transport density}), \textit{transport potential} and \textit{flux}, respectively. Intuitively, mass is injected through the source, moved across space based on the conductivity, and then extracted through the sink. The way mass moves determines a flux that depends on the pressure exerted on the different points in space; this pressure is described by a potential function.
The set of \textit{Dynamical Monge-Kantorovich} (DMK) equations is given by:
\begin{align}
-\nabla \cdot (\mu(t,x)\nabla u(t,x)) &= f^+(x)-f^-(x) \,, \label{eqn:ddmk1}\\
\frac{\partial \mu(t,x)}{\partial t} &= \rup{\mu(t,x)\nabla u(t,x)}^{\beta} - \mu(t,x) \,, \label{eqn:ddmk2}\\
\mu(0,x) &= \mu_0(x) > 0 \label{eqn:ddmk3} \,,
\end{align}
where $\nabla=\nabla_{x}$. \Cref{eqn:ddmk1} states the spatial balance of the Fick-Poiseuille flux and is complemented by no-flow Neumann boundary conditions. \Cref{eqn:ddmk2} enforces the dynamics of this system, and it is controlled by the so-called \textit{traffic rate} $\beta$. It determines the transportation scheme, and it shapes the topology of the solution: for $\beta<1$ we have congested transportation where traffic is minimized, whereas $\beta>1$ induces branched transportation where traffic is consolidated into a smaller amount of space. The case $\beta=1$ recovers shortest path-like structures. Finally, \Cref{eqn:ddmk3} constitutes the initialization of the system and can be thought of as an initial guess of the solution.
Solutions $(\mu^*, u^*)$ of \crefrange{eqn:ddmk1}{eqn:ddmk3} minimize the transportation cost function $\mathcal{L}(\mu,u)$ \cite{facca2016towards,facca2019numerics,facca2021branching}, defined as:
\begin{align}\label{eqn:L}
& \mathcal{L}(\mu,u) := \mathcal{E}(\mu,u)+ \mathcal{M}(\mu,u) \\
& \mathcal{E}(\mu,u) := \dfrac{1}{2}\int_{\Omega} \mu |\grad u|^2 dx, \ \ \mathcal{M}(\mu,u) := \dfrac{1}{2}\int_{\Omega} \dfrac{\mu^{\frac{(2-\beta)}{\beta}}}{2-\beta} dx \quad.
\end{align}
$\mathcal{L}$ can be thought of as a combination of $\mathcal{E}$, the total energy dissipated during transport (or network operating cost), and $\mathcal{M}$, the cost to build the network infrastructure (or infrastructural cost). It is known that the convexity of this functional changes as a function of $\beta$. Non-convex cases arise in the branched schemes, inducing fractal-like structures \cite{facca2021branching, santambrogio2007optimal}. This is the case considered in this work, and it is the only one where meaningful network structures, and thus hypergraphs, can be extracted \cite{baptista2020network}.
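To make the dynamics concrete, the following is a minimal sketch of a finite-dimensional, graph-based analogue of \crefrange{eqn:ddmk1}{eqn:ddmk3}, in which the domain is replaced by a network with signed node-edge incidence matrix $B$ and edge lengths $\ell$. This simplified explicit-Euler scheme is for illustration only and is not the continuous solver used in this work.
\begin{verbatim}
import numpy as np

def dmk_graph_step(mu, B, lengths, f, beta=1.5, dt=0.1):
    """One explicit-Euler update of a graph analogue of the DMK dynamics.

    mu: (E,) edge conductivities; B: (N, E) signed node-edge incidence matrix;
    lengths: (E,) edge lengths; f: (N,) forcing with f.sum() == 0.
    """
    W = B @ np.diag(mu / lengths) @ B.T         # weighted graph Laplacian
    u = np.zeros(len(f))
    u[1:] = np.linalg.solve(W[1:, 1:], f[1:])   # balance equation, node 0 grounded
    grad_u = (B.T @ u) / lengths                # potential gradient along edges
    mu_new = mu + dt * ((mu * np.abs(grad_u)) ** beta - mu)  # conductivity update
    return mu_new, u
\end{verbatim}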
\subsection*{Hypergraph sequences}
\paragraph*{Hypergraph construction.} We define a hypergraph (also, hypernetwork) as follows \cite{battiston2020networks}: a \textit{hypergraph} is a tuple $H = (V, E),$ where $V = \{v_1, ... ,v_n\}$ is the set of \textit{vertices} and $E = \{ e_1, e_2, ... , e_m\}$ is the set of \textit{hyperedges}, in which $e_i\subset V, \forall i = 1,...,m,$ and $|e_i|>1$. If $|e_i|=2,\forall i$, then $H$ is simply a graph. We call \textit{edges} those hyperedges $e_i$ with $|e_i|=2$ and \textit{triangles} those with $|e_i|=3$. We refer to the \textit{1-skeleton} of $H$ as the \textit{clique expansion} of $H$, i.e., the graph $G=(V,E_{G})$ made of the vertices $V$ of $H$ and of the pairwise edges obtained by connecting every pair of nodes that appear together in some hyperedge of $E$.
Let $\mu$ be the conductivity found as a solution of \crefrange{eqn:ddmk1}{eqn:ddmk3}. As previously mentioned, $\mu$ at convergence regulates where the mass should travel for optimal transportation. Similar to \cite{baptista2021temporal}, we turn this 2-dimensional function into a different data structure, namely, a hypergraph. This is done as follows: consider $G(\mu) = (V_G,E_G)$, the network extracted using the method proposed in \cite{baptista2020network}. We define $H(\mu)$ as the tuple $(V_H,E_H)$ where $V_H = V_G$ and $E_H = E_G \cup T_G$, with $T_G = \{(u,v,w): (u,v),(v,w),(w,u) \in E_G\}$. In words, $H(\mu)$ is the graph $G(\mu)$ together with all of its triangles.
This choice is motivated by the fact that the graph-extraction method proposed in \cite{baptista2020network} uses triangles to discretize the continuous space $\Omega$, which can have a relevant impact on the extracted graph or hypergraph structures. Hence, triangles are the natural sub-structure for hypergraph constructions. The method proposed in this work is valid for higher-order structures beyond triangles. Exploring how these additional structures impact the properties of the resulting hypergraphs is left for future work.
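A minimal sketch of this construction in Python (using NetworkX) is given below; it simply augments the edge set of the extracted graph with all of its triangles, each stored as a set of node labels. The data layout is an assumption for illustration.
\begin{verbatim}
import networkx as nx

def hypergraph_from_graph(G):
    """Return (nodes, hyperedges) where the hyperedges are the edges of G plus
    all triangles of G, each represented as a frozen set of nodes."""
    hyperedges = {frozenset(e) for e in G.edges()}
    for u, v in G.edges():
        for w in set(G[u]) & set(G[v]):   # common neighbours close a triangle
            hyperedges.add(frozenset((u, v, w)))
    return set(G.nodes()), hyperedges
\end{verbatim}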
\Cref{fig:image1} shows an example of one of the studied hypergraphs. The red shapes represent the different triangles of $H(\mu)$. Notice that, although we consider here the case where $|e|\leq 3$ for each hyperedge $e$---for the sake of simplicity---higher-order structures are also well represented by the union of these elements, as shown in the right panels of the figure.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.95\textwidth}
\includegraphics[width=\textwidth]{./fig1_family_of_subgraphs.jpg}
\end{subfigure}
\caption{\textbf{Hypernetwork construction.} Higher order structures are built using edges and triangles as hyperedges. The leftmost panel shows one of the studied graphs together with the triangles (in red) used. The subsequent panels highlight different clusters of triangles that can be seen in the main hypergraph.} \label{fig:image1}
\end{figure}
Since this hypergraph construction method is valid for any 2-dimensional transport density, we can extract a hypergraph not only from the convergent $\mu$ but also at any time step before convergence. This then allows us to represent optimal transport sequences as hypergraphs evolving in time, i.e. temporal hypernetworks.
\paragraph*{Hypergraph sequences.} Formally, let $\mu(x,t)$ be a \textit{transport density} (or \textit{conductivity}) function of both time and space obtained as a solution of the DMK model. We denote it as the sequence $\{\mu_t\}_{t=0}^T$, for some index $T$ (usually taken to be that of the convergent state). Each $\mu_{t}$ is the $t$-th update of our initial guess $\mu_0$, computed by following the rules described in \crefrange{eqn:ddmk1}{eqn:ddmk3}. This determines a sequence of hypernetworks $\{ H(\mu_t)\}_{t=0}^T$ extracted from $\{\mu_t\}_{t=0}^T$ with the extraction method proposed in \cite{baptista2020network}. \Cref{fig:image2} shows three hypergraphs built from one of the studied sequences $\{\mu_t\}$ using this method at different time steps. The corresponding OT problem is that defined by the (filled and empty) circles: mass is injected in the bottom left circle and must be extracted at the highlighted destinations. On the top row, different updates (namely, $t=12, 18, 26$) of the solution are shown. They are defined on a discretization of $[0,1]^2.$ The darkest colors represent their support. Hypergraphs extracted from these functions are displayed in the bottom row. As can be seen, only edges (in gray) and triangles (in red) are considered as part of $H(\mu_t)$. Notice that the larger $t$ is, the sparser the hypergraphs become, which is expected for a uniform initial distribution $\mu_0$ and branched OT ($\beta>1$) \cite{facca2021branching}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=\textwidth]{./fig2_hypernet_sequence.jpg}
\end{subfigure}
\caption{\textbf{Temporal hypergraphs.} Top row: different timestamps of the sequence $\{\mu_t\}$; triangles are a discretization of $[0,1]^2$. Bottom row: hypergraphs extracted for $\mu_t$ at the time steps displayed on the top row; triangles are highlighted in red. In both rows, filled and empty circles correspond to the support of $f^+$ and $f^-$, i.e. sources and sinks, respectively. This sequence is obtained for $\beta = 1.5$.} \label{fig:image2}
\end{figure}
\subsection*{Graph and hypergraph properties}
We compare hypergraph sequences to their corresponding network counterparts (defined as described in the previous paragraph). We analyze the following main network and hypergraph properties for the different elements in the sequences and for different sequences. Denote by $G = (V_G,E_G)$ and $H = (V_H, E_H)$ one of the studied graphs and hypergraphs belonging to some sequence $\{ G(\mu_t)\}_{t=0}^T$ and $\{ H(\mu_t)\}_{t=0}^T$, respectively. We consider the following network properties:
\\
\begin{enumerate}
\item $|E_G|$, total number of edges;
\item Average degree $d(G)$, the mean number of neighbors per node;
\item Average closeness centrality $c(G)$: let $v\in V_G$; the closeness centrality of $v$ is defined as $\sum_{u\in V_G} 1/d(u,v)$, where $d(u,v)$ is the shortest path distance between $u$ and $v$.
\end{enumerate}
Hypernetwork properties can be easily adapted from the previous definitions with the help of generalized adjacency matrices and line graphs \cite{aksoy2020hypernetwork}. Let $H$ be a hypergraph with vertex set $V = \{1,..,n\}$ and edge set $E = \{e_1, ... ,e_m\}$. We define the generalized \textit{node} $s$-\textit{adjacency matrix} $A_s$ of $H$ as the binary matrix of size $n\times n$ such that $A_s[i][j]=1$ if $i$ and $j$ are part of at least $s$ shared hyperedges, and $A_s[i][j]=0$ otherwise. We define the $s$-\textit{line graph} $L_s$ as the graph generated by the adjacency matrix $A_s$. Notice that $A_1$ corresponds to the adjacency matrix of $H$'s skeleton (which is $L_1$). \Cref{fig:image3} shows a family of adjacency matrices together with the line graphs generated using them. We can then define hypergraph properties in the following way:
\\
\begin{enumerate}
\item $|E_H|$, total number of hyperedges;
\item $|T| = |\{e \in E_H: |e|= 3\}|,$ total number of triangles;
\item $S = \sum_{t\in T} a(t),$ \textit{covered area}, where $a(t)$ is the area of the triangle $t;$
\item Average degree $d_s(H)$, the mean number of incident hyperedges of size greater than or equal to $s$ per node;
\item Average closeness centrality $c_s(H)$: let $v\in V_H$, the closeness centrality of $v$ is defined as its closeness centrality in $L_s$.
\end{enumerate}
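The generalized adjacency matrices and two of the quantities listed above can be sketched as follows (Python with NumPy/NetworkX); the data layout (hyperedges as sets of node labels, positions as a mapping from node to 2D coordinates) is an assumption for illustration.
\begin{verbatim}
import numpy as np
import networkx as nx

def s_line_graph(nodes, hyperedges, s):
    """Graph whose nodes are adjacent iff they share at least s hyperedges."""
    idx = {v: i for i, v in enumerate(nodes)}
    counts = np.zeros((len(nodes), len(nodes)), dtype=int)
    for e in hyperedges:
        members = [idx[v] for v in e]
        for i in members:
            for j in members:
                if i != j:
                    counts[i, j] += 1
    return nx.from_numpy_array((counts >= s).astype(int))

def covered_area(hyperedges, pos):
    """Total area S of the triangles (hyperedges of size 3), given 2D positions."""
    total = 0.0
    for e in hyperedges:
        if len(e) == 3:
            (x1, y1), (x2, y2), (x3, y3) = (pos[v] for v in e)
            total += 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return total
\end{verbatim}
The $s$-closeness centrality can then be obtained from the returned line graph, e.g. with \texttt{nx.closeness\_centrality}.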
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.98\textwidth]{./fig3_adj_matrices.jpg}
\end{subfigure}
\caption{\textbf{Adjacency matrices and line graphs.} Top: generalized node $s$-adjacency matrices for different values of $s$ from a given toy graph $G$. Bottom, from left to right: reference network $G$, and $s$-line graphs for $s=2,3,$ and $4$. } \label{fig:image3}
\end{figure}
$S$ can be defined in terms of any other property of a hyperedge, e.g. a function of its size $|e|$. Here we consider the area covered by a hyperedge to keep a geometrical perspective. On the other hand, this area $S$ can be easily generalized to hyperedges with $|e_{i}|>3$ by suitably changing the set $T$ in the summation, e.g. by considering structures containing four nodes. As for the centrality measures, we focus our attention on comparing the case $s>1$ against $s=1$, as the latter traces back to standard graph properties, and we are instead interested in investigating which properties are inherent to hypergraphs. \Cref{fig:image4} shows values of $d_s(H)$ and $c_s(H)$ for convergent hypergraphs $H$ (obtained for different values of $\beta$) together with the degree and closeness centrality of their corresponding graph versions. The considered hypergraphs are displayed in the top row of the figure. As can be seen in the figure, patterns differ considerably for different values of $\beta$. As $s$ controls the minimum number of shared connections for different nodes in the networks, the higher this number, the more restrictive this condition becomes, thus leading to more disconnected line graphs. In the case of the $s$-degree centrality, we observe decreasing values for increasing $s$, with the most central nodes having much higher values than less central ones. For both $s=2,3$ we observe higher values than for the nodes in $G$. This follows from the fact that once hyperedges are added to $G$, the number of incidences per node can only increase. Centrality distributions strongly depend on $\beta$. For small values---more distributed traffic ($\beta=1.1$)---the number of hyperedges per node remains larger than the number of regular edges connected to it. But if traffic is consolidated into less space ($\beta=1.9$), then very few hyperedges are found. This suggests that the information learned from hypergraphs that is distinct from that contained in the graph skeleton is influenced by the chosen traffic regime.
As for the closeness centrality distribution, this resembles that of $G$ for small values of $\beta$, regardless of $s$. For higher $\beta$ it switches towards an almost binary signal. Thus, nodes tend to become more central as $\beta$ increases, suggesting that adding hyperedges to the networks $G$ leads to shorter distances between nodes. The loss of information seen for the highest values of $s$ is due to the fact that the line graphs $L_s$ become disconnected with many small connected components. In these cases, the closeness centrality of a node is either 0 if it is isolated, or proportional to the diameter of the small connected component in which it lies.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{./fig4_side-by-side-hnx-deg-close.jpg}
\caption{\textbf{Graph and Hypergraph properties.} Top row: optimal hypernetworks obtained with different traffic rates. Center and bottom rows: degree distributions and closeness distributions for the hypernetworks shown on the top row, and their 1-skeletons. The node labels in the $x$-axis of the center and bottom rows are sorted by their degree of centrality values.} \label{fig:image4}
\end{figure}
\paragraph{Convergence criteria.} Numerical convergence of the DMK \crefrange{eqn:ddmk1}{eqn:ddmk3} is usually defined by fixing a threshold $\tau$: the updates are stopped once the associated cost changes by no more than $\tau$ with respect to the previous time step. As is usually the case when this threshold is very small ($\tau=10^{-12}$ in our experiments), the cost or the network structure may consolidate to a constant value earlier than algorithmic convergence. Similar to \cite{baptista2021temporal}, to meaningfully establish when hypergraph optimality is reached, we consider as convergence time the first time step at which the transport cost, or a given network property, reaches a value that is smaller than or equal to a certain factor $p$ of the value reached by the same quantity at algorithmic convergence (in the experiments here we use $p=1.05$). We denote by $t_\mathcal{L}$ and $t_P$ the convergence times in terms of the cost function and of a network property, respectively.
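In code, this criterion amounts to the following check on a time series of cost or property values (a minimal sketch; the series is assumed to end at algorithmic convergence):
\begin{verbatim}
import numpy as np

def convergence_time(values, p=1.05):
    """First time step at which the series is within a factor p of its final value."""
    values = np.asarray(values, dtype=float)
    hits = np.nonzero(values <= p * values[-1])[0]
    return int(hits[0]) if len(hits) else len(values) - 1
\end{verbatim}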
\section*{Results}
To test the properties presented in the previous section and understand their connection to transportation optimality, we synthetically generate a set of optimal transport problems, determined by the configuration of sources and sinks. As done in \cite{baptista2021temporal}, we fix a source's location and sample several points in the set $[0,1]^2$ to be used as sinks' locations. Let $S = \{s_0,s_1,...,s_M\}$ be the set of locations in the space $[0,1]^2,$ and fix a positive number $r>0$. We define the distributions $f^+$ and $f^-$ as $f^+(x) \propto \mathds{1}_{R_0}(x)$ and $f^-(x) \propto \sum_{i>0} \mathds{1}_{R_i}(x)$, where $\mathds{1}_{R_i}(x) := 1$ if $x\in R_i$ and $\mathds{1}_{R_i}(x) := 0$ otherwise; $R_i = C(s_i,r)$ is the circle of center $s_i$ and radius $r$. The value of $r$ is chosen based on the discretization used and, as mentioned before, the centers are sampled uniformly at random. The symbol $\propto$ stands for proportionality and is used to ensure that $f^+$ and $f^-$ are both probability distributions. The transportation cost is that of \cref{eqn:L}.
\paragraph{Synthetic OT problems.}\label{sec:synthetic}
The set of transportation problems considered in our experiments consists of 100 source-sink configurations. We place the location of the source at $s_0=(0,0)$ (i.e. the support of $f^+$ at $(0,0)$), and sample 15 points $s_1,s_2,...,s_M$ uniformly at random from a regular grid. By sampling them from the nodes of the grid, we ensure that two different locations are at a safe distance, so they are considered different once the space is discretized. We initialize $\mu_0(x)=1, \forall x$, to be a uniform distribution on $[0,1]^2$. This can be interpreted as a non-informative initial guess for the solution. Starting from $\mu_0,$ we compute a maximum of 300 updates. Depending on the chosen traffic rate $\beta$, more or fewer iterations may be needed. We say that the sequence $\{\mu_t\}_{t=0}^T$ \textit{converges} to a certain function $\mu^*$ at iteration $T$ if either $|\mu_T-\mu_{T-1} |<\tau,$ for a \textit{tolerance} $\tau\in (0,1],$ or $T$ reaches the mentioned maximum. For the experiments reported in this manuscript, the tolerance $\tau$ is set to $10^{-12}$. Given the dependence of the solution on the traffic rate, a wide range of values of $\beta$ is considered. Namely, we study solutions obtained from low-traffic cases ($\beta=1.1$, and thus less traffic penalization) to high-traffic ones ($\beta=1.9$), all of them generating branched transportation schemes. Our 100 problems are linked to a total of 900 hypergraph sequences, each of them containing between 50 and 80 higher-order structures.
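For illustration, the construction of $f^+$ and $f^-$ on a set of discretization points can be sketched as follows; the array layout and the helper name are assumptions, not part of the solver used here.
\begin{verbatim}
import numpy as np

def forcing(points, source, sinks, radius):
    """f = f+ - f-, with f+ and f- uniform on circles around source/sink centers.

    points: (P, 2) coordinates of the discretization points in [0, 1]^2.
    """
    def indicator(centers):
        centers = np.atleast_2d(centers)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        return (d <= radius).any(axis=1).astype(float)

    f_plus = indicator(source)
    f_minus = indicator(sinks)
    f_plus /= f_plus.sum()      # normalize to probability distributions
    f_minus /= f_minus.sum()
    return f_plus - f_minus     # sums to zero: mass balance
\end{verbatim}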
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{./fig5_surface_decay.jpg}
\caption{\textbf{Covered area and Lyapunov cost.} Mean (markers) and standard deviations (shades around the markers) of the covered area $S$ (top plots) and of the Lyapunov cost, energy dissipation $\mathcal{E}$ and structural cost $\mathcal{M}$ (bottom plots), as functions of time $t$. Means and standard deviations are computed on the set described in Paragraph \textit{Synthetic OT problems}. From left to right: $\beta=1.2, 1.5$ and $1.8$. Red and blue lines denote $t_P$ and $t_\mathcal{L}$.} \label{fig:image5}
\end{figure}
\paragraph{Convergence: transport cost vs hypernetwork properties.}
As presented in \cite{baptista2021temporal}, we show a comparison between hypernetwork properties and the cost function minimized by the dynamics, where convergence times are highlighted (\Cref{fig:image5}). We focus on the property $S$, the area of the surface covered by the triangles in $H$. This quantity is influenced by both the amount of triangles (hence of hyperedges) and their distribution in space. Hence, it is a good proxy for how hypergraph properties change both in terms of iteration time and as we tune $\beta$.
We observe that $t_P>t_\mathcal{L}$ in all the cases, i.e. convergence in terms of transportation cost is reached earlier than the convergence of the topological property. Similar behaviors are seen for other values of $\beta\in[1.1,1.9]$ and other network properties (see Appendix). Similar to DMK-based network properties, the covered area's decay is faster for the smallest values of $\beta$. This is expected, given the convexity properties of $\mathcal{L}$ \cite{facca2016towards,facca2019numerics,facca2021branching}. However, the transport cost decays even faster, in a way that the value of $S$ is still far away from convergence in the congested transportation case (small $\beta$).
\\
Notice that $S$ remains stable during the first few iterations, and then starts decreasing at different rates (depending on $\beta$) until reaching the converged value. This suggests that the dynamics tend to develop thick branches---covering a large area---at the beginning of the evolution, and then slowly compress them until reaching the optimal topologies.
\\
These different convergence rates for $S$ and $\mathcal{L}$ may prevent construction of converged hypernetwork topologies: if the solver is stopped at $t_\mathcal{L}< t_{P}$, the resulting hypergraphs $H(\mu_t), \ t=t_\mathcal{L}$ would mistakenly cover a surface larger than that covered by the convergent counterpart ($H(\mu_t),$ for $t\geq t_P$). This scenario is less impactful for larger values of $\beta$, although in these scenarios $H$ is much more similar to a regular graph, because of the small number of higher-order structures. Topological differences between converged hypernetworks can be seen in \Cref{fig:image4}.
\\
Finally, we observe that both $t_\mathcal{L}(\beta)$ and $t_P(\beta)$ are increasing functions of $\beta$. This is expected since the larger the traffic rate is, the longer it takes for the sequences to converge. This particular behavior matches what is shown in \cite{baptista2021temporal} in the case of $t_\mathcal{L}$, but not in the case of $t_P(\beta)$: a non-monotonic behavior was observed in the network case.
\paragraph{Convergence behavior of hypernetwork properties.}
\Cref{fig:image6} shows how the various network properties change depending on the traffic rate. Mean values and standard deviations are computed across times, for a fixed value of $\beta$. As shown, the number of hyperedges, number of triangles, covered area, and average 1-degree exhibit decreasing patterns as functions of $t$. As a consequence, transport optimality can be thought of as reaching minimum states on the mentioned hypernetwork properties. Another clear feature of these functions is related to the actual converged values: the larger $\beta$ is, the smaller these metrics become. This is explained by a cost function increasingly encouraging consolidation of paths onto fewer edges. Notice also that the gaps between these converged values signal a non-linear dependence of the outputs of the dynamics on $\beta$; e.g., a converged hypernetwork obtained for $\beta=1.1$ loses many more hyperedges if the traffic rate is then set to 1.2, whereas this loss would not be as large if $\beta=1.2$ were increased to 1.3. The nature of these gaps is substantially different depending on the property itself. This also shows that certain properties better reveal the distinction between different optimal traffic regimes.
The behavior of the closeness centralities is distinctly different from that of the other properties. While their initial values are the same for all values of $\beta$ (as for the previous properties), no clear trend can be found as time increases. For $s=1$, sequences generated with $\beta=1.1$ tend, on average, to return to their initial values after first increasing and then decreasing. For the other traffic rates, we observe different patterns. Notice that the $s$-closeness centrality of the hypergraph for $s=1$ coincides with the classic closeness centrality of its skeleton. Thus, these rather noisy patterns are not due to the addition of hyperedges. On the other hand, for $s=2$ the average centrality shows increasing curves. This may be due to $L_s$ getting increasingly disconnected, with small connected components. Therefore, the larger $s$ is, the closer the nodes appear to each other (see \Cref{fig:image3}). Moreover, in this case small values of $\beta$ lead to more stable closeness centrality values, showing the impact of $\beta$ on building higher-order structures. While different values of $\beta$ lead to different behaviors of the hypergraph properties (e.g. decreasing degrees and numbers of hyperedges for increasing $\beta$), we remark that the choice of $\beta$ should depend on the application at hand. The analysis performed here showcases how this choice may impact the resulting topologies. This can help practitioners anticipate possible consequences for downstream analyses of the transportation properties of the underlying infrastructure.
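To fix ideas on how such properties can be evaluated on a hypernetwork $H$, the sketch below computes the covered area from node coordinates and the list of triangles, and the average $1$-degree; it is a simplified illustration (overlaps between triangles are simply summed, which is an assumption of the sketch rather than a statement about the exact definition used in the experiments).
\begin{verbatim}
def covered_area(triangles, coords):
    # Sum of the areas of the triangles of H; coords maps node -> (x, y).
    S = 0.0
    for a, b, c in triangles:
        (x1, y1), (x2, y2), (x3, y3) = coords[a], coords[b], coords[c]
        S += 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return S

def average_degree(hyperedges, nodes):
    # Average 1-degree: mean number of hyperedges incident to a node.
    deg = {v: 0 for v in nodes}
    for e in hyperedges:
        for v in e:
            deg[v] += 1
    return sum(deg.values()) / len(nodes)
\end{verbatim}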
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{./fig6_single_fig_dpp.jpg}
\caption{\textbf{Evolution of hypernetwork properties}. Mean (markers) and standard deviations (shades around the markers) of number of hyperedges $|E_H|$ (upper left), number of triangles $|T|$ (upper center), covered area $S(H)$ (upper right), average $2$-degree $d_2(H)$ (lower left), average $1$-closeness centrality $c_1(H)$(lower center) and $2$-closeness centrality $c_2(H)$(lower right), computed for different values of $\beta$ as a function of time.}\label{fig:image6}
\end{figure}
\section*{\textit{P. polycephalum} hypernetworks}
We now analyze hypernetworks extracted from images of real data. We are interested in the evolution of the area covered by triangles in the sequences $ \{ H(\mu_t)\}_{t=0}^T$ extracted from real images of the slime mold \textit{P. polycephalum}. The behavior of this organism is the inspiration for the modeling ideas behind the DMK equations described in \nameref{section:methods}. It has been shown that these slime molds follow an optimization strategy similar to that captured by the DMK dynamics while foraging for food on 2D surfaces \cite{nakagaki2000maze,tero2007mathematical,tero2010rules}.
We extract hypernetworks from images using the idea described in \nameref{section:methods}, but instead of applying \cite{baptista2020network} to obtain the networks, we use the method proposed in \cite{baptista2020principlednet}, which takes images as input. This pipeline uses the color intensities of the different image pixels to build a graph, by connecting adjacent meaningful nodes. We focus on 4 image sequences from the Slime Mold Graph Repository \cite{dirnberger2017introducing}. The sequences describe the evolution of a \textit{P. polycephalum} placed in a rectangular Petri dish. Each image, and thus each hypernetwork, is a snapshot of the movement of this organism over a period of 120 seconds.
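The following sketch conveys the spirit of this extraction step in Python, assuming \texttt{intensity} is a 2D NumPy array of normalized pixel intensities; thresholding on raw intensities, 8-neighbour adjacency and the promotion of every graph triangle to a 3-node hyperedge are simplifying assumptions of the sketch, not a faithful reimplementation of the pipeline of \cite{baptista2020principlednet}.
\begin{verbatim}
import networkx as nx

def hypernetwork_from_image(intensity, threshold=0.5):
    # Pixels above `threshold` become nodes, 8-adjacent nodes are linked,
    # and each triangle of the resulting graph becomes a hyperedge.
    G = nx.Graph()
    rows, cols = intensity.shape
    for i in range(rows):
        for j in range(cols):
            if intensity[i, j] > threshold:
                G.add_node((i, j))
    for (i, j) in list(G.nodes):
        for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
            if (i + di, j + dj) in G:
                G.add_edge((i, j), (i + di, j + dj))
    triangles = [tuple(c) for c in nx.enumerate_all_cliques(G) if len(c) == 3]
    return G, list(G.edges) + triangles
\end{verbatim}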
We study the covered area for each of the 4 sequences, and plot the results for one of them (namely, image set \textit{motion12}; see Appendix) in \Cref{fig:image7}. We highlight 4 time points along the property sequence and display the corresponding images together with the extracted hypernetworks. The lower leftmost plot shows a subsection of one of the studied snapshots. As can be seen there, this subhypernetwork topology exhibits a significant number of hyperedges of dimension 3, mainly around the thickest parts of the slime mold. On the other hand, the lower rightmost plot shows that the evolution of $S$ is overall decreasing in time (similar results are obtained for the other sequences, as shown in the Appendix). This suggests that the thicker body parts tend to get thinner as the \textit{P. polycephalum} evolves into a consolidated state. This pattern resembles what is shown above for the synthetic data, i.e. the covered area tends to decrease as time evolves, similarly to the behavior of the DMK-based hypernetwork sequences. This suggests that the DMK model realistically mirrors a consolidation phase towards optimality of real slime molds \cite{dirnberger2017introducing}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=0.95\textwidth]{./fig7_hnx_surface_phys_insets_for_motion12.jpg}
\end{subfigure}
\caption{\textbf{\textit{P. polycephalum} hypergraphs.} On top: \textit{P. polycephalum} images and hypernetworks extracted from them. Bottom left: a zoomed-in part of the hypergraph shown inside the red rectangle on top. Bottom right: covered area as a function of time. The red shade highlights a tentative consolidation phase towards optimality.} \label{fig:image7}
\end{figure}
\section*{Conclusions}
We proposed a method to build higher-order structures from OT sequences. This method maps every member of the sequence into a hypergraph, outputting a temporal hypernetwork. We analyzed standard hypergraph properties on these temporal families and compared them to their continuous counterparts. We showed that convergence in terms of transportation cost tends to happen faster than convergence of the covered area of the hypernetworks. This suggests that the dynamics used to solve the OT problems concentrates the displaced mass into main branches and, once this task is carried out, slowly reduces the area covered by them. We studied this and other hypergraph properties, and compared them to those of their network versions. In some cases, hypernetworks reveal more information about the topology at convergence. This suggests that hypernetworks could be a better alternative representation of solutions of OT problems for some transportation schemes. The conclusions found in this work may further enhance our comprehension of OT solutions and of the links between this field and that of hypergraphs.
\paragraph{Acknowledgements}
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Diego Baptista.
\bibliographystyle{splncs03}
\section{Introduction}
Because of the broadcast nature of wireless communications, multiple
receiving nodes may overhear the transmission of a transmitting node
in a wireless network. Therefore, a packet can be routed through
different routes to its destination node. This ``multiple-route
diversity'' can be exploited to improve various measure of network
performance including throughput, average delay, and the probability
of transmission failure.
Different routing and scheduling policies have been proposed to
exploit the ``broadcast advantage'' of the wireless medium. Reference
\cite{ExOR} proposes ExOR, a packet-level opportunistic routing
protocol to exploit high loss-rate radio links in wireless
networks. Reference \cite{Rozner_2009} proposes a proactive link state
routing protocol that improves the forward nodes selection in
\cite{ExOR}. Reference \cite{Neely_2008} characterizes the maximum
network capacity region when packet-level opportunistic routing is
exploited, and proposes a routing algorithm named DIVBAR to stabilize
the system. DIVBAR adopts the idea of backpressure algorithms first
proposed in \cite{Tassiulas_1992, Tassiulas_1993}. References
\cite{ExOR, Rozner_2009, Neely_2008} discuss how to select a forwarder
based on the feedback information sent from the receivers that have
successfully received the packets. With a similar maximum weight
scheduling idea, \cite{Yeh_2007} analyzes the optimal information-theoretic
cooperative scheduling in a diamond network. The
system is assumed to be able to adaptively select the optimal
encoding-decoding scheme such that any desirable transmission rate
vector within the corresponding capacity region can be achieved in
every slot. This ``fluid model'' is not quite practical. However,
because it is simple to analyze, it has been widely adopted in the
literature.
The policy space explored in the above papers, in particular those
having to do with wireless~\cite{ExOR, Rozner_2009, Neely_2008,
Yeh_2007}, assumes that packet decoding is carried out within a
single slot. This means that decoding depends only on current control
actions and channel quality. However, there are many physical layer
techniques, such as energy accumulation and mutual information
accumulation (MIA), that do not fit this assumption. Such techniques
do not require decoding to occur within a single slot but rather allow
the receivers to accumulate observations across multiple time slots
before decoding. This allows the network to exploit weak radio links
more fully, thereby increasing system throughput.
In this paper we focus on mutual information accumulation and, while
there is some prior research on resource optimization in networks
using MIA \cite{Draper_2008,Urgaonkar_2010,mia_allerton11}, that work
does not consider questions of network stability and protocol design.
Our objectives in this work are therefore two fold. First, we want to
characterize the maximum network capacity region when mutual
information accumulation is implemented at the physical layer. Second,
we aim to design joint routing and scheduling algorithms that
stabilize any point within the capacity region.
To make the concept of mutual information accumulation more tangible,
it helps to contrast it with energy accumulation. In energy
accumulation multiple transmissions are combined non-coherently by
receiving nodes. This is usually enabled by using space-time or
repetition coding~\cite{Maric_2004, Maric_2005, Chen_2005}. Mutual
information accumulation is more efficient, the difference between the
two being explained in~\cite{Draper_2008}. \textcolor{black}{Consider a pair of senders transmitting information over two independent additive white Gaussian noise channels to the same receiver. Assume transmitter power $P$ and channel noise power $N$. Energy accumulation corresponds to the scenario when each transmitter sends the same codeword. When the decoder uses maximum ratio combining, a throughput of $\frac{1}{2}\log (1+\frac{2P}{N})$ bits per channel use can be achieved. With mutual information accumulation, independent parity symbols are sent, and the system can achieve $2\times\frac{1}{2}\log (1+\frac{P}{N})$ bits per channel use, which outperforms energy accumulation.} It has been noted in
\cite{Draper_2008} that for Gaussian channels at low signal-to-noise
ratio (SNR) energy accumulation is equivalent to mutual information
accumulation because capacity at low SNR is linear in SNR. Mutual
information accumulation can be realized through the use of rateless
(or fountain) codes \cite{Draper_2008, Mitzenmacher_2004, LT_Code}.
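A quick numerical check of this comparison (the SNR values are ours, chosen only for illustration) also makes the low-SNR equivalence concrete:
\begin{verbatim}
from math import log2

for snr in (1.0, 0.01):            # P/N, example values
    ea  = 0.5 * log2(1 + 2 * snr)  # energy accumulation
    mia = log2(1 + snr)            # mutual information accumulation
    print(f"P/N={snr}: EA={ea:.4f}, MIA={mia:.4f} bits/channel use")
# P/N=1.0 : EA=0.7925, MIA=1.0000  (MIA clearly better)
# P/N=0.01: EA=0.0143, MIA=0.0144  (nearly identical at low SNR)
\end{verbatim}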
Similar to \cite{ExOR, Rozner_2009, Neely_2008}, we assume our system
operates at packet-level, and each active transmitter transmits a
packet in each slot. Different from the probabilistic channel model
discussed in these references, we consider the scenario where the
channel state varies slowly, so that link conditions can be assumed to
be static over a long transmission period. Under this scenario, for a
weak link whose link rate is below the transmission rate, a packet
cannot be successfully delivered across the link in any single slot,
even with repetitive attempts in different slots. Thus, the
corresponding receiver can never decode and become a forwarder under
the schemes discussed in \cite{ExOR, Rozner_2009,
Neely_2008}. However, when rateless codes are used by the
transmitters, although the corresponding receiver of the weak link
cannot decode the packet within a single slot, it can store that
corrupted packet and accumulate more information in later
slots. Eventually, a packet can be successfully delivered across a
weak link after a number of transmissions by accumulating information
across the slots. Thus, weak links can still be utilized in slowly
changing network environments. Compared with opportunistic routing
schemes, mutual information accumulation provides more reliable
throughput over weak links, and doesn't require the feedback
information to determine the forwarder in each slot. Thus it reduces
the required overhead.
Compared to the networks in \cite{Yeh_2007} and references \cite{cohen, Neely05}
where varying rate can be achieved within a slot through adaptive
encoding-decoding schemes, the mutual information accumulation scheme
doesn't require the encoding-decoding to be changed in every
slot. However, on average, it can achieve the same rate by repetitive
transmission and information bits accumulation. Therefore, the system
is more practical to implement without sacrificing throughput. On the
other hand, the information bit accumulation process also brings new
challenges to the design and analysis of routing and scheduling
algorithms.
The contribution of our work is three-fold.
\begin{itemize}
\item We characterize the maximum stability region of a wireless
network enabled with mutual information accumulation under certain
natural assumptions. Compared with networks where information
accumulation is not allowed, the system is able to exploit weak
links and an expanded stability region is thereby achieved.
\item We propose two dynamic routing and scheduling policies to
achieve the maximum stability region. Both policies require simple
coordination and limited overhead.
\item The techniques we develop to cope with the temporally
coupled mutual information accumulation process are novel and
may have use in queuing problems more widely.
\end{itemize}
The rest of the paper is organized as follows. In
Section~\ref{sec:system}, we describe the system model. In
Section~\ref{sec:capacity}, we present the maximum stability region of
the network under our system model. In Section~\ref{sec:Tslot} and
Section~\ref{sec:virtual}, we design two different routing protocols
to achieve the maximum stability region and analyze the
performance. We present simulation evaluation in
Section~\ref{sec:simu}. Finally, we conclude in
Section~\ref{sec:conclusions}. Proofs are deferred to the appendices.
\section{System Model}\label{sec:system}
\subsection{The Basic System Setting}\label{sec:system_para}
We consider a time-slotted system with slots normalized to integral units $t\in\{0, 1, 2, \ldots\}$. There are $N$ network
nodes, and links are labeled according to {\it ordered} node pairs $(i,j)$ for $i,j \in \{1, \ldots , N\}$. We assume that there are $K$ different commodities in the network, $K\leq N$, where each commodity is labeled according to its destination node, e.g., all packets from commodity $c$ should be routed to destination node $c$. Data arrives randomly in packetized units. Let $A^c_i(t)$ denote the number of packets from commodity $c$ that exogenously arrive at network node $i$ during slot $t$.
Arrivals are assumed to be independent and identically distributed (i.i.d.) over timeslots, and we let $\lambda^c_i=E\{A^c_i(t)\}$ represent the arrival rate of commodity $c$ into source node $i$ in units of packets/slot. We assume $A^c_i(t)\leq A_{max}$ for all $c,i,t$.
We assume the channel fading states between any pair of nodes are stationary, and the active transmitters transmit with the same power level. Thus, a fixed reliable communication rate over an active link can be achieved in each slot. We expect that the algorithms developed in this paper can be modified to accommodate more general fading processes.
\textcolor{black}{We use $r_{ij}$ to denote the link rate between nodes $i,j$. We assume the link rate is reciprocal, i.e., $r_{ij}=r_{ji}$.}
We assume that each node can transmit at most one packet to any given node during any single time slot, i.e., $r_{ij}\leq 1$ packet/slot. We term the links with rate one packet per slot {\it strong} links; the rest of the links we term {\it weak} links. For weak links, we assume their rates are lower bounded by some constant value $r_{min}$, $0<r_{min}<1$. Define the set of neighbors of node $i$, $\mathcal{N}(i)$, as the set of nodes $j$ with $r_{ij}>0$. We assume the size of $\mathcal{N}(i)$, denoted as $|\mathcal{N}(i)|$, is upper bounded by a positive integer $d$. We define $\mu_{max}$ as the maximum number of packets a node can successfully decode in any slot, which is equivalent to the maximum number of nodes that can transmit to a single node simultaneously. Therefore, we have $\mu_{max}\leq d$.
We assume the system operates under constraints designed to reduce interference among transmitters. Under the interference model, in any timeslot, only a subset of nodes is allowed to transmit simultaneously. We assume that transmissions from nodes active at the same time are interference free. We denote the set of feasible {\it activation patterns} as $\mathcal{S}$, where an activation pattern $s\in \mathcal{S}$ represents a set of active nodes. With a slight abuse of notation, we interchangeably use $s$ to represent an activation pattern and the set of active nodes of the pattern. \textcolor{black}{For any $s\in \mathcal{S}$, we assume all of its subsets also belong to $\mathcal{S}$. This allows us to assert that all of the nodes in an activation pattern always transmit when that activation pattern is selected.} This interference model can accommodate networks with orthogonal channels, restricted TDMA, etc.
\subsection{Mutual Information Accumulation at Physical Layer}\label{sec:system2}
We assume mutual information accumulation is adopted at physical layer. Specifically, we assume that if a {\it weak} link with $r_{ij}<1$ is activated, rateless codes are deployed at its corresponding transmitter. When a packet transmitted over a weak link cannot be successfully decoded during one slot, instead of discarding the observations of the undecodable message, the receiver stores that partial information, and accumulates more information when the same packet is retransmitted. A packet can be successfully decoded when the accumulated mutual information exceeds the packet size. The assumption $r_{ij}>r_{min}$ implies that for any active link, it takes at most some finite number of time slots, $\lceil 1/r_{min}\rceil$, to decode a packet.
In order to simplify the analysis, we assume that $1/r_{ij}$ is an integer. The reason for this choice will become clear when our algorithm is proposed in Section~\ref{sec:capacity}. If $r_{ij}$ does not satisfy this assumption, we round it down to $1/\lceil 1/r_{ij}\rceil$ to enforce it.
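For concreteness, a small helper shows the effect of this rounding (the example rate is hypothetical):
\begin{verbatim}
from math import ceil

def quantize_rate(r):
    # Round a weak-link rate down to the nearest 1/integer.
    return 1.0 / ceil(1.0 / r)

# quantize_rate(0.4) == 1/3: a packet then needs exactly 3 slots of
# continuous transmission over that link before it can be decoded.
\end{verbatim}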
The challenge in extending the backpressure routing framework to systems that use mutual information accumulation is illustrated through the following example. Suppose that a node $i$ has already accumulated half of the bits in packet 1 and half of the bits in packet 2. Since neither of the packets can be decoded, none of these bits can be transferred to the next hop, even though the total number of bits at node $i$ is equal to that of a full packet. This means that we need to handle the undecoded bits in a different manner. We also observe that, if node $i$ never accumulates enough information for packets 1 or 2, then these packets can never be decoded and will be stuck at node $i$. If we assume that the system can smartly drop the undecoded partial packet whenever a fully decoded copy is successfully delivered to its destination, coordination among the nodes is required to perform this action. The overhead of the coordination might offset the benefits brought by mutual information accumulation. Moreover, unlike the opportunistic model studied in \cite{ExOR, Rozner_2009, Neely_2008}, given that a weak link is active, whether or not a successful transmission will occur in that slot is not an i.i.d.\ random variable but rather a deterministic function of the number of already accumulated bits of that packet at the receiving node. This difference makes the analysis even more complicated.
Therefore, in the following sections, we define two different types of queues. One is the traditional {\em full packet} queue which stores fully decoded packets at each node. The other type of queue is a {\em partial packet} queue. It represents the fraction of accumulated information bits from a particular packet. The specific definition of the queues and their evolution depends on the scheduling policy, and will be clarified in Section~\ref{sec:Tslot}, and Section~\ref{sec:virtual}, respectively.
We assume each node has infinite buffer space to store fully decoded packets and partial packets. Overflow therefore does not occur.
\subsection{Reduced Policy Space}
The policy space of the network we consider can be much larger than that of networks without mutual information accumulation. First, because of the broadcast nature of wireless communication, multiple receivers can accumulate information regarding the same packet in any given slot, and a receiver can collect information on a single packet from multiple nodes. Keeping track of the locations of different copies of the same packet requires a lot of overhead. Second, allowing multiple receivers to store copies of the same packet introduces more traffic into the network. Therefore, stabilizing the network requires a sophisticated centralized control strategy. Finally, accumulating information bits of a packet from multiple nodes makes the decoding options of a packet increase exponentially; thus characterizing the network maximum stability region becomes intractable. Therefore, we make the following assumptions:
\begin{itemize}
{\it \item[A1.] For each packet in the system, at any given time, only one node is allowed to keep a fully decoded copy of it.
\item[A2.] In addition to the node with the fully decoded copy, one other node is chosen as the potential forwarder for the packet. Only the potential forwarder is allowed to accumulate information about the packet.}
\end{itemize}
Restricting ourselves to this policy space may sacrifice some part of the stability region that could be achieved under a more general policy space. However, as we will see, these assumptions enable us to explicitly characterize the maximum stability region with the given policy space. Compared with systems that operate without the capability to accumulate mutual information, our system is able to exploit weak links even when one-slot decoding is not possible. The stability region will thereby be greatly enlarged.
If a node directly contributes to the successful decoding of a packet at another node, this node is denoted as a {\it parent} for that packet. Assumptions A1-A2 guarantee that for any packet in the network, there is only one parent at any given time.
We also note that, if we relax assumptions A1-A2, and make the following assumption instead
\begin{itemize}
{\it \item[A3.] Every packet existing in the network has a single parent at any given time. In other words, the accumulated information required to decode a packet at a node is received from a single transmitting node.}
\end{itemize}
then, the maximum stability region under A1-A2 and that under A3 are equivalent. Assumption A3 means that we don't allow a node to accumulate information from different nodes (different copies of the same packet) to decode that packet. However, multiple copies of a packet may still exist in the network.
\section{Network Capacity with Mutual Information Accumulation}\label{sec:capacity}
In this section we characterize the optimal throughput region under all possible routing and scheduling algorithms that conform to the network structure specified in Section~\ref{sec:system_para} and Assumption A3.
At the beginning of each slot, a certain subset $s$ of nodes is selected to transmit in the slot. Any node $i\in s$ can transmit any packet that it has received {\it and} successfully decoded in the past. Note that because packets must be decoded prior to transmission, partial packets cannot be delivered. For node $j\in\mathcal{N}(i)$, if it has already decoded a packet being transmitted, it can simply ignore the transmission; otherwise, it listens to the transmission and aims to decode it at the end of that slot. Receiving nodes connected with strong links can decode the packet in one slot. Nodes connected with weak links cannot decode the packet in one slot. Rather, they need to listen to the same node transmitting the same packet over a number of slots.
A packet is said to be successfully delivered to its destination node when the {\it first copy} of the packet is decoded by its destination node. These assumptions allow for any possible routing and scheduling policy satisfying Assumption A3. \textcolor{black}{We note that packets arriving at a node from a single commodity may be delivered to their destination node in a permuted order, since each packet may take a different route.}
Let $\boldsymbol{\lambda}$ represent the input rate vector to the system, where $\lambda^c_i$ is the input rate of commodity $c$ entering node $i$. Define $Y^c_i(t)$ as the number of packets from commodity $c$ that originated at node $i$ and have been successfully delivered to destination node $c$ over $[0,t)$. According to the definition of network stability \cite{Neely_now}, a policy is defined as {\it stable} if
\begin{align}
\lim_{t\rightarrow \infty} \frac{Y^c_i(t)}{t}=\lambda^c_i,\qquad \forall c.
\end{align}
Stronger definitions of stability can be found in \cite{Tassiulas_1992, Tassiulas_1993, stable94}.
The maximum stability region or {\it network layer capacity region} $\Lambda$ of a wireless network with mutual information accumulation is defined as the closure of all $\boldsymbol{\lambda}$ that can be stabilized by the network according to some policy with the structure described above.
\begin{Theorem}\label{thm1}
For a network with given link rates $\{r_{ij}\}$ and a feasible activation pattern set $\mathcal{S}$,
the network capacity region $\Lambda$ under assumption A3 consists of all rate vectors $\{\lambda^c_n\}$ for which there exists flow variables $\{\mu^c_{ij}\}$ together with a probability $\pi_s$ for each possible activation pattern $s\in \mathcal{S}$ such that
\begin{align}
\mu^c_{ij}&\geq 0,\quad \mu^c_{ci}=0,\quad \mu^c_{ii}=0, \quad \forall i,j,c\label{eqn:cap1}\\
\sum_{l}\mu^c_{li}+\lambda^c_i&\leq \sum_{j}\mu^c_{ij},\quad \forall i\neq c, \forall c\label{eqn:cap2}\\
\sum_c\mu^c_{ij}&\leq \sum_c\sum_{s\in \mathcal{S}}\pi_s\theta^c_{ij}(s)r_{ij},\quad \forall i,j\label{eqn:cap3}\\
\sum_{s\in \mathcal{S}}\pi_s&\leq1,
\end{align}
where the probabilities $\theta^c_{ij}(s)$ satisfy
\begin{align}
\theta^c_{ij}(s)&=0 \mbox{ \rm{if} }i\notin s, \\
\quad \sum_c\sum_{j}\theta^c_{ij}(s)&= 1,\forall i.\label{eqn:cap5}
\end{align}
\end{Theorem}
The necessity of this theorem can be proved following the same approach as in \cite{Neely_2008}; the proof is provided in Appendix~\ref{apx:thm1}. The sufficiency part is proved in Section~\ref{sec:Tslot} by constructing a stabilizing policy for any rate vector $\boldsymbol{\lambda}$ that is in the interior of the capacity region.
The capacity region is essentially similar to the capacity theorem of \cite{Neely_now,Neely_2008}. The relations in (\ref{eqn:cap1}) represent non-negativity and flow efficiency constraints. Those in (\ref{eqn:cap2}) represent flow conservation constraints. Those in (\ref{eqn:cap3}) represent link constraints for each link $(i,j)$. The variable $\theta^c_{ij}(s)$ can be interpreted as the probability that a transmission over link $(i,j)$ eventually contributes to the delivery of a packet of commodity $c$ at node $c$, given that the system operates in pattern $s$. In other words, link $(i,j)$ is on the routing path for this packet from its origin node to its destination node.
This theorem implies that the network stability region under Assumption A3 can be defined in terms of an optimization over the class of all stationary policies that use only single-copy routing. Thus, for any rate vector $\boldsymbol{\lambda}\in \Lambda$, there exists a stationary algorithm that can support that input rate vector by single-copy routing all data to the destination.
The region $\Lambda$ defined above has the same form as when a ``fluid model'' is considered. In other words, the extra decoding constraint imposed by mutual information accumulation does not sacrifice any part of the stability region. We can simply ignore the packetization effect when we search for the maximum stability region.
The quantity $\sum_c\mu^c_{ij}$ defining the stability region represents the {\it effective} flow rate over link $(i,j)$. An ``effective'' transmission means that the bits transferred by that transmission eventually get delivered to the destination. If the transferred information becomes a redundant copy or a discarded partial packet, the transmission is not effective and does not contribute to the effective flow rate. We can always make the inequalities (\ref{eqn:cap2})-(\ref{eqn:cap3}) tight by controlling the value of $\pi_s$.
Solving for the parameters $\{\pi_s\}$ and $\{\theta^c_{ij}(s)\}$ required to satisfy the constraints requires complete knowledge of the set of arrival rates $\{\lambda^c_i\}$, which cannot be accurately measured or estimated in real networks. On the other hand, even when $\boldsymbol{\lambda}$ is given, solving the equations can still be quite difficult. In the following, we overcome this difficulty with online algorithms which stabilize any $\boldsymbol{\lambda}$ within $\Lambda$, but with a possibly increased average delay as $\boldsymbol{\lambda}$ approaches the boundary of $\Lambda$.
\section{$T$-slot Dynamic Control Algorithm}\label{sec:Tslot}
In the following, we construct a policy that fits Assumptions A1-A2. Although these assumptions are more restrictive than Assumption A3, we will see that they do not compromise stability performance in the sense that they do not reduce the stability region.
To construct a dynamic policy that stabilizes the system anywhere in the interior of $\Lambda$, as specified in Theorem~\ref{thm1}, we first define our decision and queue variables.
We assume that each packet entering the system is labeled with a unique index $k$. At time $t$, $0\leq k\leq \sum_{c,i}\sum_{\tau=1}^t A^c_i(\tau)$. If packet $k$ belongs to commodity $c$, we denote it as $k\in \mathcal{T}_c$. Let $\left\{\beta_{ij}^{(k)}(t)\right\}$ represent the binary control action of the system at time $t$. Specifically, $\beta_{ij}^{(k)}(t)=1$ means that at time $t$, node $i$ transmits packet $k$ to node $j$. We restrict the possible actions so that in each slot each node transmits at most one packet, i.e.,
\begin{align}\label{beta_con}
&\sum_{j,k}\beta_{ij}^{(k)}(t)\leq 1,\quad \forall i,
\end{align}
and at most one node is chosen as the forwarder for packet $k$, i.e.,
\begin{align}\label{beta_con2}
\sum_{i,j}\beta_{ij}^{(k)}(t)\leq 1, \quad\forall k.
\end{align}
Because of the mutual information accumulation property, even if packet $k$ is transmitted over link $(i,j)$ in slot $t$, it doesn't necessarily mean that packet $k$ can be decoded at node $j$ at the end of slot $t$. In particular, under the fixed link rate assumption, successful transmission cannot occur over weak links in a single timeslot.
We let $f_{ij}^{(k)}(t)$ be an indicator function where $f_{ij}^{(k)}(t)=1$ indicates that packet $k$ has been successfully delivered from node $i$ to node $j$ in slot $t$. The indicator function is a function of the current control action and of the partial queue status at the beginning of slot $t$. Clearly, $f_{ij}^{(k)}(t)=1$ implies that $\beta_{ij}^{(k)}(t)=1$.
As discussed in Section~\ref{sec:system2}, we define two types of queues at each node.
One stores the fully received and successfully decoded packets, while the other stores the partially received packets.
We use $Q^c_i(t)$ to denote the length of node $i$'s queue of fully received packets from commodity $c$ at time $t$, and use $P_i^{(k)}(t)$ to represent the total fraction of packet $k$ accumulated by node $i$ up to time $t$. The sum-length of partial queues of commodity $c$ at node $i$ storing partial packets can be represented as $P^c_i(t)=\sum_{k\in \mathcal{T}_c}P_i^{(k)}(t)$. The fraction of packet $k$, $P_i^{(k)}(t)$, can be cleared either when packet $k$ is successfully decoded and enters the full packet queue $Q^c_i$, or when the system controller asks node $i$ to drop packet $k$. \textcolor{black}{With a little abuse of notation, we use $Q_i^c$ and $P_i^c$ to denote the full packet queue and partial packet queue from commodity $c$ at node $i$, respectively.}
Then, according to our Assumptions A1-A2, the queue lengths evolve according to
\begin{align}
Q^c_i(t+1)&=\Big( Q^c_i(t)-\sum_{j,k\in \mathcal{T}_c}\beta_{ij}^{(k)}(t)f^{(k)}_{ij}(t)\Big)^+\nonumber\\
&\quad+\sum_{l,k\in \mathcal{T}_c}\beta_{li}^{(k)}(t)f^{(k)}_{li}(t)+A^c_i(t)\\
P_i^{(k)}(t+1)&=P_i^{(k)}(t)+\sum_{l}\beta_{li}^{(k)}(t)r^{(k)}_{li}(t)\hspace{-0.02in}-\hspace{-0.03in}\sum_{l}\beta_{li}^{(k)}(t)f^{(k)}_{li}(t)\nonumber\\
&\quad-\sum_{l,(m\neq i)}P_i^{(k)}(t)\beta_{lm}^{(k)}(t)\label{eqn:p1}
\end{align}
where
\begin{align}
r^{(k)}_{li}(t)&=\left\{\begin{array}{ll}r_{li}& P_i^{(k)}(t)+r_{li}\leq 1\\
1-P_i^{(k)}(t)& P_i^{(k)}(t)+r_{li}> 1
\end{array}\right.
\end{align}
and $(x)^+=\max\{x,0\}$.
Under the assumption that $1/r_{ij}$ is an integer for every $(i,j)$, we have $r^{(k)}_{li}(t)=r_{li}$.
Since we allow only one forwarder for any given packet at any time, if $\beta_{lm}^{(k)}(t)=1$, any node other than node $m$ that has accumulated partial information of packet $k$ must drop that partial packet. This effect results in the last negative term in (\ref{eqn:p1}). On the other hand, the first negative term in (\ref{eqn:p1}) corresponds to successful decoding of packet $k$, after which it is removed and enters $Q^c_i$ for some $c$.
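A condensed rendering of these updates for a single packet and a single receiving node may help parse the notation; the bookkeeping of which node currently acts as parent, and the capping of $r^{(k)}_{li}$, are folded into two boolean flags, which is a simplification of the equations above.
\begin{verbatim}
def update_full_queue(Q, departures, decoded_arrivals, exogenous):
    # Q^c_i(t+1): served packets leave first, then newly decoded and
    # exogenous packets are added.
    return max(Q - departures, 0) + decoded_arrivals + exogenous

def update_partial_info(P_k, r_li, decoded, dropped):
    # P_i^{(k)}(t+1): accumulate r_li from the active parent; the partial
    # information is cleared when the packet is decoded or when another
    # node is chosen as its forwarder.
    if decoded or dropped:
        return 0.0
    return min(P_k + r_li, 1.0)
\end{verbatim}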
\subsection{The $T$-slot Algorithm}
Our algorithm works across epochs, each consisting of $T$ consecutive timeslots. Action decisions are made at the start of each epoch and are held constant through the epoch. We analyze the effect of the choice of $T$ on the stability region and the average backlog. Any rate vector $\boldsymbol{\lambda}$ inside the capacity region $\Lambda$ can be stabilized by a sufficiently large choice of $T$.
\begin{itemize}
\item[1)] \textbf{Check single-link backpressure.} At the beginning of each epoch, i.e., when $t=0,T,2T,\ldots$, node $i$ checks its neighbors and computes the differential backlog weights
$$W_{ij}(t)=\max_c[Q^c_i(t)-Q^c_j(t)]^+r_{ij},\quad j\in\mathcal{N}(i) .$$
Denote the maximizing commodity as $$c^*_{ij}=\arg \max_c [Q^c_i(t)-Q^c_j(t)]^+.$$
\item[2)] \textbf{Select forwarder.} Choose the potential forwarder for the packets in $Q_i$ as the node $j$ with the maximum weight $W_{ij}(t)$. Denote this node as $j^*_i=\arg \max_j W_{ij}(t)$.
\item[3)] \textbf{Choose activation pattern.} Define the activation pattern $s^*$ as the pattern $s\in S$ that maximizes
$$\sum_{i\in s}W_{ij^*_i}.$$
Any node $i\in s^*$ with $W_{ij^*_i}>0$ transmits packets of commodity $c^*_{ij^*_i}$ to $j^*_i$. The pairing of transmitter $i\in s^*$ and receiver $j^*_i$ and the commodity being transmitted $c^*_{ij^*_i}$ is continued for $T$ consecutive timeslots.
\item [4)] \textbf{Clear partial queues.} Release all the accumulated bits in the partial queue $P^c_i$, $\forall i,c$, at the end of each epoch.
\end{itemize}
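A compact sketch of one epoch's control decision (steps 1)-3)) is given below; \texttt{Q[i][c]}, \texttt{r[i][j]}, \texttt{neighbors[i]} and \texttt{patterns} are assumed data structures holding the full-packet backlogs, link rates, neighbor sets and the feasible activation patterns $\mathcal{S}$.
\begin{verbatim}
def tslot_decision(Q, r, neighbors, patterns):
    W, best_j, best_c = {}, {}, {}
    for i in Q:
        W[i] = 0.0
        for j in neighbors[i]:
            # Step 1: differential backlog, maximized over commodities.
            for c in Q[i]:
                w = max(Q[i][c] - Q[j][c], 0) * r[i][j]
                if w > W[i]:
                    # Step 2: forwarder j*_i attains the largest weight.
                    W[i], best_j[i], best_c[i] = w, j, c
    # Step 3: activation pattern maximizing the total weight of its nodes.
    s_star = max(patterns, key=lambda s: sum(W[i] for i in s))
    return s_star, best_j, best_c
\end{verbatim}
The returned decision is then held fixed for the $T$ slots of the epoch, after which the partial queues are cleared (step 4)).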
The $T$-slot algorithm satisfies constraints (\ref{beta_con})-(\ref{beta_con2}). The ``potential forwarder'' in Step 2) refers to the forwarder of node $i$ if node $i$ is active.
We clear all of the partial queues in the system every $T$ slots (in Step 4)) for the simplicity of analysis. It is likely not the best approach to handle the partial queues. Intuitively, the performance should be improved if we only release the partial queues when a selected forwarder for a packet is not the same as the previous one (thus satisfying A3).
\begin{Theorem}\label{thm:Tslot}
The algorithm stabilizes any rate vector satisfying $\boldsymbol{\lambda}+\boldsymbol{\epsilon}(T) \in \Lambda$, where $\boldsymbol{\epsilon}(T)$ is a vector with minimum entry $\epsilon> 1/T$. The average expected queue backlog $\lim_{t\rightarrow \infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{c,i}\mathds{E}\{Q^c_i(\tau)\}$ in the system is upper bounded by $$\frac{KNT^2(\mu_{max}\hspace{-0.02in}+\hspace{-0.02in}A_{max})^2+NT^2}{2(\epsilon T-1)}+\frac{KN(T\hspace{-0.02in}-\hspace{-0.02in}1)(\mu_{max}\hspace{-0.02in}+\hspace{-0.02in}A_{max})}{2}.$$
\end{Theorem}
The proof of Theorem~\ref{thm:Tslot} is provided in Appendix~\ref{apx:thm_Tslot}. The proof is based on the fact that the $T$-slot algorithm minimizes the $T$-slot Lyapunov drift, which is shown to be negative when $\sum_{c,i}Q^c_i(t)$ is sufficiently large.
The constructed algorithm proves the sufficiency of Theorem~\ref{thm1}. The intuition behind the algorithm is that, by using the weak links consecutively over a long window, the potential loss of effective rate caused by dropped partial packets is kept small; therefore, the effective rates over the weak links can get close to the link rates. The algorithm approaches the boundary of the capacity region as $O(1/T)$.
When $T$ is large enough, the average expected backlog in the system scales as $O(T)$. For this reason, in the next section, we introduce a virtual queue based algorithm which updates action every single slot. We expect that average backlog under the virtual queue based algorithm will be improved since its upper bound does not scale as $O(T)$.
Given $T> \frac{1}{\epsilon}$, the upper bound \textcolor{black}{in Theorem~\ref{thm:Tslot}} is a convex function of $T$.
This implies that for any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}(T)\in \Lambda$, there exists an optimal value of $T$ which stabilizes the system and introduces minimal delay bound. However, when the arrival rates are unknown, it may not be practical to search for this optimal value.
Finally, we note that for some special values of $T$, the network can still be stabilized even when $T\leq 1/\epsilon$. For example, when $T$ is chosen as $\prod_{(i,j)}\frac{1}{r_{ij}}$, then, under any possible activation pattern $s\in\mathcal{S}$, all partial packets are decoded at the end of the $T$-slot window. This implies that the policy can stabilize any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in\Lambda$. This phenomenon will be illustrated through examples in Section~\ref{sec:simu}. For small networks, such a value of $T$ can be easily computed and may be small; for large networks with many weak links, such a value may still be quite large.
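As a small illustration with hypothetical link rates, consider weak links with $r_{ij}\in\{1/2,1/3,1/4\}$ (all other links strong):
\begin{verbatim}
from functools import reduce

inv_rates = [2, 3, 4]                       # 1/r_ij for the weak links
T = reduce(lambda a, b: a * b, inv_rates)   # T = 24
# Within one epoch of 24 slots every continuously active link transfers
# an integer number of packets, so no partial packet remains to be
# cleared in step 4).
\end{verbatim}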
\section{Virtual Queue Based Algorithm}\label{sec:virtual}
In this section, we develop a second algorithm that exhausts the stability region without needing to set a large $T$, and thereby attains better delay performance. As seen in Theorem~\ref{thm:Tslot}, the delay is caused by the long planning window and the infrequent update of the control action. Therefore, in order to obtain better delay performance, we intuitively need to update our policy more frequently. This requires us to design more sophisticated mechanisms to handle the partial packet queues, and to address the additional analysis challenges brought by the temporally coupled decoding process over weak links.
Our approach is to construct a network that contains ``virtual'' queues, which handle the partial packets and decoding process over weak links. The resulting network has the same maximum stability region as the original network. By stabilizing the constructed network, the original network is also stabilized.
Specifically, in order to handle the partial packet queue in a simple and effective way, we introduce buffers over weak links. We assume there is a buffer at the transmitter side for each weak link. Then, if a node wants to send a packet over a weak link, the packet is pushed into the buffer. The buffer keeps the packet until it is successfully decoded at the corresponding receiver.
The intuition behind the introduction of these buffers is that, since we don't want dropped partial packets to lose much effective rate over weak links, once the system picks a forwarding node for a particular packet, the system never changes this decision.
For transmissions over weak links, a packet can only be decoded and transferred to the next hop when enough information has been collected. Under the $T$-slot algorithm, since control actions update every $T$ slots and partial queues are cleared at the end of every epoch, it is relatively simple to track the queue evolution and perform the analysis. When control actions change every slot, queues may evolve in a more complicated way and are thus more difficult to track and analyze.
In order to overcome the analysis challenges, we introduce a second buffer at the receiver side of each weak link. Under the proposed algorithm, we ensure that the receiver never tries to decode a packet until it accumulates enough information, i.e., queue length in the second buffer only decreases when it is sufficiently long. By doing this, the evolution of the queue lengths can be tracked and analyzed easily.
\textcolor{black}{Essentially, we only need to introduce virtual nodes and virtual buffers over weak links in order to handle partial packets. However, link rates may vary over time and vary in different activation patterns (discussed in Sec.~\ref{sec:vary}). Therefore, for the virtual queue based algorithm, we introduce virtual nodes and buffers over both weak links and strong links, and treat them uniformly.}
\subsection{The Virtual Queue Vector}
We divide $Q^c_i(t)$ into two parts. The first stores the packets that have not yet been transmitted in any previous slots, denoted as $U^c_i(t)$. The second stores packets partially transmitted over some links but not yet decoded, denoted as $V^c_i(t)$. Since each packet in the second part is associated with some link, in order to prevent any loss of effective rate caused by dropped partial packets, we require these packets to be transmitted over the same link until they are decoded.
We use $V^{(k)}_{ij}(t)$ to denote the information of packet $k$ required to be transmitted over link $(i,j)$, and $P^{(k)}_{ij}(t)$ to denote the accumulated information of packet $k$ at node $j$.
We define $V^c_{ij}(t)=\sum_{k\in \mathcal{T}_c} V^{(k)}_{ij}(t)$, and $P^c_{ij}(t)=\sum_{k\in \mathcal{T}_c} P^{(k)}_{ij}(t)$, where we recall that $\mathcal{T}_c$ is the set of packets of commodity $c$. Note that $P^c_{ij}(t)$ is different from $P^c_{j}(t)$ defined in Section~\ref{sec:Tslot}, since the latter is associated with node $j$ and the former is associated with link $(i,j)$.
Associated with virtual queues, we define {\it virtual nodes}, as depicted in Fig.~\ref{fig:virtual}. For the link from node $i$ to node $j$, we associate one virtual node with $\{V^c_{ij}\}_c$ and a second with $\{P^c_{ij}\}_c$. The virtual node associated with $\{V^c_{ij}\}_c$ is denoted as $v_{ij}$, while the virtual node associated with $\{P^c_{ij}\}_c$ is denoted as $p_{ij}$. We have decomposed the weak link $(i,j)$ into three links: $(i,v_{ij}), (v_{ij},p_{ij}),(p_{ij},j)$, with link rates $1,r_{ij},1$, respectively. The virtual nodes and corresponding link rates for link $(j,i)$ can be defined in a symmetric way.
We follow the definition of control actions, where $\left\{\beta_{ij}^{(k)}(t)\right\}$ represent the control action of the system at time $t$. Depending on whether or not packet $k$ has already been transmitted by node $i$, we also divide the decision actions into two types, denoted as $\beta^{1(k)}_{ij}$ and $\beta^{2(k)}_{ij}$, respectively.
In the following algorithm, we only make control decisions for the packets at the head of corresponding queues. Therefore we can replace the superscript packet index $(k)$ by its commodity $c$ without any worry of confusion.
When $\beta^{1c}_{ij}(t)=1$, node $i$ pushes a {\it new} packet from $U^c_i(t)$ into the tail of $V^c_{ij}$ at the beginning of slot $t$. This implies that the system assigns node $j$ to be the next forwarder for that packet. Once the packet is pushed into $V^c_{ij}$, we transmit the packet that is at the head of $V^c_{ij}$ to node $j$, generally a different packet. Thus, an amount $r_{ij}$ of information can be accumulated at the tail of $P^c_{ij}(t)$, and the length of $V^c_{ij}$ is reduced by $r_{ij}$. This mechanism ensures that the packets in the virtual buffer are transmitted and decoded in a FIFO fashion.
When $\beta^{2c}_{ij}(t)=1$, without pushing a new packet into the buffer, we retransmit the packet at the head of $V^c_{ij}(t)$.
We let
\begin{align}
\beta^c_{ij}(t)&=\beta_{ij}^{1c}(t)+\beta^{2c}_{ij}(t).
\end{align}
We require that
\begin{align}\label{eqn:bcon}
\sum_{c,j}\beta^c_{ij}(t)\leq 1,\quad \forall i,t.
\end{align}
Further, we define $f^c_{ij}(t)\in\{0,1\}$ as binary decoding control actions. $f^c_{ij}(t)=1$ indicates that receiver $j$ has accumulated enough information to decode the packet at the head of $P^c_{ij}(t)$. It then moves that packet out of $P^c_{ij}(t)$ and into $U^c_j(t)$. We impose the following constraint
\begin{align}\label{eqn:fcon}
f^c_{ij}(t)&\leq P^c_{ij}(t),\quad\forall c,i,j
\end{align}
which indicates that $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)\geq 1$, i.e., when receiver $j$ has accumulated enough information to decode the packet at the head of $P^c_{ij}(t)$. The reason for imposing this constraint on $f^c_{ij}(t)$ is that we cannot simply use $(P^c_{ij}(t)-f^c_{ij}(t))^+$ to represent the queue length of $P^c_{ij}$ after a decoding action is taken at that queue. If $P^c_{ij}(t)<1$, even if $f^c_{ij}(t)=1$, after the decoding action the queue length will still be $P^c_{ij}(t)$, since no packet can be successfully decoded. This is not equal to $(P^c_{ij}(t)-f^c_{ij}(t))^+$, which is zero in this scenario.
Then, according to constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the queue lengths evolve according to
\begin{align}
U^c_i(t+1)&=\Big( U^c_i(t)\hspace{-0.03in}-\hspace{-0.03in}\sum_{j}\beta_{ij}^{1c}(t)\Big)^+\hspace{-0.03in}+\hspace{-0.03in}\sum_{l}f^c_{li}(t)\hspace{-0.03in}+\hspace{-0.03in}A^c_i(t)\label{eqn:u}\\
V^c_{ij}(t+1)&\leq\left(V^c_{ij}(t)+\beta_{ij}^{1c}(t)(1-r_{ij})-\beta^{2c}_{ij}(t)r_{ij}\right)^+\nonumber\\
&= \left(V^c_{ij}(t)+\beta_{ij}^{1c}(t)-\beta^c_{ij}(t)r_{ij}\right)^+\label{eqn:v}\\
P^c_{ij}(t+1)&\leq P^c_{ij}(t)+\beta^c_{ij}(t)r_{ij}-f^c_{ij}(t)\label{eqn:p}
\end{align}
The inequalities in (\ref{eqn:v}) and (\ref{eqn:p}) come from the fact that $\beta^{1c}_{ij}(t)$ and $\beta^{2c}_{ij}(t)$ can be applied to a dummy packet when a queue is empty. When the corresponding queue is not empty, the inequality becomes an equality.
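For a single $(i,j,c)$ triple, the updates (\ref{eqn:u})-(\ref{eqn:p}) can be rendered as the following sketch, written for the non-empty-queue case in which the inequalities hold with equality; \texttt{decoded\_in} stands for $\sum_{l}f^c_{li}(t)$ and \texttt{arrivals} for $A^c_i(t)$.
\begin{verbatim}
def update_virtual_queues(U, V, P, beta1, beta2, r, f,
                          decoded_in, arrivals):
    # U: full packets not yet transmitted; V: packets committed to the
    # link; P: information accumulated at the receiver side.
    beta = beta1 + beta2          # at most one of beta1, beta2 is 1
    U_next = max(U - beta1, 0) + decoded_in + arrivals
    V_next = max(V + beta1 - beta * r, 0)
    P_next = P + beta * r - f     # f = 1 only if P >= 1 (eqn:fcon)
    return U_next, V_next, P_next
\end{verbatim}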
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{virtual_queue2.eps}}
\end{center}
\vspace{-0.15in}
\caption{The constructed virtual system.}
\label{fig:virtual}
\vspace{-0.15in}
\end{figure}
Define
\begin{align}\label{eqn:lya12}
L(\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t))&=\sum_{c,i}(U^c_i(t))^2+\sum_{c,(i,j)}\hspace{-0.03in}(V^c_{ij}(t))^2\nonumber\\
&\quad+\sum_{c,(i,j)}\hspace{-0.03in}(P^c_{ij}(t)-\eta)^2
\end{align}
where $\eta$ is a parameter used to control the length of $P^c_{ij}(t)$.
Define $\Delta(t)$ as the one-slot sample path Lyapunov drift:
\begin{align*}
\Delta(t)&:= L(\mathbf{U}(t+1),\hspace{-0.03in}\mathbf{V}(t+1),\hspace{-0.02in}\mathbf{P}(t+1))-L(\mathbf{U}(t),\hspace{-0.03in}\mathbf{V}(t),\hspace{-0.02in}\mathbf{P}(t))
\end{align*}
\begin{Lemma}\label{lemma:1drift}
Under constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the sample path Lyapunov drift satisfies
\begin{align}
\Delta(t)&\leq 2\sum_{c,i}U^c_i(t)A^c_i(t)-2\sum_{c,i,j}[U^c_i(t)-V^c_{ij}(t)]\beta^{1c}_{ij}(t)\nonumber\\
&\quad-2\sum_{c,i,j}[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\beta^c_{ij}(t)\nonumber\\
&\quad -2\sum_{c,i,j}[P^c_{ij}(t)-\eta-U^c_j(t)]f^c_{ij}(t)+\alpha_2\label{eqn:delta}
\end{align}
where
\begin{align}
\alpha_2&=KN(d+A_{max})^2+2N+KNd
\end{align}
\end{Lemma}
The proof of this lemma is provided in Appendix~\ref{apx:lemma_1drift}.
\subsection{The Algorithm}\label{sec:virtual_algo}
In contrast to the algorithm of Section~\ref{sec:Tslot}, this algorithm updates every timeslot. The purpose of the algorithm is to minimize the right hand side of (\ref{eqn:delta}) given the current $\mathbf{U},\mathbf{V},\mathbf{P}$.
\begin{itemize}
\item[1)] \textbf{Find per-link backpressure.} At the beginning of a timeslot, node $i$ checks its neighbors and computes the differential backlogs. We compute the weight for the link between node $i$ and the first virtual node, and, separately, the weight for the link between the two virtual nodes. Specifically, the weight for control action $\beta^{1c}_{ij}$ is computed as
$$W^{1c}_{ij}(t)= [U^c_i(t)-V^c_{ij}(t)+(V^c_{ij}(t)-P^c_{ij}(t))r_{ij}]^+$$
and the weight for control action $\beta^{2c}_{ij}$ is computed as
$$W^{2c}_{ij}(t)=[V^c_{ij}(t)-P^c_{ij}(t)]^+r_{ij}$$
The weight of commodity $c$ over link $(i,j)$ is $W^c_{ij}(t)=\max\{W^{1c}_{ij}(t),W^{2c}_{ij}(t)\}$.
The weight for the link $(i,j)$ is $$W_{ij}(t)=\max_c W^{c}_{ij}(t),$$ and the optimal commodity $$c^*_{ij}=\arg\max_c W^{c}_{ij}(t).$$
\item[2)] \textbf{Select forwarder.} Choose the potential forwarder of the current slot for node $i$ with the maximum weight $W_{ij}(t)$ and denote it as $j^*_i=\arg \max_j W_{ij}(t)$.
\item[3)] \textbf{Choose activation pattern.} Define the optimal activation pattern $s^*$ as the pattern $s\in S$ that maximizes
$$\sum_{i\in s}W_{ij^*_i}.$$
\item[4)] \textbf{Transmit packets.} For each $i\in s^*$, if $W_{ij^*_i}>0$, let node $i$ transmit a packet of commodity $c^*_{ij^*_i}$ to node $j^*_i$. For strong links, node $i$ transmits a packet from the head of $U^{c^*}_i$. If link $(i,j^*)$ is a weak link, and $W_{ij}(t)=W_{ij}^1(t)$, node $i$ pushes a new packet from $U^{c^*}_i$ into $V^{c^*}_{ij^*_i}$ and transmits the packet from the head of $V^{c^*}_{ij^*_i}(t)$; otherwise, node $i$ resends the packet at the head of $V^{c^*}_{ij^*_i}(t)$.
\item[5)] \textbf{Decide on decoding actions.} For each link $(i,j)$ and each commodity $c$, we choose $f^c_{ij}(t)\in\{0,1\}$ to maximize
\begin{align}
[P^c_{ij}(t)-\eta-U^c_{j}(t)]f^c_{ij}(t)
\end{align}\textcolor{black}{where $\eta$ is a parameter greater than or equal to 1. We let $f^c_{ij}(t)=1$ when $P^c_{ij}(t)-\eta-U^c_{j}(t)=0$.}
\end{itemize}
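The per-link weight computation of step 1) amounts to the following small routine (a sketch for a single commodity; the surrounding maximizations over commodities, forwarders and activation patterns follow steps 1)-3) exactly as in the $T$-slot algorithm):
\begin{verbatim}
def link_weights(U_i, V_ij, P_ij, r_ij):
    # Step 1: weights of the two control actions over link (i, j).
    W1 = max(U_i - V_ij + (V_ij - P_ij) * r_ij, 0.0)  # push a new packet
    W2 = max(V_ij - P_ij, 0.0) * r_ij                 # retransmit head of V
    return (W1, "beta1") if W1 >= W2 else (W2, "beta2")
\end{verbatim}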
\begin{Lemma}\label{lemma:Pij}
Under the above virtual queue based algorithm: $(a)$ If $P^c_{ij}(t)<\eta$ for some weak link $(i,j)$ and slot $t$, then $f^c_{ij}(t)=0$. $(b)$ If $P^c_{ij}(t_0)\geq \eta-1$, then, under the proposed algorithm, $P^c_{ij}(t)\geq \eta-1$ for every $t\geq t_0$.
\end{Lemma}
\begin{Proof}
In order to maximize $(P^c_{ij}(t)-\eta-U^c_j(t))f^c_{ij}(t)$, the algorithm sets $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)-\eta-U^c_j(t)\geq 0$. Therefore, since $U^c_j(t)\geq 0$, if $P^c_{ij}(t)<\eta$, then $f^c_{ij}(t)$ must equal zero, which proves $(a)$.
Now suppose that $P^c_{ij}(t)\geq \eta-1$ for some slot $t$. We show that it also holds for $t+1$. If $P^c_{ij}(t)\geq \eta$, then it can decrease by at most one packet on a single slot, so that $P^c_{ij}(t+1)\geq P^c_{ij}(t)-f^c_{ij}(t)\geq \eta-1$. If $P^c_{ij}(t)<\eta$, we must have $f^c_{ij}(t)=0$, the queue cannot decrease in slot $t$, and we again have $P^c_{ij}(t+1)\geq \eta-1$.
\end{Proof}
With Lemma~\ref{lemma:Pij}, we can see that when setting $\eta=1$, under the proposed algorithm, if $P^c_{ij}(t)<1$ for some weak link $(i,j)$ and slot $t$, then $f^c_{ij}(t)=0$; that is, $f^c_{ij}(t)$ can only equal one when $P^c_{ij}(t)\geq 1$. Thus, constraint (\ref{eqn:fcon}) is automatically satisfied in every slot under the proposed algorithm.
\begin{Theorem}\label{thm:capacity2}
For a network with given link rates $\{r_{ij}\}$ and a feasible activation pattern set $\mathcal{S}$, the network capacity region $\Lambda'$ for the constructed network consists of all rate vectors $\{\lambda^c_n\}$ for which there exist flow variables $\{\mu^{vc}_{ij}\}, v=1,2,3$, together with a probability $\pi_s$ for each possible activation pattern $s\in \mathcal{S}$, such that
\begin{align}
\mu^{vc}_{ij}&\geq 0,\quad \mu^{vc}_{ci}=0,\quad \mu^{vc}_{ii}=0, \quad \forall i,j,v,c\label{eqn:cap21}\\
\sum_{l}\mu^{3c}_{li}+\lambda^c_i&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c\label{eqn:cap22}\\
\mu^{1c}_{ij}&\leq \mu^{2c}_{ij},\quad \mu^{2c}_{ij}\leq \mu^{3c}_{ij}\quad \forall i,j,c\label{eqn:cap24}\\
\sum_c\mu^{2c}_{ij}&\leq \sum_c\sum_{s\in S}\pi_s\theta^c_{ij}(s)r_{ij},\quad \forall i,j\label{eqn:cap23}\\
\sum_{s\in \mathcal{S}}\pi_s&\leq1
\end{align}
where the probabilities $\theta^c_{ij}(s)$ satisfy
\begin{align}
\theta^c_{ij}(s)=0 \mbox{ if }i\notin s, \quad \sum_{c,j}\theta^c_{ij}(s)= 1,\forall i\label{eqn:cap25}
\end{align}
\end{Theorem}
\begin{Proof}
The necessity part can be proved in the same way as for Theorem~\ref{thm1}. The sufficiency will be proved by constructing an algorithm that stabilizes all rate vectors satisfying the constraints.
\end{Proof}
In this constructed virtual network, $\mu_{ij}^{1c}, \mu_{ij}^{2c}, \mu_{ij}^{3c}$ can be interpreted as the flow over links $(i,v_{ij})$, $(v_{ij},p_{ij})$, $(p_{ij},j)$, respectively.
The constraints in (\ref{eqn:cap21}) represent non-negativity and flow efficiency constraints. The constraints in (\ref{eqn:cap22}), (\ref{eqn:cap24}) represent flow conservation constraints, where the exogenous arrival flow rates for nodes $v_{ij}, p_{ij}$ are zero. The constraints in (\ref{eqn:cap23}) represent the physical link constraint for the virtual link $(v_{ij},p_{ij})$, which equals the link constraint for the real link $(i,j)$ in the original system. Note that there are no explicit link constraints for $(i,v_{ij})$ and $(p_{ij},j)$, since the transfer of packets over these links happens within the same node, and there is no physical link constraint on them.
\begin{Lemma}
The network capacity region for the virtual network $\Lambda'$ defined in Theorem~\ref{thm:capacity2} is equal to that for the original system $\Lambda$ defined in Theorem~\ref{thm1}.
\end{Lemma}
\begin{Proof}
First, we show that if $\boldsymbol{\lambda}\in \Lambda'$, then it must lie in $\Lambda$ as well. This can be shown directly by letting $\mu^c_{ij}=\mu_{ij}^{2c}$. Thus, we have $\mu_{ij}^{1c}\leq \mu^c_{ij}\leq\mu_{ij}^{3c}$. Plugging this into (\ref{eqn:cap22}), we obtain (\ref{eqn:cap2}), i.e., if $\boldsymbol{\lambda}$ satisfies the constraints in (\ref{eqn:cap21})-(\ref{eqn:cap25}), it must satisfy (\ref{eqn:cap1})-(\ref{eqn:cap5}) as well. Thus $\boldsymbol{\lambda}\in \Lambda$.
The other direction can be shown in the following way: we prove that for any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in \Lambda$, $\boldsymbol{\lambda}+\frac{\epsilon}{2d+1}\in \Lambda'$, where $d$ is the maximum degree of the network.
Since $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in \Lambda$, we have
\begin{align}
\sum_{l}\mu^c_{li}+\lambda^c_i+\epsilon&\leq \sum_{j}\mu^c_{ij},\quad \forall i\neq c.\label{eqn:cap32}
\end{align}
By letting $\mu_{ij}^{2c}=\mu^c_{ij}$, we have that (\ref{eqn:cap23})-(\ref{eqn:cap25}) are satisfied. At the same time, we let $\mu_{ij}^{1c}+\epsilon_1=\mu_{ij}^{3c}-\epsilon_1=\mu_{ij}^{2c}$, and plug them into (\ref{eqn:cap32}), which gives
\begin{align}
\sum_{l}(\mu^{3c}_{li}-\epsilon_1)+\lambda^c_i+\epsilon&\leq \sum_{j}(\mu^{1c}_{ij}+\epsilon_1),\quad \forall i\neq c.
\end{align}
Therefore,
\begin{align}
\sum_{l}\mu^{3c}_{li}+\lambda^c_i+\epsilon-2d\epsilon_1&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c.
\end{align}
By letting $\epsilon-2d\epsilon_1=\epsilon_1$, we have
\begin{align}
\sum_{l}\mu^{3c}_{li}+\lambda^c_i+\epsilon_1&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c\\
\mu_{ij}^{1c}+\epsilon_1&=\mu_{ij}^{2c},\quad \mu_{ij}^{2c}+\epsilon_1=\mu_{ij}^{3c}.
\end{align}
Thus, we have $\boldsymbol{\lambda}+\boldsymbol{\epsilon}_1\in \Lambda'$. As $\epsilon\rightarrow 0$, $\epsilon_1$ approaches zero as well. Thus, $\Lambda=\Lambda'$.
\end{Proof}
\begin{Theorem}\label{thm:virtual}
\textcolor{black}{For $\eta\geq 1$}, the proposed algorithm stabilizes any rate vector satisfying $\boldsymbol{\lambda}+\boldsymbol{\epsilon} \in \Lambda$. The average expected queue backlog in the system is upper bounded by $$\frac{(2d+1)(KN(d+A_{max})^2+2N+(2\eta+1)KNd)}{\epsilon}.$$\end{Theorem}
The proof of the theorem is given in Appendix~\ref{apx:thm_virtual}. \textcolor{black}{Since the upper bound is monotonically increasing in $\eta$, we can always set $\eta$ to 1 to achieve a better delay performance.}
The algorithm updates every slot. This avoids the delay caused by infrequent policy updating in the $T$-slot algorithm. On the other hand, we introduce virtual queues in the system. Since the differential backlog in Step 1) of the algorithm is not the physical differential backlog between nodes $(i,j)$ in the real system, the inaccuracy of the queue length information can, potentially, increase the average backlog in the system. This is reflected by the $2d+1$ factor in the upper bound. But for arrival rate vectors that are close to the boundary of the network capacity region, the $T$-slot algorithm can only stabilize the system if $T$ is large. In such situations, the virtual-queue based algorithm thus attains better delay performance.
The algorithm achieves the maximum stability region of the network without any pre-specified parameter that depends on traffic statistics. This is a significant advantage over the $T$-slot algorithm, where the stabilizing parameter $T$ depends on how close the rate vector $\boldsymbol{\lambda}$ is to the boundary of the network stability region $\Lambda$.
\section{Discussions}
\subsection{Enhanced Virtual Queue Based Algorithm}
According to Theorem~\ref{thm:Tslot}, the average expected backlog in the system under the $T$-slot algorithm is $O(T^2)$, which indicates poor delay performance when $T$ is large. The virtual queue based algorithm avoids the long delay caused by the infrequent updating of queue length information. However, because of the introduction of virtual nodes, packets accumulate in virtual queues over weak links, which negatively impacts delay performance, especially when the system is lightly loaded. In order to improve delay performance, we propose to enhance the virtual queue based algorithm by adjusting the term associated with $V_{ij}$ in the Lyapunov function.
Define a modified Lyapunov function
\begin{align}\label{eqn:lya3}
L(\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t))&=\sum_{c,i}(U^c_i(t))^2+\sum_{c,(i,j)}(V^c_{ij}(t)+\gamma/ r_{ij})^2\nonumber\\
&\quad+\sum_{c,(i,j)}(P^c_{ij}(t)-\eta)^2.
\end{align}
Compared with (\ref{eqn:lya12}), we have added a term $\gamma/ r_{ij}$ to each $V^c_{ij}(t)$ in (\ref{eqn:lya3}). This is equivalent to adding a queue backlog of $\gamma/ r_{ij}$ to the virtual queue.
Following an analysis similar to that of the virtual queue algorithm (cf. Appendix~\ref{apx:lemma_1drift}), we can show that under constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the sample path Lyapunov drift satisfies
\begin{align*}
\Delta(t)&\leq \hspace{-0.02in}2\sum_{c,i}U^c_i(t)A^c_i(t)\hspace{-0.02in}-\hspace{-0.02in}2\hspace{-0.02in}\sum_{c,i,j}[V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t)]\beta^c_{ij}(t)\nonumber\\
&\quad-2\sum_{c,i,j}[U^c_i(t)-(V^c_{ij}(t)+\gamma/r_{ij})]\beta^{1c}_{ij}(t)\nonumber\\
&\quad -2\sum_{c,i,j}[(P^c_{ij}(t)-\eta)-U^c_j(t)]f^c_{ij}(t)+\alpha_3
\end{align*}
where $\alpha_3$ is a positive constant. In order to minimize the one-step Lyapunov drift, we substitute the following ``modified'' step M1) for step 1) of the virtual queue based algorithm:
\begin{itemize}
\item[M1)] \textbf{Find per-link backpressure.} At the beginning of a timeslot, node $i$ checks its neighbors and computes the differential backlogs. We compute the weight for the link between node $i$ and the first virtual node, and, separately, the weight for the link between the two virtual nodes; a code sketch of this computation is given after the list. Specifically, the weight for control action $\beta^{1c}_{ij}$ is computed as
\begin{align*}
W^{1c}_{ij}(t)= &[U^c_i(t)-(V^c_{ij}(t)+\gamma/r_{ij})\\
&+(V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t))r_{ij}]^+,
\end{align*}
and the weight for control action $\beta^{2c}_{ij}$ is computed as
$$W^{2c}_{ij}(t)=[V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t)]^+r_{ij}.$$
The weight of commodity $c$ over link $(i,j)$ is $W^c_{ij}(t)=\max\{W^{1c}_{ij}(t),W^{2c}_{ij}(t)\}$.
The weight for link $(i,j)$ is $$W_{ij}(t)=\max_c W^{c}_{ij}(t),$$ and the optimal commodity $$c^*_{ij}=\arg\max_c W^{c}_{ij}(t).$$
\end{itemize}
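The following fragment is a minimal sketch of this weight computation. The data layout (dictionaries keyed by node, link, and commodity) and the default $\gamma=1/2$ are illustrative assumptions of the sketch; it only mirrors the formulas for $W^{1c}_{ij}(t)$, $W^{2c}_{ij}(t)$, $W^{c}_{ij}(t)$ and $W_{ij}(t)$ given above.
\begin{verbatim}
# Sketch of step M1): per-link backpressure weights of the enhanced
# virtual queue based algorithm. U, V, P hold the backlogs U^c_i(t),
# V^c_{ij}(t), P^c_{ij}(t); r_ij is the link rate r_{ij}. The container
# layout is an illustrative assumption, not part of the paper.

def link_weight(i, j, r_ij, U, V, P, commodities, gamma=0.5):
    """Return (W_ij, c_star) for link (i, j)."""
    best_w, c_star = 0.0, None
    for c in commodities:
        v_shift = V[(i, j, c)] + gamma / r_ij            # V^c_{ij}(t) + gamma/r_{ij}
        w1 = max(U[(i, c)] - v_shift
                 + (v_shift - P[(i, j, c)]) * r_ij, 0.0)  # weight of beta^{1c}_{ij}
        w2 = max(v_shift - P[(i, j, c)], 0.0) * r_ij      # weight of beta^{2c}_{ij}
        w_c = max(w1, w2)                                  # W^c_{ij}(t)
        if w_c > best_w:
            best_w, c_star = w_c, c
    return best_w, c_star                                  # W_{ij}(t), c*_{ij}
\end{verbatim}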
The rest of the steps remain the same as in Sec.~\ref{sec:virtual_algo}. Following similar steps as in the proof of Theorem 4, we can show that the enhanced version also achieves the maximum stability region. The intuition is that in the heavily backlogged regime $V^c_{ij}(t)\gg \gamma/r_{ij}$; therefore, the added queue backlog is negligible and does not impact the stability of the queue.
As in the virtual queue based algorithm, in the enhanced algorithm we also set $\eta=1$ to guarantee that $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)\geq 1$. We now discuss the effect of the $\gamma$ parameter. We set $\gamma=1/2$. By setting $\gamma$ to be some positive value, the system adds some {\it virtual} backlog to buffer $V_{ij}$, thus preventing packets from entering the empty buffers over the weak links when the system starts from an initially empty state. It also increases the backpressure between $V_{ij}$ and $P_{ij}$. Therefore, packets tend to be pushed through links more quickly, and the decoding time is shortened accordingly. Moreover, in the modified Lyapunov function, we select the weights for the {\it virtual} backlogs of the virtual queues as the inverse of the link rates. The reason for this selection is that the number of slots required to deliver a packet through a link equals the inverse of the link rate; we aim to capture the different delay effects over different links through this adjustment. The intuition behind the enhanced algorithm is that, when the system is lightly loaded, passing packets only through strong links can support the traffic load while still providing good delay performance. Therefore, using weak links is not necessary, and using strong links is preferable. Setting the virtual backlog length to $\gamma/r_{ij}$ forces packets to select strong links and improves the delay performance of the virtual queue based algorithm in the low traffic regime. When the system is heavily loaded and strong links cannot support all traffic flows, the differential backlogs over certain strong links eventually \textcolor{black}{decrease}, and weak links start to be used. \textcolor{black}{The enhanced algorithm is essentially a hybrid of the classic backpressure algorithm ($T=1$) and the virtual queue based algorithm. It always stabilizes the system, and automatically adjusts the fraction of slots in which the system operates under each algorithm to maintain good delay performance. }
\subsection{Dependent Link Rates}\label{sec:vary}
For simplicity of analysis, in the previous sections we assumed that the link capacity for a given transmitter-receiver pair is fixed. We can easily relax this assumption and generalize the system model by assuming that the link rates are not fixed but are rather a function of the chosen activation pattern. Specifically, to better capture the effects of interference, for each link $(i,j)$ we assume the link rate under activation pattern $s$ is $r_{ij}(s)$. Then, the network capacity region under the new assumption is characterized by the same inequalities as in Theorem~\ref{thm1} except that in eqn. (\ref{eqn:cap3}) $r_{ij}(s)$ is used instead of $r_{ij}$.
The scheduling algorithms should be adjusted accordingly in order to achieve the network capacity region. For example, for the $T$-slot algorithm, in each updating slot, after determining the maximizing commodity $c^*_{ij}$ for each link $(i,j)$, the system selects the activation pattern, as well as the corresponding forwarder for each active transmitter, to maximize
\begin{align*}
\max_{s}\sum_{i\in s,j\in\mathcal{N}(i)}[Q^{c^*}_i(t)-Q^{c^*}_j(t)]^+r_{ij}(s).
\end{align*}
The remaining steps remain the same.
For the virtual queue based algorithm, the maximizing commodity should be selected jointly with the activation pattern and the corresponding forwarders.
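The following sketch illustrates one way to implement this selection by enumerating the activation patterns. All data structures (dictionaries keyed by node, commodity, and pattern) are assumptions of the sketch, and here the commodity is picked jointly with the forwarder, as in the virtual queue based variant; for the $T$-slot algorithm, $c^*_{ij}$ would be fixed beforehand.
\begin{verbatim}
# Sketch: activation-pattern selection when link rates depend on the chosen
# pattern s, i.e., r_{ij}(s). Q[(i, c)] is the backlog of commodity c at
# node i; rate[(i, j, s)] is the rate of link (i, j) under pattern s.
# The container layout is an illustrative assumption.

def select_activation_pattern(patterns, neighbors, commodities, Q, rate):
    best_value, best_choice = -1.0, None
    for s in patterns:
        value, forwarders = 0.0, {}
        for i in s:                                   # each active transmitter
            w_best, pick = 0.0, None
            for j in neighbors[i]:
                for c in commodities:
                    w = max(Q[(i, c)] - Q[(j, c)], 0.0) * rate[(i, j, s)]
                    if w > w_best:
                        w_best, pick = w, (j, c)      # forwarder and commodity
            value += w_best
            forwarders[i] = pick
        if value > best_value:
            best_value, best_choice = value, (s, forwarders)
    return best_choice            # chosen pattern and forwarder/commodity map
\end{verbatim}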
\subsection{Distributed Implementation}
The $T$-slot algorithm and the virtual queue based algorithm presented in the previous sections involve solving a constrained optimization problem (max-weight matching) in a centralized fashion. Here we consider distributed implementations. We assume nodes have information about the link rates between themselves and their neighbors, and about the queue backlogs of their neighbors.
In interference-free networks where nodes can transmit simultaneously without interfering with each other, minimizing the Lyapunov drift function can be decomposed into local optimization problems. Each node individually makes its transmission decision based only on locally available information.
In networks with non-trivial interference constraints, we can use the message passing algorithm proposed in \cite{shah_msg_pass07} to select the activation pattern in a distributed way. First, each node selects its forwarder based on its local information. Then, the system uses the belief propagation based message-passing algorithm to select the optimal activation pattern that minimizes the Lyapunov drift. If the underlying interference graph is bipartite, \cite{shah_msg_pass07} shows that the message-passing algorithm always converges to the optimal solution. \textcolor{black}{Another possible approach is to develop carrier sensing based randomized scheduling algorithms. Carrier sensing based distributed algorithms are discussed in \cite{jiang, shah_random}, etc. The throughput optimality of these algorithms is established under certain assumptions. Some other distributed implementations of backpressure algorithms are discussed in Chapter 4.8 of \cite{Neely_now}. }
\section{Simulation Results}\label{sec:simu}
In this section we present detailed simulation results that illustrate the basic properties of the algorithms.
\subsection{A single-commodity scenario}
First consider the $4$-node wireless network shown in Fig.~\ref{fig:4node}, where only the links with nonzero rates are drawn. We consider a single-commodity scenario where new packets destined for node $4$ arrive at node $1$ and node $2$ according to independent Bernoulli processes with rates $\lambda_1$ and $\lambda_2$, respectively. Node $3$ does not have incoming packets; it acts purely as a relay. We assume that the system does not have any activation constraints, i.e., all nodes can transmit simultaneously without causing each other interference.
The maximum stability region $\Lambda$, shown in Fig.~\ref{fig.stability_region}, is the union of rate pairs ($\lambda_1$, $\lambda_2$) defined according to Theorem~\ref{thm1}. If mutual information accumulation is not allowed, the corresponding maximum stability region is indicated by the dashed triangle inside $\Lambda$. This follows because when weak links are not utilized, the only route from node 1 to node 4 is through node 2; thus the sum of arrival rates from nodes 1 and 2 cannot exceed the link rate of 1. When mutual information accumulation is exploited, the weak link from node 1 to node 4 with rate $1/9$ can be utilized, expanding the stability region.
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{4node.eps}}
\end{center}
\vspace{-0.15in}
\caption{The 4-node network used to compare the $T$-slot algorithm and the virtual queue based algorithm. The number labeling each link is the rate of that link.}
\label{fig:4node}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{stability.eps}}
\end{center}
\vspace{-0.15in}
\caption{The maximum stability region of the 4-node network. The inside triangular region is the network capacity region when MIA is not exploited.}
\label{fig.stability_region}
\end{figure}
We first compare the performance of the $T$-slot algorithm for different values of $T$. For each $T$, we conduct the simulation for arrival rates $\lambda_1=\lambda_2=\lambda$ ranging from 0 to 0.55. The resulting average backlog curve is shown in Fig.~\ref{fig:avgbacklog}. When $T=1$, the weak links cannot be utilized, and the algorithm can only stabilize arrival rates up to $\lambda=1/2$, which is point $A$ in Fig.~\ref{fig.stability_region}. When $T=9$, the reciprocal of the weak link rate $1/9$, the algorithm can stabilize arrival rates up to $\lambda=9/17$, corresponding to point $B$ in Fig.~\ref{fig.stability_region}. In this case, all of the partial packets transferred over weak link $(1,4)$ are eventually decoded, and that weak link is fully utilized. This is a special scenario since the value of $T$ perfectly matches the rate of the weak link. For larger networks that contain many weak links, selecting $T$ to match all of the weak links may be prohibitive, since such a value can be very large. Except for such perfect matching scenarios, for more general values of $T$, the weak link is partially utilized; therefore, the maximum $\lambda$ the algorithm can stabilize is some value between 1/2 and 9/17. In general, a larger $T$ stabilizes larger arrival rates, but results in an increased average backlog in the system. This is illustrated by the curves with $T=15,60$ in Fig.~\ref{fig:avgbacklog}.
Fig.~\ref{fig:avgbacklog} also plots the performance of the virtual queue based algorithm. The system can be stabilized up to $\lambda=9/17$ under the virtual queue based algorithm. Compared with the $T$-slot algorithm, the virtual queue based algorithm attains much better delay performance for large values of $\lambda$, i.e., when the system is in the heavy traffic regime. It dominates even the curve with $T=9$ at high rates. For small values of $\lambda$, the virtual queue based algorithm performs worse in terms of delay. This is because the algorithm requires the virtual queues to build up to certain lengths in order to push packets through the weak links. The virtual queue based algorithm has relatively constant delay performance for $\lambda\in [0,1/2]$, while under the $T$-slot algorithm, the average backlog increases monotonically with $\lambda$.
\begin{figure}[t]
\begin{center}
\scalebox{0.3} {\epsffile{evolve_along_lambda_ra.eps}}
\end{center}
\vspace{-0.15in}
\caption{Comparison of average backlog in the system under the algorithms.}
\label{fig:avgbacklog}
\end{figure}
\subsection{A multi-commodity scenario}
Next we consider the $10$-node network shown in Fig.~\ref{fig:10node}. We consider a multi-commodity scenario in which packets destined for node $10$ arrive at node $1$ and packets destined for node $9$ arrive at node $2$. Arrivals are distributed according to two independent Bernoulli processes with rate $\lambda_1$ and $\lambda_2$, respectively. We assume that the system does not have any activation constraints so that all nodes can transmit simultaneously without causing interference.
The maximum stability region $\Lambda$ is shown in Fig.~\ref{fig.stability_region2}. If mutual information accumulation is not allowed, the corresponding maximum stability region is the dashed triangle inside $\Lambda$. This follows because when weak links are not used, the routes from node 1 to node 10 and the routes from node 2 to node 9 must pass through link $(4,7)$; thus the sum of arrival rates from nodes 1 and 2 cannot exceed that link rate. When mutual information accumulation is exploited, weak links can be utilized to form the additional routes $1\rightarrow 5\rightarrow 6\rightarrow 10$ and $2\rightarrow 3\rightarrow 8\rightarrow 9$, thus an expanded stability region can be achieved.
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{10node.eps}}
\end{center}
\caption{The 10-node network used to compare the $T$-slot algorithm and the virtual queue based algorithm. The number labeling each link is the rate of that link.}
\label{fig:10node}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{stability2.eps}}
\end{center}
\caption{The maximum stability region of the 10-node network, where $B=(11/15,11/15)$. The dashed triangular region is the network capacity region when MIA is not exploited.}
\label{fig.stability_region2}
\end{figure}
We first compare the performance of the $T$-slot algorithm for different values of $T$. For each $T$, we conduct the simulation for arrival rates $\lambda_1=\lambda_2=\lambda$ ranging from 0 to 0.75. The resulting average backlog curve is shown in Fig.~\ref{fig:avgbacklog2}. When $T=1$, the weak links are not utilized, so the algorithm can only stabilize arrival rates up to $\lambda=1/2$, which is point $A$ in Fig.~\ref{fig.stability_region2}. When $T=30$, which is the reciprocal of the link rate product $\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{5}$ and perfectly matches the rates of the weak links, the algorithm can stabilize arrival rates up to $\lambda=11/15$, corresponding to point $B$ in Fig.~\ref{fig.stability_region2}. For more general values of $T$, the weak links are partially utilized; therefore, the maximum $\lambda$ the algorithm can stabilize is some value between 1/2 and 11/15. In general, a larger $T$ stabilizes larger arrival rates, but results in an increased average backlog in the system. This is illustrated by the curves with $T=5,17$ in Fig.~\ref{fig:avgbacklog2}. In order to achieve the boundary point, a large $T$ is required in general.
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{revised_plot.eps}}
\end{center}
\vspace{-0.15in}
\caption{Comparison of average backlog in the system under the algorithms.}
\label{fig:avgbacklog2}
\vspace{-0.15in}
\end{figure}
In Fig.~\ref{fig:avgbacklog2}, we also present performance results from simulation of the virtual queue based algorithm. As expected, the system can be stabilized up to the edge of the stability region, $\lambda=11/15$. Compared with the $T$-slot algorithm, the virtual queue based algorithm attains much better delay performance in the heavy traffic regime. It dominates the curve with $T=30$ over the displayed rate region. Similar to the single-commodity scenario, for small values of $\lambda$, the virtual queue based algorithm does not show much advantage in terms of delay. Finally, we also provide simulation results for the enhanced virtual queue based algorithm in Fig.~\ref{fig:avgbacklog2}. The enhanced algorithm stabilizes the full spectrum of arrival rates, i.e., every input rate vector up to $\lambda=11/15$. The delay performance in the light traffic regime ($\lambda<1/2$) is improved under the enhanced version, at the cost of a small delay penalty in the heavy traffic regime. The delay performance transition around $\lambda=1/2$ can be explained by the \textcolor{black}{hybrid nature of the enhanced algorithm, which automatically adjusts between the classic backpressure algorithm ($T=1$) and the virtual queue based algorithm. In the simulation, we set $\gamma=1/2$. The value of $\gamma$ decides the tradeoff between the delay performance in the light traffic regime and the delay performance in the heavy traffic regime. }
\section{Conclusions}\label{sec:conclusions}
In this paper, we analyzed optimal routing and scheduling policies for a wireless network in which mutual information accumulation is exploited at the physical layer. We first characterized the maximum stability region under natural assumptions on the policy space, and showed that it surpasses the network capacity region when mutual information accumulation is not allowed. We then proposed two scheduling policies to cope with the decoding process introduced by mutual information accumulation. The $T$-slot algorithm and the virtual queue based algorithm both achieve the maximum stability region, but the latter has significantly reduced delay in the heavy traffic regime. We also compared the performance of these two policies analytically and numerically.
\appendices
\section{Proof of the Necessity Part of Theorem~\ref{thm1}}\label{apx:thm1}
Suppose that a stabilizing control strategy exists. The strategy may use redundant packet transfers, i.e., allow multiple copies of a packet to exist in the network at the same time. However, under assumption A3, each packet can only have a single parent, i.e., once a node starts to accumulate information for a packet, it can only decode the packet if the total received information from a single transmitter exceeds the amount of information contained in that packet.
Define $X^c_i(t)$ as the total number of packets of commodity $c$ that have arrived at node $i$ up to slot $t$. Define $\mathcal{D}^c(t)$ as the set of distinct packets of commodity $c$ delivered to the destination node $c$ over $[0,t)$, and $D^c(t)=|\mathcal{D}^c(t)|$ be the total number of such distinct packets. Then, we have $D^c(t)=\sum_{i=1}^n Y^c_i(t)$, where $Y^c_i(t)$ is defined in Section~\ref{sec:capacity}. If multiple copies of a packet depart from the system, we only count the first delivered copy. In a stable system, for any traffic flow of commodity $c$ entering source node $i$, the average number of distinct packets delivered to destination node $c$ in each slot must be equal to its arrival rate, thus, we have
\begin{align}\label{eqn:consv}
\lim_{t\rightarrow \infty} \frac{Y^c_i(t)}{t}= \lim_{t\rightarrow \infty} \frac{X^c_i(t)}{t}=\lambda^c_i.
\end{align}
For each distinct delivered packet $k$, there is a single routing path that this packet took from its origin node to its destination node. Let $D^c_{ij}(t)$ denote the total number of distinct packets in $\mathcal{D}^c(t)$ transmitted from node $i$ to node $j$ over $[0,t)$. We have
\begin{align}\label{eqn:flow}
\sum_{l=1,l\neq i}^n D^c_{li}(t)+Y^c_i(t)&= \sum_{j=1,j\neq i}^n D^c_{ij}(t).
\end{align}
Define $T_s(t)$ to be the number of slots within which the system operates with activation pattern $s\in \mathcal{S}$ up to time $t$, and let $T^c_{ij}(s,t)$ be the number of slots that link $(i,j)$ is active and transmitting a packet \textcolor{black}{in $\mathcal{D}^c(t)$} under activation pattern $s$ up to time $t$. Therefore, we have
\begin{align}
D^c_{ij}(t)&= \sum_{s\in \mathcal{S}}T^c_{ij}(s,t)r_{ij},
\end{align}
Thus,
\begin{align}
\frac{ D^c_{ij}(t)}{t}&= \sum_{s\in \mathcal{S}}\frac{T_s(t)}{t}\frac{T^c_{ij}(s,t)}{T_s(t)}r_{ij}.
\end{align}
Define $\mu^c_{ij}(t)=\frac{D^c_{ij}(t)}{t}$, $\pi_s(t)=\frac{T_s(t)}{t}$ and $\theta^c_{ij}(s,t)=\frac{T^c_{ij}(s,t)}{T_s(t)}$. We note that since we can deliver at most one packet from $i$ to $j$ in each time slot,
\begin{align}\label{eqn:con1}
0\leq\mu^c_{ij}(t)\leq 1, \quad \mu^c_{ii}(t)=0,\quad \mu^c_{c,i}(t)=0.
\end{align}
Because only one activation pattern is allowed per slot, we have
\begin{align}
\sum_{s\in \mathcal{S}}\pi_s(t)&=1.
\end{align}
On the other hand, since a node can only transmit a single packet in any slot, and we only count distinct copies of packets, then, if node $i$ is transmitting a packet of commodity $c$ at time $t$, at most one of the received copies at its neighbors can be counted as a distinct packet in $\mathcal{D}^c(t)$. Thus, we have
\begin{align}
\sum_{c,j}\theta^c_{ij}(s,t)&\leq 1\quad \textrm{if }i\in s.\label{eqn:con2}
\end{align}
We can always make inequality (\ref{eqn:con2}) tight by restricting to policies in which a node transmits only when necessary, i.e., if node $i$'s transmission at time $t$ does not contribute to the delivery of any distinct packet to its destination, it should keep silent and be removed from the activation pattern at time $t$. The remaining active nodes form another valid activation pattern, which gives $T_s(t)=\sum_{c,j}T^c_{ij}(s,t)$ for every $i\in s$.
These constraints define a closed and bounded region of finite dimension, thus there must exist an infinite subsequence $\tilde{t}$ over which the individual terms converge to points $\mu^c_{ij}$, $\pi_s$ and $\theta^c_{ij}(s)$ that also satisfy the inequalities (\ref{eqn:con1})-(\ref{eqn:con2}):
\begin{align}
\lim_{\tilde{t}\rightarrow \infty} \mu^c_{ij}({\tilde{t}})=\mu^c_{ij},\\
\lim_{\tilde{t}\rightarrow \infty} \pi_s(\tilde{t})=\pi_s,\\
\lim_{\tilde{t}\rightarrow \infty} \theta^c_{ij}(s,\tilde{t})=\theta^c_{ij}(s).
\end{align}
Furthermore, using (\ref{eqn:consv}) in (\ref{eqn:flow}) and taking $\tilde{t}\rightarrow \infty$ yields
\begin{align}
\sum_{l}\mu^c_{li}+\lambda^c_i&= \sum_{j}\mu^c_{ij},\quad \forall i\neq c.
\end{align}
This proves the result.
\section{Proof of Theorem~\ref{thm:Tslot}}\label{apx:thm_Tslot}
Our algorithm always transmits packets from the head of $Q^c_i(t)$, and there is at most one full copy of each packet in the network. Therefore, without risk of confusion, in the following analysis we drop the packet index $k$ and use the commodity index $c$ instead as the superscript of the control actions $\{\beta^{(k)}_{ij}\}$ and $\{f^{(k)}_{ij}\}$.
First, define the Lyapunov function $$L(\mathbf{Q}(t))=\sum_{c,i}(Q^c_i(t))^2,$$ and the $T$-slot sample path Lyapunov drift as $$\Delta_T(t):=L(\mathbf{Q}(t+T))-L(\mathbf{Q}(t)).$$ Then, we have the following Lemma.
\begin{Lemma}\label{lemma:drift}
Assume the system changes its policy every $T$ slots starting at $t=0$. For all $t=0, T, 2T,\ldots$ and all possible values of $\mathbf{Q}(t)$, under a given policy $\{\beta^c_{ij}(t)\}$, we have
\begin{align}\label{eqn:lya2}
\Delta_T(t)&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_j \beta^c_{ij}(t) r_{ij}-T\sum_{l}\beta^c_{li}(t) r_{li}\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\alpha_1,
\end{align}
where
\begin{align}
\alpha_1&= KNT^2(\mu_{max}+A_{max})^2+NT^2.
\end{align}
\end{Lemma}
\begin{Proof}
Under any given policy, the queue length evolves according to
\begin{align}
Q^c_i(t+1)&= \Big(Q^c_i(t)-\sum_{j}\beta^c_{ij}(t)f^c_{ij}(t)\Big)^+\nonumber\\
&\quad+\sum_{l}\beta^c_{li}(t)f^c_{li}(t)+A^c_i(t).\end{align}
Considering the policy which updates at $t=0,T,2T,\ldots$, we have
\begin{align}
&Q^c_i(t+T)\nonumber\\
&\leq \Big(Q^c_i(t)-\sum_{\tau=0}^{T-1}\sum_{j}\beta^c_{ij}(t)f^c_{ij}(t+\tau)\Big)^+\nonumber\\
&\quad+\sum_{\tau=0}^{T-1}\sum_{l}\beta^c_{li}(t)f^c_{li}(t+\tau)+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\label{eqn:sumT}\\
&\leq \Big(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^+\nonumber\\
&\quad+\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau).\label{eqn:lya10}
\end{align}
In (\ref{eqn:sumT}), we upper bound $Q^c_i(t+T)$ by moving the negative terms into the function $(\cdot)^+$. This follows from the facts that
\begin{align*}
\max[a+b-c,0]&\leq \max[a-c,0]+b\\
\max[\max[a,0]-c,0]&=\max[a-c,0] \textrm{ for }a,b,c\geq 0.
\end{align*}
This is equivalent to letting node $i$ transmit only packets already present in $Q^c_i(t)$, i.e., even if some packets of commodity $c$ arrive at node $i$ during the epoch and all of the packets present in $Q^c_i(t)$ have been cleared, they are not transmitted until the next epoch. Since under the actual policy these packets may be transmitted to the next hop, the actual queue length $Q^c_i(t+T)$ is upper bounded as stated. Eqn.~(\ref{eqn:lya10}) follows from the fact that over the $T$-slot window, the number of successfully delivered packets (including dummy packets) from node $i$ to node $j$ is $\lfloor T\beta^c_{ij}(t)r_{ij}\rfloor$; recall that $\beta^c_{ij}(t)$ is held constant for the whole epoch. Since both sides of the inequality are positive, it also holds for the squares of both sides; thus,
\begin{align}
&(Q^c_i(t+T))^2\nonumber\\
&\leq \Big(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^2\nonumber\\
&\quad+2Q^c_i(t)\Big(\sum_{l}\beta_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)\nonumber\\
&\quad+\Big(\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)^2\label{eqn:square}\\
&\leq(Q^c_i(t))^2-2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\nonumber\\&\quad-\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+ C^c_{i},
\end{align}
where
\begin{align*}
C^c_i&=\Big(\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)^2\\
&\quad+\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^2.
\end{align*}
We use $Q^c_i(t)$ instead of $\left(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\right)^+$ for the cross term in (\ref{eqn:square}). Since the former is never smaller than the latter, the inequality (\ref{eqn:square}) holds. Therefore, we have
\begin{align}
\Delta_T(t)&\leq \sum_{c,i} -2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor-\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+\sum_{c,i} C^c_{i}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)( r_{ij}T-1)-\sum_{l}\beta^c_{li}(t) r_{li}T\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+\sum_{c,i} C^c_{i}\label{eqn:floor}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_j \beta^c_{ij}(t) r_{ij}-T\sum_{l}\beta^c_{li}(t) r_{li}\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\sum_{c,i} C^c_{i}\label{eqn:floor2}
\end{align}
where $\Delta_T(t)$ is the $T$-slot Lyapunov drift, (\ref{eqn:floor}) follows from the fact that $x-1<\lfloor x\rfloor\leq x$, and (\ref{eqn:floor2}) follows from the fact that $\sum_j \beta^c_{ij}(t)\leq 1$. Based on the assumptions that $A_i^c(t)\leq A_{max}$, the maximum number of decoded packets at a node is upper bounded by $\mu_{max}$, and the constraint (\ref{beta_con}), we have
\begin{align*}
\sum_{c,i} C^c_i&\leq KNT^2(\mu_{max}+A_{max})^2+NT^2:=\alpha_1
\end{align*}
The proof is completed.
\end{Proof}
\begin{Lemma}\label{lemma:min}
For a given $\mathbf{Q}(t)$ on slot $t=0,T,2T,\ldots$, under the $T$-slot algorithm, the $T$-slot Lyapunov drift satisfies
\begin{align}
\Delta_T(t)&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}-T\sum_{l}\hat{\beta}^c_{li}(t)r_{li}\nonumber\\
&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\alpha_1,\label{eqn:lya22}
\end{align}
where $\{\hat{\beta}^c_{ij}(t)\}$ are any alternative (possibly randomized) decisions that satisfy (\ref{beta_con})-(\ref{beta_con2}).
Furthermore, we have
\begin{align}
&\mathds{E}\{\Delta_T(t)|\mathbf{Q}(t)\}
\leq \mathds{E}\Big\{\sum_{c,i}-2Q^c_i(t)\Big(T\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}\nonumber\\&\qquad\qquad-T\sum_{l}\hat{\beta}^c_{li}(t)r_{li}
\left.-T\lambda^c_i-1\Big)\right|\mathbf{Q}(t)\Big\}+\alpha_1\label{eqn:lya4}
\end{align}
\end{Lemma}
\begin{Proof}
Given $\mathbf{Q}(t)$, the $T$-slot algorithm makes decisions to minimize the right hand side of (\ref{eqn:lya2}). Therefore, inequality (\ref{eqn:lya22}) holds for all realizations of the random quantities, and hence also holds when taking conditional expectations of both sides. Thus, we have (\ref{eqn:lya4}).
\end{Proof}
\begin{Corollary}\label{cor1}
A rate vector $\boldsymbol{\lambda}+\boldsymbol{\epsilon}$ is in the capacity region $\Lambda$ if and only if there exists a stationary (possibly randomized) algorithm that chooses control decisions subject to constraints (\ref{beta_con})-(\ref{beta_con2}) and independent of the current queue backlog to yield
\begin{align}\label{eqn:stable}
\mathds{E}\Big\{\sum_{j}\beta^c_{ij}r_{ij}-\sum_{l}\beta^c_{li}r_{li}-\lambda^c_i\Big\}\geq \epsilon\quad \forall i\neq c.
\end{align}
\end{Corollary}
\begin{Proof}
The result is an immediate consequence of Theorem~\ref{thm1}. The intuition is to think of $\mathds{E}\left\{\beta^c_{ij}\right\}r_{ij}$ as $\mu^c_{ij}$ in (\ref{eqn:cap1})-(\ref{eqn:cap5}). The necessary part is obtained directly. The sufficient part will be shown in the following section.
\end{Proof}
For any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in \Lambda$, Lemma~\ref{lemma:min} shows that the $T$-slot algorithm minimizes the right hand side of (\ref{eqn:lya4}) over all alternative policies satisfying (\ref{beta_con})-(\ref{beta_con2}). On the other hand, Corollary~\ref{cor1} implies that such a policy can be constructed in a randomized way that is independent of the current queue status in the network and satisfies (\ref{eqn:stable}).
Combining Lemma~\ref{lemma:min} and Corollary~\ref{cor1}, we have
\begin{align*}
&\mathds{E}\{\Delta_T(t)|\mathbf{Q}(t)\}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(\Big(\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}-\sum_{l}\hat{\beta}^c_{li}(t)r_{li}-\lambda^c_i\Big)T-1\Big)+\alpha_1\\
&\leq \sum_{c,i}-2Q^c_i(t)\left(\epsilon T-1\right)+\alpha_1
\end{align*}
Taking expectations of the above inequality over the distribution of $\mathbf{Q}(t)$, we have
\begin{align}
\mathds{E}\{\Delta_T(t)\}&\leq \sum_{c,i}-2\mathds{E}\{Q^c_i(t)\}\left(\epsilon T-1\right)+\alpha_1.
\end{align}
Summing terms over $t=0,T,\ldots,(M-1)T$ for positive integer $M$ yields
\begin{align*}
&\frac{\mathds{E}\{L(\mathbf{Q}(MT))-L(\mathbf{Q}(0))\}}{M}\\
&\leq \frac{\sum_{m=0}^{M-1}\sum_{c,i}-2\mathds{E}\{Q^c_i(mT)\}\left(\epsilon T-1\right)}{M}+\alpha_1,
\end{align*}
i.e.,
\begin{align}
\sum_{m=0}^{M-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT)\}&\leq \frac{ L(\mathbf{Q}(0))-L(\mathbf{Q}(MT))+M\alpha_1}{2\left(\epsilon T-1\right)}\nonumber\\
&\leq \frac{ L(\mathbf{Q}(0))+M\alpha_1}{2\left(\epsilon T-1\right)}.\label{eqn:mT}
\end{align}
We drop $L(\mathbf{Q}(MT))$ in (\ref{eqn:mT}) since $L(\mathbf{Q}(MT))\geq 0$ by the definition of the Lyapunov function.
On the other hand, for $t=0,T,2T,\ldots$ and $0<\tau<T$, we have
\begin{align}
Q^c_i(t+\tau)\leq Q^c_i(t)+(\mu_{max}+A_{max})\tau
\end{align}
where $A_{max}$ is the maximum arrival rate and $\mu_{max}$ is the maximum number of decoded packets in a slot for any node. Therefore,
\begin{align*}
\sum_{m=0}^{M-1}\sum_{\tau=1}^{T-1}\sum_{c,i}&\mathds{E}\{Q^c_i(mT+\tau)\}\leq \sum_{m=0}^{M-1}\sum_{c,i}(T-1)\mathds{E}\{Q^c_i(mT)\}\nonumber\\
&\quad+MT(T-1)KN(\mu_{max}+A_{max})/2
\end{align*}
Combining with (\ref{eqn:mT}), we have
\begin{align*}
&\frac{1}{MT}\sum_{m=0}^{M-1}\sum_{\tau=0}^{T-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT+\tau)\}\nonumber\\
&\leq \frac{1}{M}\sum_{m=0}^{M-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT)\}+KN(T-1)(\mu_{max}+A_{max})/2\\
&\leq \frac{ L(\mathbf{Q}(0))+M\alpha_1}{2M\left(\epsilon T-1\right)}+\frac{KN(T-1)(\mu_{max}+A_{max})}{2}
\end{align*}
Letting $M\rightarrow \infty$, we have
\begin{align*}
&\lim_{t\rightarrow \infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{c,i}\mathds{E}\{Q^c_i(\tau)\}\nonumber\\
&\leq \frac{\alpha_1}{2(\epsilon T-1)}+\frac{KN(T-1)(\mu_{max}+A_{max})}{2}
\end{align*}
Therefore, when $T> \frac{1}{\epsilon}$, the average expected backlog in the network is bounded. Thus, the system is stable.
Since our algorithm operates under assumptions A1-A2, the total number of partial packets in the system, $\sum_{c,i}P^c_i(t)$, is always upper bounded by $\sum_{c,i}{Q^c_i(t)}$. Therefore, if the $\{Q^c_i(t)\}$ are stable, the $\{P^c_i(t)\}$ must be stable, and the overall system is stable.
\section{Proof of Lemma~\ref{lemma:1drift}}\label{apx:lemma_1drift}
Based on (\ref{eqn:u})-(\ref{eqn:p}), we have
\begin{align}
(U^c_i(t+1))^2&\leq (U_i^c(t))^2+\Big(\sum_{j}\beta_{ij}^{1c}(t)\Big)^2\nonumber\\
&-2U^c_i(t)\Big(\sum_{j}\beta_{ij}^{1c}(t)-\sum_{l}f^c_{li}(t)-A^c_i(t)\Big)\nonumber\\
&\quad +\Big(\sum_{l}f^c_{li}(t)+A^c_i(t)\Big)^2\label{eqn:u2}\\
(V_{ij}^c(t+1))^2&\leq (V^c_{ij}(t))^2+2V^c_{ij}(t)\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)\nonumber\\
&\quad+\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)^2\label{eqn:v2}\\
(P_{ij}^c(t+1))^2&\leq (P^c_{ij}(t))^2+2P^c_{ij}(t)(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))\nonumber\\
&\quad+(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))^2\label{eqn:p2}\\
P^c_{ij}(t+1)&\geq P^c_{ij}(t)-f^c_{ij}(t)
\end{align}
Thus,
\begin{align*}
\Delta(t)&\leq \sum_{c,i}-2U^c_i(t)\Big( \sum_{j}\beta_{ij}^{1c}(t)-\sum_{l}f^c_{li}(t)-A^c_i(t)\Big)\nonumber\\
&\quad+\sum_{c,i,j}2V^c_{ij}(t)\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)\nonumber\\
&\quad+\sum_{c,i,j}[2P^c_{ij}(t)(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))+2\eta f^c_{ij}(t)]+C
\end{align*}
where
\begin{align*}
C&=\sum_{c,i}\Big( \sum_{j}\beta_{ij}^{1c}(t)\Big)^2+\sum_{c,i}\Big(\sum_{l}f^c_{li}(t)+A^c_i(t)\Big)^2\\
&\quad+\sum_{c,i,j}\Big[\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)^2+(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))^2\Big]
\end{align*}
Because of constraints (\ref{eqn:bcon})-(\ref{eqn:fcon}), we have
\begin{align}
C&\leq KN(d+A_{max})^2+2N+KNd:=\alpha_2.
\end{align}
Combining terms with respect to link $(i,j)$, we obtain (\ref{eqn:delta}).
\section{Proof of Theorem~\ref{thm:virtual}}\label{apx:thm_virtual}
\begin{Corollary}\label{cor:f}
A rate vector $\boldsymbol{\lambda}+\boldsymbol{\epsilon}$ is in the capacity region $\Lambda'$ if and only if there exists a stationary (possibly randomized) algorithm that chooses control decisions (independent of current queue backlog) subject to constraints (\ref{eqn:bcon}), to yield
\begin{align}
\mathds{E}\{\beta^c_{ij}\}r_{ij}&=\mathds{E}\{\beta^{1c}_{ij}\}+\epsilon\\
\mathds{E}\{f^c_{ij}\}&=\mathds{E}\{\beta^c_{ij}\}r_{ij}+\epsilon\\
\mathds{E}\Big\{\sum_{j}f^c_{ij}-\sum_{l}\beta^{1c}_{li}-\lambda^c_i\Big\}&\geq \epsilon\quad \forall i\neq c
\end{align}
\end{Corollary}
\begin{Proof}
The result is an immediate consequence of Theorem~\ref{thm:capacity2}. The intuition is to let $\mathds{E}\{\beta^c_{ij}\}r_{ij}=\mu_{ij}^{2c}$, $\mathds{E}\{\beta^{1c}_{ij}\}=\mu_{ij}^{1c}$ and $\mathds{E}\{f^c_{ij}\}=\mu_{ij}^{3c}$. The necessary part is obtained directly. The sufficient part will be shown in the following proof.
\end{Proof}
\begin{Lemma}\label{lemma:virtual_drift}
Under the virtual queue based algorithm,
\begin{align}
&\sum_{c,i,j}\Big\{[U^c_i(t)-V^c_{ij}(t)]\beta^{1c}_{ij}(t)+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\beta^c_{ij}
(t)\nonumber\\
&\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]f^c_{ij}(t)\Big\}\nonumber\\
&\geq \sum_{c,i,j}\Big\{[U^c_i(t)-V^c_{ij}(t)]\hat{\beta}^{1c}_{ij}(t)+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\hat{\beta}^c_{ij}(t)\nonumber\\
&\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]\hat{f}^c_{ij}(t)\Big\}\label{eqn:minVirtual}
\end{align}
for any other binary control policy $\{\hat{\beta}^{1c}_{ij},\hat{\beta}^c_{ij}, \hat{f}^c_{ij}\}$ satisfying (\ref{eqn:bcon}).
\end{Lemma}
\begin{Proof}
This lemma is an immediate consequence of the fact that the virtual queue based algorithm maximizes the left hand side of (\ref{eqn:minVirtual}) while satisfying (\ref{eqn:bcon}). The constraint (\ref{eqn:fcon}) is satisfied automatically.
\end{Proof}
Based on Lemma~\ref{lemma:1drift} (\ref{eqn:delta}) and Lemma~\ref{lemma:virtual_drift}, we have
\begin{align*}
&\mathds{E}\{\Delta(t)|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\}\\&\leq -2\sum_{c,i,j}\mathds{E}\left\{[U^c_i(t)-V^c_{ij}(t)]\hat{\beta}^{1c}_{ij}+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\hat{\beta}^c_{ij}\right.\\%\end{align*}
&\left.\left.\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]\hat{f}^c_{ij}(t)\right|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\right\}\\
&\quad+2\sum_{c,i}U^c_i(t)\lambda^c_i+\alpha_2\\
&\leq \mathds{E}\Big\{\sum_{c,i}-2U^c_i(t)\Big( \sum_{j}\hat{\beta}_{ij}^{1c}(t)-\sum_{l}\hat{f}^c_{li}(t)-\lambda^c_i(t)\Big)\Big\}\\
&\quad+\mathds{E}\Big\{\sum_{c,i,j}2V^c_{ij}(t)\left(\hat{\beta}_{ij}^{1c}(t)-\hat{\beta}_{ij}^{c}(t)r_{ij}\right)\Big\}\nonumber\\
&\quad+\mathds{E}\Big\{\sum_{c,i,j}2P^c_{ij}(t)(\hat{\beta}^c_{ij}(t)r_{ij}(t)-\hat{f}^c_{ij}(t))+2\eta \hat{f}^c_{ij}(t)\Big\}+\alpha_2
\end{align*}
Since $\boldsymbol{\lambda}+\boldsymbol{\epsilon} \in \Lambda$ implies $\boldsymbol{\lambda}+\boldsymbol{\epsilon}/(2d+1) \in \Lambda'$, we have
\begin{align*}
&\mathds{E}\{\Delta(t)|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\}\\
&\leq -2\Big(\sum_{c,i}U^c_i(t)+\sum_{c,i,j}V^c_{ij}(t)+\sum_{c,i,j}P^c_{ij}(t)\Big)\frac{\epsilon}{2d+1}\\
&\quad+2\eta KNd+KN(d+A_{max})^2+2N+KNd.
\end{align*}
Therefore, the system is stable, and the average backlog is upper bounded by
\begin{align*}
&\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\Big(\sum_{c,i}U^c_i(\tau)+\sum_{c,i,j}V^c_{ij}(\tau)+\sum_{c,i,j}P^c_{ij}(\tau)\Big)\\
&\leq \frac{(2d+1)(KN(d+A_{max})^2+2N+(2\eta+1)KNd)}{\epsilon}
\end{align*}
The proof is completed.
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper we consider undirected graphs. The node set of a graph
$G=(V,E)$ is sometimes also denoted by $V(G)$, and similarly, the edge
set is sometimes denoted by $E(G)$. A \textbf{subgraph} of a graph
$G=(V,E)$ is a pair $(V', E')$ where $V'\subseteq V$ and $E'\subseteq
E\cap (V' \times V')$.
A graph is called \textbf{subcubic} if every node is
incident to at most 3 edges,
and it is called \textbf{subquadratic} if every node is incident to at most 4 edges.
By a \textbf{cut} in a graph we mean the set
of edges leaving a nonempty proper subset $V'$ of the nodes (note that
we do not require that $V'$ and $V-V'$ induce connected graphs). We use standard terminology and
refer the reader to \cite{frankbook} for what is not defined here.
We consider 3 types of decision problems with 7 types of
objects. The three types of problems are: packing, covering and
partitioning, and the seven types of objects are the following: paths
(denoted by a $\ensuremath{\mathrm{P}}$), paths with specified endvertices (denoted by
$\ensuremath{\mathrm{P}}_{st}$, where $s$ and $t$ are the prescribed endvertices), (simple)
circuits (denoted by $\ensuremath{\mathrm{C}}$: by that we mean a closed walk of length
at least 2, without edge- and node-repetition), forests ($\ensuremath{\mathrm{F}}$), spanning trees
($\ensuremath{SpT}$), (not necessarily spanning) trees ($\ensuremath{\mathrm{T}}$), and cuts (denoted
by $\ensuremath{Cut}$).
Let $G=(V,E)$ be a \textbf{connected} undirected graph (we
assume connectedness in order to avoid trivial case-checkings) and
$\ensuremath{\mathrm{A}}$ and $\ensuremath{\mathrm{B}}$ two (not necessarily different) object types from the 7
possibilities above. The general questions we ask are the following:
\begin{itemize}
\item \textbf{Packing problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{find two edge-disjoint subgraphs} in $G$, one of type $\ensuremath{\mathrm{A}}$ and the other of type $\ensuremath{\mathrm{B}}$?
\item \textbf{Covering problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{cover the edge set} of $G$ with an object of type $\ensuremath{\mathrm{A}}$ and an object of type $\ensuremath{\mathrm{B}}$?
\item \textbf{Partitioning problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{partition the edge set} of $G$ into an object of type $\ensuremath{\mathrm{A}}$ and an object of type $\ensuremath{\mathrm{B}}$?
\end{itemize}
Let us give one example of each type.
A typical partitioning problem is the following: decide whether the
edge set of $G$ can be partitioned into a spanning tree and a
forest. Using our notations this is Problem $\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$. This problem
is in \textbf{NP $\cap$ co-NP} by the results of Nash-Williams \cite{nw},
polynomial algorithms for deciding the problem were given by Kishi and
Kajitani \cite{kishi3}, and Kameda and Toida \cite{kameda}.
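As a concrete illustration of the Nash-Williams characterization behind these algorithms, the following brute-force sketch (exponential time, small graphs only; it is not the polynomial algorithm cited above) tests whether the edge set of a graph decomposes into at most two forests, using the condition that every vertex subset $X$ with $|X|\ge 2$ induces at most $2(|X|-1)$ edges; by the equivalences noted further below, for connected graphs this also decides Problem $\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
\begin{verbatim}
# Brute-force illustration (exponential time): can E(G) be partitioned into
# at most two forests?  By Nash-Williams' arboricity theorem this holds iff
# every vertex subset X with |X| >= 2 induces at most 2(|X| - 1) edges.
# This is NOT the polynomial-time algorithm referenced in the text.

from itertools import combinations

def decomposes_into_two_forests(vertices, edges):
    """vertices: list of labels; edges: list of pairs (u, v)."""
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            induced = sum(1 for (u, v) in edges if u in s and v in s)
            if induced > 2 * (len(s) - 1):
                return False
    return True

# K4 decomposes into two forests; K5 does not (10 > 2 * 4).
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(decomposes_into_two_forests([1, 2, 3, 4], k4))   # True
\end{verbatim}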
A typical packing problem is the following: given four (not
necessarily distinct) vertices $s,t,s',t'\in V$, decide whether there
exists an $s$-$t$ path $P$ and an $s'$-$t'$-path $P'$ in $G$, such that
$P$ and $P'$ do not share any edge. With our notations this is Problem
$\ensuremath{\mathrm{P}}_{st} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{P}}_{s't'} $. This
problem is still
solvable in polynomial time, as was shown by Thomassen
\cite{thomassen} and Seymour \cite{seymour}.
A typical covering problem is the following: decide whether the edge
set of $G$ can be covered by a path and a
circuit. In our notations this is Problem $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$. Interestingly, we found
that this simple-looking problem is NP-complete.
Let us introduce the following short formulation for the partitioning
and covering problems. If the edge set of a graph $G$ can be
partitioned into a type $A$ subgraph and a type $B$ subgraph then we
will also say that \textbf{\boldmath the edge set of $G$ is $A\textbf{\boldmath{\ensuremath{\, + \,}}}
B$}. Similarly, if there is a solution of Problem $A\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} B$ for a
graph $G$ then we say that \textbf{\boldmath the edge set of $G$ is
$A\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} B$}.
\begin{table}[!ht]
\begin{center}
\caption{25 PARTITIONING PROBLEMS}
\label{tab:part}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace}& Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part}) &\textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace}& Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part})& \textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part}) &
\textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & P\'alv\"olgyi \cite{Dome} & planar graphs? \\ \hline
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:3}
& planar graphs? \\ \hline
$\ensuremath{\mathrm{F}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{P\xspace} & Kishi and Kajitani \cite{kishi3}, & in \textbf{P\xspace} for matroids:\\
& & Kameda and Toida \cite{kameda} & Edmonds \cite{edmonds}\\
&& (Nash-Williams \cite{nw}) & \\ \hline
$\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{P\xspace} &Kishi and Kajitani \cite{kishi3}, & in \textbf{P\xspace} for matroids:\\
& & Kameda and Toida \cite{kameda}, & Edmonds \cite{edmonds}\\
& & (Nash-Williams \cite{nw61}, &\\
& & Tutte \cite{tutte}) & \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$&\textbf{P\xspace}& if and only if bipartite &\\
& &(and $|V|\ge 3$) & \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut+F} & planar graphs? \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
\end{tabular}
\end{center}
\end{table}
The setting outlined above gives us 84 problems. Note however that
some of these can be omitted. For example $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ is trivial for each
possible type $A$ in question, because $P$ may consist of only one vertex. For the same reason, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ and $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ type problems are also
trivial. Furthermore, observe that the edge-set $E(G)$ of a graph $G$
is $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}} $ $\Leftrightarrow$ $E(G)$ is $ \ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}}$ $ \Leftrightarrow$
$E(G)$ is $ \ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}} \Leftrightarrow$ $E(G)$ is $ \ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}}$:
therefore we will only consider the problems of form $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}}$ among
these for any $\ensuremath{\mathrm{A}}$. Similarly, the edge set $E(G)$ is $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}} $
$\Leftrightarrow $ $E(G)$ is $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}} $ $\Leftrightarrow$ $E(G)$ is $
\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$:
again we choose to deal with $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
We can also omit the problems $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ because a
cut and a spanning tree can never be disjoint.
A careful count shows that we are left with 44 problems. We
have investigated the status of these. Interestingly, many of these
problems turn out to be NP-complete. Our results are summarized in
Tables \ref{tab:part}-\ref{tab:cov}. We
note that in our NP-completeness proofs we always show that the
considered problem is NP-complete even if the input graph is simple.
On the other hand, the polynomial algorithms given here always work
also for multigraphs (we allow parallel edges, but we forbid loops).
Some of the results shown in the tables were already proved in the
preliminary version \cite{quickpf} of this paper: namely we have
already shown the NP-completeness of
Problems $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$,
and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ there.
\begin{table}[!ht]
\begin{center}
\caption{9 PACKING PROBLEMS}
\label{tab:pack}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{P\xspace} &Seymour \cite{seymour}, Thomassen \cite{thomassen} & \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace} & see Section \ref{sec:alg1} & \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} &Theorem \ref{thm:3}
& planar graphs?\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace} & Bodlaender \cite{bodl} & \textbf{NPC\xspace} in linear matroids\\
&&(see also Section \ref{sec:alg2})& (Theorem \ref{thm:CdC}) \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} &Theorem \ref{thm:3}
& polynomial in \\
&&& planar graphs, \cite{marcin}\\ \hline
$\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{P\xspace} &Imai \cite{imai}, (Nash-Williams \cite{nw61},& in \textbf{P\xspace} for matroids: \\
& & Tutte \cite{tutte}) & Edmonds \cite{edmonds}\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$&\textbf{P\xspace}&always, if $G$ has two & \textbf{NPC\xspace} in linear matroids\\
& &non-adjacent vertices & (Corollary \ref{cor:CutdCut})\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{P_{st}}$&\textbf{P\xspace}&always, except if the graph is an & \\
& & $s$-$t$ path (with multiple copies & \\
& & for some edges)&\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace}&always, except if the graph is & \textbf{NPC\xspace} in linear matroids \\
& & a tree, a circuit, or a bunch of & ($\Leftrightarrow$ the matroid
is not\\
& & parallel edges & uniform, Theorem \ref{thm:CutdC})\\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\caption{10 COVERING PROBLEMS}
\label{tab:cov}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$C \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{Cut}$&\textbf{NPC\xspace}& if and only if 4-colourable & always in planar \\
& & & Appel et al. \cite{4szin}, \cite{4szin2} \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut} & \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar\\ \hline
\end{tabular}
\end{center}
\end{table}
Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ were posed in the open
problem portal called ``EGRES Open'' \cite{egresopen} of the Egerv\'ary
Research Group. Most of the NP-complete problems remain NP-complete
for planar graphs, though we do not yet know the status of Problems $\ensuremath{\mathrm{T}}
\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ for
planar graphs, as indicated in the table.
We point out an interesting phenomenon: planar duality and the
NP-completeness of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ gives that deciding whether the
edge set of a planar graph is the disjoint union of two \emph{simple}
cuts is NP-complete (a \textbf{simple cut}, or \textbf{bond} of a
graph is an inclusionwise minimal cut). In contrast, the edge set of a
graph is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$ if and only if \ the graph is bipartite on at least 3
nodes\footnote{It is easy to see that the edge set of a connected
bipartite graph on at least 3 nodes is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$. On the other
hand, the intersection of a cut and a circuit contains an even
number of edges, therefore the edge set of a non-bipartite graph
cannot be $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$.}, that is, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$ is polynomially
solvable even for non-planar graphs.
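For completeness we note that this characterization is also easy to test mechanically; the following Python sketch (illustrative only; it assumes a connected graph given as an adjacency dict of neighbour sets) decides $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$ by a breadth-first $2$-colouring.
\begin{verbatim}
from collections import deque

def is_cut_plus_cut(adj):
    """Edge set of a connected graph is Cut + Cut  <=>  the graph is
    bipartite and has at least 3 nodes.  `adj` maps a node to the set
    of its neighbours."""
    if len(adj) < 3:
        return False
    start = next(iter(adj))
    colour = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in colour:
                colour[w] = 1 - colour[v]
                queue.append(w)
            elif colour[w] == colour[v]:
                return False        # odd circuit found, not bipartite
    return True
\end{verbatim}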
Some of the problems can be formulated as a problem in the graphic
matroid and therefore also have a natural matroidal generalization.
For example the matroidal generalization of $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$ is the following:
can we find two disjoint circuits in a matroid (given with an independence
oracle, say)?
Of course, such a matroidal question is only interesting here if it
can be solved for graphic matroids in polynomial time. Some of these
matroidal questions are known to be solvable (e.g., the matroidal
version of $\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$), and some of them were unknown (at least
to us): the best example being the (above mentioned) matroidal version of
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.
In the table above we indicate these matroidal generalizations, too,
where the problem has a natural meaning. The matroidal
generalization of spanning trees, forests, and circuits is
straightforward. We do not attempt to make sense of trees, paths, or
$s$-$t$ paths in matroids. On the other hand, cuts deserve some
explanation. In matroid theory, a \textbf{cut} (also called
\textbf{bond} in the literature) of a matroid is defined as an
inclusionwise minimal subset of elements that intersects every
base. In the graphic matroid this corresponds to a simple cut of
the graph defined above.
So we will only consider
packing problems for cuts in matroids: for example, the problem of type
$\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$ in graphs is equivalent to the problem of packing $A$ and a
simple cut in the graph, and therefore its matroidal generalization is
natural. We will discuss these matroidal generalizations in
Section \ref{sec:matroid}.
\section{NP-completeness proofs}\label{sect:npc}
\newcommand{{\sc Planar3\-Reg\-Ham}}{{\sc Planar3\-Reg\-Ham}}
\newcommand{{\sc Planar3\-Reg\-Ham}$-e$}{{\sc Planar3\-Reg\-Ham}$-e$}
A graph $G=(V,E)$ is said to be \textbf{subcubic} if $d_G(v)\le 3$ for
every $v\in V$. In many proofs below we will use Problem
{\sc Planar3\-Reg\-Ham}\ and Problem {\sc Planar3\-Reg\-Ham}$-e$\ given below.
\begin{prob}[{\sc Planar3\-Reg\-Ham}]
Given a $3$-regular planar graph $G=(V,E)$,
decide whether there is a Hamiltonian circuit in $G$.
\end{prob}
\begin{prob}[{\sc Planar3\-Reg\-Ham}$-e$]
Given a $3$-regular planar graph $G=(V,E)$ and an edge $e\in E$,
decide whether there is a Hamiltonian circuit in $G$ through edge $e$.
\end{prob}
It is well-known that Problems {\sc Planar3\-Reg\-Ham}\ and {\sc Planar3\-Reg\-Ham}$-e$\ are
NP-complete (see Problem [GT37] in \cite{gj}).
\subsection{NP-completeness proofs for subcubic planar graphs
}\label{sec:cut+c}
\begin{figure}[!ht]
\begin{center}
\input{k4min.tex}
\caption{An illustration for the proof of Theorem \ref{thm:cut}.}
\label{fig:k4min}
\end{center}
\end{figure}
\begin{thm}\label{thm:cut}
The following problems are NP-complete, even if restricted to subcubic
planar graphs: $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$,
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$,
$\ensuremath{P_{st}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
\end{thm}
\begin{proof}
All the problems are clearly in NP. First we prove the completeness
of $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ using a reduction from Problem
{\sc Planar3\-Reg\-Ham}.
Given an instance of the Problem {\sc Planar3\-Reg\-Ham}\ with the 3-regular
planar graph $G$, construct the following graph $G'$. First subdivide
each edge $e=v_1v_2\in E(G)$ with 3 new nodes $x_e^{v_1},x_e,x_e^{v_2}$ such that
they form a path in the order $v_1,x_e^{v_1},x_e,x_e^{v_2},v_2$. Now for any node
$u\in V(G)$ and any pair of edges $e,f\in E(G)$ incident to $u$
connect $x_e^u$ and $x_f^u$ with a new edge. Finally, delete all the
original nodes $v\in V(G)$ to get $G'$.
Informally speaking, $G'$ is obtained from $G$ by blowing a triangle
into every node of $G$ and subdividing each original edge with a new
node: see Figure \ref{fig:k4min} \textit{a,b,} for an illustration. Note that by
contracting these triangles in $G'$ and undoing the subdivision
vertices of form $x_e$ gives back $G$.
Clearly, the resulting graph $G'$ is still planar and has maximum
degree 3 (we mention that the subdivision nodes of form $x_e$ are only needed for
the Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$).
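Before turning to the proof, we sketch the construction in code; the following Python fragment (illustrative only; the input is a simple $3$-regular graph given as an adjacency dict, and the tuples used as names for the nodes $x_e^v$ and $x_e$ are ad hoc) builds $G'$. Since the original nodes are deleted at the end, the edges incident to them are simply never created.
\begin{verbatim}
from itertools import combinations

def blow_up(G):
    """G': subdivide every edge e = v1v2 by x_e^{v1}, x_e, x_e^{v2},
    join the subdivision nodes next to each original node u into a
    triangle, and delete the original nodes.  G is a simple 3-regular
    graph given as an adjacency dict of neighbour sets."""
    H = {}
    def add(a, b):
        H.setdefault(a, set()).add(b)
        H.setdefault(b, set()).add(a)
    edges = {frozenset((u, w)) for u in G for w in G[u]}
    for e in edges:
        v1, v2 = tuple(e)
        add((e, v1), (e, 'mid'))        # x_e^{v1} -- x_e
        add((e, 'mid'), (e, v2))        # x_e -- x_e^{v2}
    for u in G:
        incident = [frozenset((u, w)) for w in G[u]]
        for e, f in combinations(incident, 2):
            add((e, u), (f, u))         # the triangle blown into u
    return H
\end{verbatim}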
We will prove that $G$ contains a Hamiltonian circuit
if and only if $G'$ contains a circuit covering odd circuits (i.e., the edge-set of
$G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{Cut}$)
if and only if the edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$
if and only if $G'$ contains a circuit covering all the circuits (i.e., the edge
set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$). First let $C$ be a Hamiltonian circuit in
$G$. We define a circuit $C'$ in $G'$ as follows. For any $v\in V(G)$,
if $C$ uses the edges $e, f$ incident to $v$ then let $C'$ use the 3
edges $x_ex_e^v, x_e^vx_f^v, x_f^vx_f$ (see Figure \ref{fig:k4min} \textit{a,b,} for an
illustration). Observe that $G'-C'$ is a forest, proving that the
edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$. Similarly, the edge set of $G'-C'$ is a
cut of $G'$, proving that the edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$. Finally
we show that if the edge set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{Cut}$ then $G$ contains a
Hamiltonian circuit: this proves the sequence of equivalences stated
above (the remaining implications being trivial). Assume that $G'$ has
a circuit $C'$ that intersects the edge set of every odd circuit of
$G'$. Contract the triangles of $G'$ and undo the subdivision nodes of
form $x_e$ and observe that $C'$ becomes a Hamiltonian circuit of $G$.
For the rest of the problems we use {\sc Planar3\-Reg\-Ham}$-e$. Given
the 3-regular planar graph $G$ and an edge
$e=v_1v_2\in E(G)$, first construct the graph $G'$ as above. Next modify $G'$
the following way: if $x_e^{v_1},x_e,x_e^{v_2}$ are the nodes of $G'$ arising from the subdivision of the original edge $e\in E(G)$
then let $G''=(G'-x_e)+\{x_e^{v_i}a_i, a_ib_i, b_ic_i, c_ia_i: i=1,2\}$,
where $a_i,b_i,c_i (i=1,2)$ are 6 new nodes (informally, ``cut off'' the path
$x_e^{v_1},x_e,x_e^{v_2}$ at $x_e$ and substitute
the arising two vertices of degree 1 with two triangles).
An illustration can be seen in Figure \ref{fig:k4min} \textit{a,c}.
Let $s=c_1$ and $t=c_2$.
The following chain of equivalences settles the NP-completeness of the
rest of the problems promised in the theorem. The proof is similar to
the one above and is left to the reader.
There exists a Hamiltonian circuit in $G$ using the edge $e$ $\Leftrightarrow$ the
edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$ $\Leftrightarrow$ the edge set of
$G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$ $\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{\mathrm{T}}$ $\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
\end{proof}
\subsection{NP-completeness proofs based on Kotzig's theorem}
Now we prove the NP-completeness of many other problems in our
collection using
the following elegant result
proved by Kotzig \cite{kotzig}.
\begin{thm}\label{thm:kotzig}
A 3-regular graph contains a Hamiltonian circuit if and only if the
edge set of its line graph can be decomposed into two
Hamiltonian circuits.
\end{thm}
This theorem was used to prove NP-completeness results by Pike in
\cite{pike}.
Another useful and well known observation is the following:
the line graph of a planar 3-regular graph is 4-regular and planar.
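The line graph operation itself is straightforward to implement; a small Python sketch for simple graphs (illustrative only; adjacency dicts of neighbour sets) reads as follows.
\begin{verbatim}
def line_graph(G):
    """Line graph L(G) of a simple graph G: the nodes of L(G) are the
    edges of G, and two of them are adjacent iff they share an endpoint."""
    E = {frozenset((u, w)) for u in G for w in G[u]}
    L = {e: set() for e in E}
    for e in E:
        for f in E:
            if e != f and e & f:
                L[e].add(f)
    return L
\end{verbatim}
For a $3$-regular graph $G$ every edge $uv$ meets two further edges at $u$ and two at $v$, so $L(G)$ is indeed $4$-regular.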
\begin{thm}\label{thm:part}
The following problems are NP-complete, even if restricted to subquadratic planar graphs: $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{st}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}},$ $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{s't'}, \; \ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\; \ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\; \ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{SpT}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{st}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{s't'}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$,
$\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$.
\end{thm}
\begin{proof}
The problems are clearly in NP. Let $G$ be a planar 3-regular
graph. Since $L(G)$ is 4-regular, it is decomposable into two circuits
if and only if it is decomposable into two Hamiltonian circuits. This together with Kotzig's
theorem shows that $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ is NP-complete. For every other problem of
type $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}}$ use $L=L(G)\!-\!st$ with an arbitrary edge $st$ of $L(G)$.
Let $C$ be a circuit of $L$ and observe that (by the number of edges
of $L$ and the degree conditions) $L\!-\!C$ is circuit-free if and only if
$C$ is a Hamiltonian circuit and $L\!-\!C$ is a Hamiltonian path connecting $s$ and $t$.
For the rest of the partitioning type problems we need one more trick. Let us be given
a $3$-regular planar graph $G=(V,E)$ and an edge $e=xy\in E$. We
construct another $3$-regular planar graph $G'=(V',E')$ as
follows. Delete edge $xy$, add vertices $x', y'$, and add edges $xx',
yy'$ and add two parallel edges between $x'$ and $y'$, namely $e_{xy}$
and $f_{xy}$ (note that $G'$ is planar, too). Clearly $G$ has a Hamiltonian circuit
through edge $xy$ if and only if $G'$ has a Hamiltonian circuit. Now consider $L(G')$,
the line graph of $G'$; it is a $4$-regular planar graph.
Note that in $L(G')$ there are two parallel edges
between the nodes $s=e_{xy}$ and $t=f_{xy}$; call these edges $g_1$ and
$g_2$. Clearly, $L(G')$ can be decomposed into two Hamiltonian circuits if and only if
$L'=L(G')\!-\!g_1\!-\!g_2$ can be decomposed into two Hamiltonian paths. Let $P$ be a path
in $L'$ and notice again that the number of edges of $L'$ and the
degrees of the nodes in $L'$ imply that $L'\!-\!P$ is circuit-free
if and only if \ $P$ and $L'\!-\!P$ are two Hamiltonian paths in $L'$.
Finally, the NP-completeness of the problems of type $\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{B}}$ is an
easy consequence of the NP-completeness of the corresponding
partitioning problem $\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{B}}$: use the same construction and observe
that the number of edges forces the two objects in the cover to be
disjoint.
\end{proof}
We remark that the above theorem gives a new proof of the
NP-completeness of Problems $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ and $\ensuremath{P_{st}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, already
proved in Theorem \ref{thm:cut}.
\subsection{NP-completeness of Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$}
First we show the NP-completeness of Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{SpT}$, and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$. Problem $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ was proved to be NP-complete
by P\'alv\"olgyi in \cite{Dome}
(the NP-completeness of this problem with the additional requirement that
the two trees have to be of equal size was proved by Pferschy,
Woeginger and Yao \cite{woeg}). Our reductions here are similar to the
one used by P\'alv\"olgyi in \cite{Dome}. We remark that our first
proof for the NP-completeness of Problems $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{P_{st}}
\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ used a variant of the
construction below (this can be found in \cite{quickpf}), but later we
found that using Kotzig's result (Theorem \ref{thm:kotzig}) a simpler
proof can be given for these.
For a subset of edges $E'\subseteq E$ in a graph $G=(V,E)$, let
$V(E')$ denote the subset of nodes incident to the edges of $E'$,
i.e., $V(E')=\{v\in V: $ there exists an $f\in E' $ with $v\in f\}$.
\begin{thm}\label{thm:3}
Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ are NP-complete
even for graphs with maximum degree 3.
\end{thm}
\begin{proof}
It is clear that the problems are in NP. Their completeness will be
shown by a reduction from the well known NP-complete problems
\textsc{3SAT}\ or the problem \textsc{One-In-Three 3SAT}\ (Problems LO2 and LO4 in
\cite{gj}). Let $\varphi$ be a 3-CNF formula with variable set
$\{x_1,x_2,\dots,x_n\}$ and clause set $\ensuremath{\mathcal{C}}=\{C_1,C_2,\dots,C_m\}$
(where each clause contains exactly 3 literals). Assume that literal
$x_j$ appears in $k_j$ clauses
$C_{a^j_{1}},C_{a^j_{2}},\dots,C_{a^j_{k_j}}$, and literal $\ov{x_j}$
occurs in $l_j$ clauses $C_{b^j_{1}},C_{b^j_{2}},\dots,C_{b^j_{l_j}}$.
Construct the following graph $G_\varphi=(V,E)$.
For an arbitrary clause $C\in \ensuremath{\mathcal{C}}$ we will introduce a new node $u_C$,
and for every literal $y$ in $C$ we introduce two more nodes $v(y,C), w(y,C)$.
Introduce the edges $u_Cw(y,C), w(y,C)v(y,C)$
for every clause $C$ and every literal $y$ in $C$
(the nodes $w(y,C)$ will have degree 2).
For every variable $x_j$ introduce 8 new nodes $z^j_1, z^j_2,$ $ w^j_1,
\ov{w^j_1},$ $w^j_2,$ $ \ov{w^j_2},$ $w^j_3, \ov{w^j_3}$. For every
variable $x_j$, let $G_\varphi$ contain a circuit on the $k_j+l_j+4$
nodes $z^{j}_1,$ $ v(x_j,C_{a^j_{1}}),$ $v(x_j,C_{a^j_{2}}),$ $\dots,$ $
v(x_j,C_{a^j_{k_j}}),$ $ w^j_1, $ $z^j_2,$ $ \ov{w^j_1},$ $
v(\ov{x_j},C_{b^j_{l_j}}),$ $v(\ov{x_j},C_{b^j_{l_j-1}}), $ $\dots,$ $
v(\ov{x_j},C_{b^j_{1}})$ in this order.
We say that this circuit is \textbf{associated to variable
$x_j$}. Connect the nodes $z^j_2$ and $z^{j+1}_1$ with an edge for
every $j=1,2,\dots,n\!-\!1$. Introduce furthermore a path on nodes
$w^1_3,\ov{w^1_3},w^2_3,\ov{w^2_3}, \dots, w^n_3,\ov{w^n_3}$ in this
order and add the edges $w^j_1w^j_2, w^j_2w^j_3, \ov{w^j_1}\ov{w^j_2},
\ov{w^j_2}\ov{w^j_3}$ for every $j=1,2,\dots, n$. Let $s=z^1_1$ and
$t=z^n_2$.
\begin{figure}[!ht]
\begin{center}
\input{gfi4.tex}
\caption{Part of the construction of graph $G_\varphi$ for clause
$C=(\ov{x_1}\vee {x_2}\vee \ov{x_3})$.}\label{fig:G_phi}
\end{center}
\end{figure}
The construction of the graph $G_\varphi$ is
finished. An illustration can be found in Figure \ref{fig:G_phi}.
Clearly, $G_\varphi$ is simple and has maximum degree three.
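The construction can also be given as a short program; the following Python sketch (illustrative only; clauses are encoded DIMACS-style as tuples of nonzero integers, they are assumed to consist of three distinct literals, and the tuples used as node names are ad hoc) returns the edge set of $G_\varphi$ together with $s$ and $t$.
\begin{verbatim}
def build_G_phi(n_vars, clauses):
    """Edge set of G_phi, together with s and t, for a 3-CNF formula with
    variables 1..n_vars and clauses as tuples of nonzero integers
    (j encodes x_j, -j encodes the negation of x_j)."""
    E = set()
    def add(a, b):
        E.add(frozenset((a, b)))
    occ = {j: {True: [], False: []} for j in range(1, n_vars + 1)}
    for c, clause in enumerate(clauses):
        u = ('u', c)
        for lit in clause:
            v, w = ('v', lit, c), ('w', lit, c)
            add(u, w)                      # u_C -- w(y,C)
            add(w, v)                      # w(y,C) -- v(y,C)
            occ[abs(lit)][lit > 0].append(v)
    for j in range(1, n_vars + 1):
        z1, z2 = ('z', j, 1), ('z', j, 2)
        w1, w2, w3 = ('w+', j, 1), ('w+', j, 2), ('w+', j, 3)
        b1, b2, b3 = ('w-', j, 1), ('w-', j, 2), ('w-', j, 3)
        cycle = [z1] + occ[j][True] + [w1, z2, b1] + occ[j][False][::-1]
        for a, b in zip(cycle, cycle[1:] + [cycle[0]]):
            add(a, b)                      # circuit associated to x_j
        add(w1, w2); add(w2, w3); add(b1, b2); add(b2, b3)
        add(w3, b3)                        # path w^j_3 -- \bar w^j_3 -- ...
        if j > 1:
            add(('w-', j - 1, 3), w3)      # ... -- w^j_3 of the next variable
        if j < n_vars:
            add(z2, ('z', j + 1, 1))       # z^j_2 -- z^{j+1}_1
    return E, ('z', 1, 1), ('z', n_vars, 2)
\end{verbatim}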
If $\tau$ is a truth assignment to the variables $x_1,x_2,\dots,x_n$
then we define an $s$-$t$ path $P_\tau$ as follows: for every
$j=1,2,\dots,n$, if $x_j$ is set to TRUE then let $P_\tau$ go through
the nodes $z^j_1, v(\ov{x_j},C_{b^j_{1}}),
v(\ov{x_j},$ $C_{b^j_{2}}),\dots ,$ $v(\ov{x_j},$ $C_{b^j_{l_j}}),$ $\ov{w^j_1},z^j_2$,
otherwise (i.e., if $x_j$ is set to FALSE) let $P_{\tau}$ go through $z^j_1,
v({x_j},$ $C_{a^j_{1}}), $ $v({x_j},$ $C_{a^j_{2}}),\dots
,$ $v({x_j},$ $C_{a^j_{k_j}}),{w^j_1}, z^j_2$.
We need one more concept. An $s$-$t$ path $P$ is called an
\emph{assignment-defining path} if
$v\in V(P),\ d_G(v)=2$ implies $v\in \{s,t\}$.
For such a path $P$ we define the truth assignment $\tau_P$ such that
$P_{\tau_P}=P$.
\begin{cl}
There is an $s$-$t$ path $P\subseteq E$ such that $(V,\; E\!-\!P)$ is
connected if and only if $\varphi\in 3SAT$. Consequently,
Problem $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
If $\tau$ is a truth assignment showing that $\varphi\in 3SAT$ then
$P_\tau$ is a path satisfying the requirements, as one can check. On
the other hand, if $P$ is an $s$-$t$ path such that $(V,\; E\!-\!P)$ is connected
then $P$ cannot go through nodes of degree 2, therefore $P$ is
assignment-defining,
and $\tau_P$ shows
$\varphi\in 3SAT$.
\end{proof}
To show the NP-completeness of Problem
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, modify $G_{\varphi}$ the following way: subdivide the two
edges incident to $s$ with two new nodes $s'$ and $s''$ and connect
these two nodes with an edge. Repeat this with $t$: subdivide the two
edges incident to $t$ with two new nodes $t'$ and $t''$ and connect
$t'$ and $t''$. Let the graph obtained this way be $G=(V,E)$.
Clearly, $G$ is subcubic and simple. Note that the definition of an
assignment defining path and that of $P_\tau$ for a truth assignment
$\tau $ can be obviously modified for the graph $G$.
\begin{cl}
There exists a truth assignment $\tau$ such that every clause in $\varphi$
contains exactly one true literal if and only if there exists a set
$T\subseteq E$ such that $(V(T),\, T)$ is a tree and $(V,\; E\!-\!T)$ is a
spanning tree. Consequently,
Problem $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
If $\tau$ is a truth assignment as above then one can see that
$T=P_\tau$ is an edge set satisfying the requirements.
On the other hand, assume that $T\subseteq E$ is such that $(V(T),\,
T)$ is a tree and $T^*=(V,\; E\!-\!T)$ is a spanning tree. Since $T^*$
cannot contain circuits, $T$ must contain at least one of the 3 edges
$ss',s's'',s''s$ (call it $e$), as well as at least one of the 3 edges
$tt',t't'',t''t$ (say $f$). Since $(V(T),\, T)$ is connected, $T$
contains a path $P\subseteq T$ connecting $e$ and $f$ (note that since
$(V, E\!-\!T)$ is connected, $|T\cap \{ss',s's'',s''s\}|=|T\cap
\{tt',t't'',t''t\}|=1$).
Since $(V,\; E\!-\!P)$ is connected, $P$ cannot go through nodes of
degree 2 (except for the endnodes of $P$), and the edges $e$ and $f$
must be the last edges of $P$ (otherwise $P$ would disconnect $s$ or
$t$ from the rest). Thus, without loss of generality we can assume
that $P$ connects $s$ and $t$ (by locally changing $P$ at its ends),
and we get that $P$ is assignment defining. Observe that in fact $T$
must be equal to $P$, since $G$ is subcubic (therefore $T$ cannot
contain nodes of degree 3). Consider the truth assignment $\tau_P$
associated to $P$; we claim that $\tau_P$ satisfies our requirements.
Clearly, if a clause $C$ of $\varphi$ does not contain a true literal
then $u_C$ is not reachable from $s$ in $G\!-\!T$, therefore every clause
of $\varphi$ contains at least one true literal. On the other hand
assume that a clause $C$ contains at least 2 true literals (say $x_j$
and $\ov{x_k}$ for some $j\ne k$), then one can see that there exists
a circuit in $G\!-\!T$ (because $v(x_j,C)$ is still reachable from
$v(\ov{x_k},C)$ in $G\!-\!T\!-\!u_C$ via the nodes $w^j_1, w^j_2, w^j_3$ and
$\ov{w^k_1}, \ov{w^k_2}, \ov{w^k_3}$).
\end{proof}
Finally we prove the NP-completeness of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$. For the
3CNF formula $\varphi$ with variables $x_1,x_2,\dots,x_n$ and clauses
$C_1,C_2,\dots,C_m$, let us associate the 3CNF formula $\varphi'$ with
the same variable set and clauses $(x_1 \vee x_1 \vee \ov{x_1}),\;
(x_2 \vee x_2 \vee \ov{x_2}), \dots,\; (x_n \vee x_n \vee \ov{x_n}),\;
C_1,\; C_2,\dots, C_m$. Clearly, $\varphi$ is satisfiable if and only if \ $\varphi'$
is satisfiable.
Construct the graph $G_{\varphi'}=(V,E)$ as
above (the construction is clear even if some clauses contain only 2
literals), and let $G=(V,E)$ be obtained from $G_{\varphi'}$ by adding
the edge $st$.
\begin{cl}
The formula $\varphi'$ is satisfiable if and only if there exists a
set $K\subseteq E$ such that $(V(K),\, K)$ is a circuit and $G\!-\!K=(V,\;
E\!-\!K)$ is connected. Consequently, Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
First observe that if $\tau$ is a truth assignment satisfying
$\varphi'$ then $K=P_\tau\cup\{st\}$ is an edge set satisfying the
requirements. On the other hand, if $K$ is an edge set satisfying the
requirements then $K$ cannot contain nodes of degree 2, since $G\!-\!K$ is
connected. We claim that $K$ cannot be a circuit associated to a
variable $x_i$ either, because in this case the node $u_C$ associated to
clause $C=(x_i \vee x_i \vee \ov{x_i})$ would not be reachable in
$G\!-\!K$ from $s$. Therefore $K$ consists of the edge $st$ and an
assignment defining path $P$. It is easy to check (analogously to the
previous arguments) that $\tau_P$ is a truth assignment satisfying
$\varphi'$.
\end{proof}
\noindent As we have proved the NP-completeness of all three problems,
the theorem is proved.
\end{proof}
We note that the construction given in our original proof of the above
theorem (see \cite{quickpf})
was used by Bang-Jensen and Yeo in \cite{bjyeo}. They
settled an open problem raised by Thomass\'e in 2005. They proved
that it is NP-complete to decide $\mathrm{SpA}\wedge \ensuremath{SpT}$ in
digraphs, where $\mathrm{SpA}$ denotes a spanning arborescence and
$\ensuremath{SpT}$ denotes a spanning tree in the underlying undirected graph.
We also point out that the planarity of the graphs in the above proofs
cannot be assumed. We do not know the status of any of the Problems
$\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, and $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ in planar graphs. It was
shown in \cite{marcin} that Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is polynomially
solvable in planar graphs. We also mention that planar duality gives
that Problem $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ in a planar graph is equivalent to finding a
cut in a planar graph that contains no circuit: by the results of
\cite{marcin}, this problem is also polynomially solvable. However
van den Heuvel \cite{heuvel} has shown that this problem is
NP-complete for general (i.e., not necessarily planar) graphs.
We point out an interesting connection to the Graphic TSP
Problem. This problem can be formulated as follows. Given a connected
graph $G=(V,E)$, find a connected Eulerian subgraph of $2G$ spanning
$V$ with minimum number of edges (where $2G=(V,2E)$ is the graph
obtained from $G$ by doubling its edges). The connection is the
following. Assume that $F\subseteq 2E$ is a feasible solution to the
problem. A greedy way of improving $F$ would be to delete edges from it, while
maintaining the feasibility. It is thus easy to observe that this
greedy improvement is possible if and only if the graph $(V,F)$
contains an edge-disjoint circuit and a spanning tree (which is
Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ in our notations). However, slightly modifying
the proof above it can be shown that Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is also
NP-complete in Eulerian graphs (details can be found in
\cite{marcin}).
\begin{thm}\label{thm:cut+F}
Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ is NP-complete.
\end{thm}
\begin{proof}
The problem is clearly in NP. In order to show its completeness let
us first rephrase the problem. Given a graph, Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ asks
whether we can colour the nodes of this graph with two colours such
that no monochromatic circuit exists.
Consider the NP-complete problem
{\sc 2-Colourability of a 3-Uniform Hypergraph}.
This problem is the following: given a 3-uniform hypergraph $H=(V,
\ensuremath{\mathcal E})$, can we colour the set $V$ with two colours (say red and blue)
such that there is no monochromatic hyperedge in $\ensuremath{\mathcal E}$ (the problem is
indeed NP-complete, since Problem GT6 in \cite{gj} is a special case
of this problem). Given the instance $H=(V,\ensuremath{\mathcal E})$ of this problem,
construct the following graph $G$. The node set
of $G$ is $V\cup V_{\ensuremath{\mathcal E}}$, where $V_{\ensuremath{\mathcal E}}$ is disjoint from $V$ and it
contains 6 nodes for every hyperedge in $\ensuremath{\mathcal E}$: for an arbitrary
hyperedge $e=\{v_1,v_2,v_3\}\in \ensuremath{\mathcal E}$, the 6 new nodes associated to it
are $x_{v_1,e}, y_{v_1,e}, x_{v_2,e}, y_{v_2,e}, x_{v_3,e}, y_{v_3,e}$.
The edge set of $G$ contains the following edges: for the hyperedge
$e=\{v_1,v_2,v_3\}\in \ensuremath{\mathcal E}$, $v_i$ is connected with $x_{v_i,e}$ and
$y_{v_i,e}$ for every $i=1,2,3$, and among the 6 nodes associated to $e$
every two are connected by an edge
except for the 3 pairs of form $x_{v_i,e},y_{v_i,e}$ for $i=1,2,3$
(i.e., $|E(G)|=18|\ensuremath{\mathcal E}|$).
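The gadget is easy to generate mechanically; a short Python sketch (illustrative only; hyperedges are given as $3$-element tuples and the node names are ad hoc) follows.
\begin{verbatim}
from itertools import combinations

def cut_plus_forest_instance(hyperedges):
    """Graph G of the reduction: 18 edges per hyperedge, namely all pairs
    among the six gadget nodes except the three pairs x_{v,e}, y_{v,e},
    plus the six edges joining the gadget to the nodes v_1, v_2, v_3."""
    E = set()
    for idx, he in enumerate(hyperedges):
        gadget = []
        for v in he:
            x, y = (v, idx, 'x'), (v, idx, 'y')
            gadget += [x, y]
            E.add(frozenset((v, x)))
            E.add(frozenset((v, y)))
        for a, b in combinations(gadget, 2):
            if (a[0], a[1]) != (b[0], b[1]):   # skip the pairs x_{v,e}, y_{v,e}
                E.add(frozenset((a, b)))
    return E
\end{verbatim}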
The construction of $G$ is finished. An illustration can be found in
Figure \ref{fig:cutplF}. Note that in any two-colouring of $V\cup
V_{\ensuremath{\mathcal E}}$ the 6 nodes associated to the hyperedge $e=\{v_1,v_2,v_3\}\in
\ensuremath{\mathcal E}$ do not induce a monochromatic circuit if and only if there exists
a permutation $i,j,k$ of $1,2,3$ so that they are coloured the
following way: $x_{v_i,e},y_{v_i,e}$ are blue, $x_{v_j,e},y_{v_j,e}$ are
red, and $x_{v_k,e},y_{v_k,e}$ have different colours.
\begin{figure}[!ht]
\begin{center}
\input{cutplF.tex}
\caption{Part of the construction of the graph $G$ in the proof of
Theorem \ref{thm:cut+F}.}\label{fig:cutplF}
\end{center}
\end{figure}
One can check that $V$ can be coloured with 2 colours such that there
is no monochromatic hyperedge in $\ensuremath{\mathcal E}$ if and only if \ $V\cup V_{\ensuremath{\mathcal E}}$ can be
coloured with 2 colours such that there is no monochromatic circuit in
$G$.
\end{proof}
We note that we do not know the status of Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{\mathrm{F}}$ in planar graphs.
\section{Algorithms} \label{sec:alg1}\label{sec:alg2}
\paragraph{Algorithm for $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.}
Assume we are given a connected multigraph $G=(V,E)$ and two nodes
$s,t\in V$, and we want to decide whether an $s$-$t$-path
$P\subseteq E$ and a circuit $C\subseteq E$ exists with $P\cap
C=\emptyset$. We may even assume that both $s$ and $t$ have degree
at least two. If $v\in V\!-\!\{s,t\}$ has degree at most two then we
can eliminate it. If there is a cut-vertex $v\in V$ then we can
decompose the problem into smaller subproblems by checking whether
$s$ and $t$ fall in the same component of $G\!-\!v$, or not. If they do
then $P$ should lie in that component; otherwise $P$ has to go
through $v$.
If there is a non-trivial two-edge $s$-$t$-cut (i.e., a set $X$ with
$\{s\}\subsetneq X\subsetneq V\!-\!t$, and $d_G(X)=2$), then we can again
reduce the problem in a similar way:
the circuit to be found cannot use both edges entering $X$ and we have
to solve two smaller problems obtained by contracting $X$ for the
first one, and contracting $V\!-\!X$ for the second one.
So we can assume that $|E|\ge n+\lceil n/2
\rceil-1$, and that $G$ is $2$-connected and $G$
has no non-trivial two-edge $s$-$t$-cuts.
Run a BFS
from $s$ and associate levels to vertices ($s$ gets $0$). If $t$ has
level at most $\lceil n/2 \rceil -1$ then we have a path of length at
most $\lceil n/2 \rceil -1$ from $s$ to $t$, after deleting its edges,
at least $n$ edges remain, so we are left with a circuit.
So we may assume that the level of $t$ is at least $\lceil n/2
\rceil$. As $G$ is $2$-connected, we must have at least two vertices
on each intermediate level. Consequently $n$ is even, $t$ is on level
$n/2$, and we have exactly two vertices on each intermediate level,
and each vertex $v\in V\!-\!\{s,t\}$ has degree $3$; otherwise, for a
minimum $s$-$t$ path $P$ we have that $G\!-\!P$ has at least $n$ edges,
i.e., it contains a circuit. We have no non-trivial two-edge $s$-$t$-cuts,
consequently there can only be two cases: either $G$ equals $K_4$ with
the edge $st$ deleted, or $G$ arises from $K_4$ by subdividing two opposite
edges (with the two subdivision nodes being $s$ and $t$). In
either case we have no solution.
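The case analysis above condenses into a few lines; the following Python sketch (illustrative only) assumes that the reductions described at the beginning of this paragraph have already been carried out, i.e., the graph is $2$-connected, every node outside $\{s,t\}$ has degree at least three, and there is no non-trivial two-edge $s$-$t$-cut; the graph is passed as an adjacency dict.
\begin{verbatim}
from collections import deque
from math import ceil

def bfs_levels(adj, s):
    level = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                level[w] = level[v] + 1
                queue.append(w)
    return level

def pst_and_circuit(adj, s, t):
    """Decision sketch for P_st /\ C after the reductions described in
    the text (2-connected, inner degrees >= 3, no non-trivial two-edge
    s-t-cut)."""
    n = len(adj)
    level = bfs_levels(adj, s)
    if level[t] <= ceil(n / 2) - 1:
        return True   # a short s-t path leaves >= n edges, hence a circuit
    rigid = (n % 2 == 0
             and all(len(adj[v]) == 3 for v in adj if v not in (s, t))
             and len(adj[s]) == 2 and len(adj[t]) == 2)
    # if the graph is not of this rigid shape, a minimum s-t path still
    # leaves at least n edges behind; the rigid shape forces K4 - st or
    # the subdivided K4, which admit no solution
    return not rigid
\end{verbatim}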
\medskip
\paragraph{Algorithm for $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.}
We give a simple polynomial time algorithm for deciding whether two
edge-disjoint circuits can be found in a given connected multigraph
$G=(V,E)$. We note that a polynomial (but less elegant) algorithm for
this problem was also given in \cite{bodl}.
If any vertex has degree at most two, we can eliminate it, so we
may assume that the minimum degree is at least $3$. If $G$ has at
least $16$ vertices, then it has a circuit of length at most $n/2$
(simply run a BFS from any node and observe that there must be a
non-tree edge between some nodes of depth at most $\log(n)$, giving us
a circuit of length at most $2\log(n)\le n/2$), and after deleting the
edges of this circuit, at least $n$ edges remain, so we are left with
another circuit. For smaller graphs we can check the problem in constant
time.
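The short circuit used in this argument can also be extracted explicitly from the BFS tree; a small Python sketch for simple graphs is given below (in a multigraph one would, in addition, report any pair of parallel edges as a circuit of length two).
\begin{verbatim}
from collections import deque

def short_circuit(adj, root):
    """Return a circuit (as a list of nodes) closed by the first non-tree
    edge met by a BFS started at `root`; with minimum degree >= 3 the
    counting argument above bounds its length by roughly 2*log2(n)."""
    parent, depth = {root: None}, {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in depth:
                parent[w], depth[w] = v, depth[v] + 1
                queue.append(w)
            elif parent[v] != w and parent[w] != v:
                # non-tree edge vw: walk both ends up to the common ancestor
                pv, pw, left, right = v, w, [v], [w]
                while depth[pv] > depth[pw]:
                    pv = parent[pv]; left.append(pv)
                while depth[pw] > depth[pv]:
                    pw = parent[pw]; right.append(pw)
                while pv != pw:
                    pv = parent[pv]; left.append(pv)
                    pw = parent[pw]; right.append(pw)
                return left + right[-2::-1]
    return None   # no circuit at all: the graph is a forest
\end{verbatim}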
\section{Matroidal generalizations}\label{sec:matroid}
In this section we will consider the matroidal generalizations for the
problems that were shown to be polynomially solvable in the graphic
matroid. In fact we will only need linear matroids, since it turns out
that the problems we consider are already NP-complete in them. We
will use the following result of Khachyan.
\begin{thm}[Khachyan \cite{khachyan}]\label{thm:khac}
Given a $D\times N$ matrix over the rationals, it is NP-complete to
decide whether there exist $D$ linearly dependent columns.
\end{thm}
First we consider the matroidal generalization of Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.
\begin{thm}\label{thm:CutdC}
It is NP-complete to decide whether an (explicitly given) linear matroid
contains a cut and a circuit that are disjoint.
\end{thm}
\begin{proof}
Observe that there is no disjoint cut and circuit in a matroid if and
only if every circuit contains a base, which is equivalent to the
matroid being uniform.
The question in Khachyan's Theorem \ref{thm:khac} thus amounts to deciding
the uniformity of the linear matroid determined by the columns of the matrix in
question, proving our theorem.
\end{proof}
Finally we consider the matroidal generalization of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$
and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$.
\begin{thm}\label{thm:CdC}
The problem of deciding whether an (explicitly given) linear matroid
contains two disjoint circuits is NP-complete.
\end{thm}
\begin{proof}
We will prove here that Khachyan's Theorem \ref{thm:khac} is
true even if $N=2D+1$, which implies our theorem, since there are two
disjoint circuits in the linear matroid represented by this $D\times
(2D+1)$ matrix if and only if there are $D$ linearly dependent columns
in it.
Khachyan's proof of Theorem \ref{thm:khac} was simplified by Vardy
\cite{vardy}; we will follow his line of proof. Consider the following
problem.
\begin{prob}\label{prob:sum}
Given different positive integers $a_1,a_2,\dots,a_n,b$ and a positive
integer $d$, decide whether there exist $d$ indices $1\le
i_1<i_2<\dots<i_d\le n$ such that $b=a_{i_1}+a_{i_2}+\dots+a_{i_d}$.
\end{prob}
\newcommand{{\sc Subset-Sum}}{{\sc Subset-Sum}}
Note that Problem \ref{prob:sum} is very similar to the
{\sc Subset-Sum}\ Problem (Problem SP13 in \cite{gj}), the only difference
being that in the {\sc Subset-Sum}\ problem we do not specify $d$, and the
numbers $a_1,a_2,\dots,a_n$ need not be different. On the other hand,
here we will strongly need that the numbers $a_1,a_2,\dots,a_n$ are
all different. Vardy has shown the following claim (we include a
proof for the sake of completeness).
\begin{cl}\label{cl:vardy}
There is solution to Problem \ref{prob:sum} if and only if there are
$d+1$ linearly dependent columns (over the rationals) in the
$(d+1)\times (n+1)$ matrix
\[
\begin{pmatrix}
1 & 1 & \cdots & 1 & 0 \\
a_{1} & a_{2} & \cdots & a_{n}& 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{1}^{d-2} & a_{2}^{d-2} & \cdots & a_{n}^{d-2} & 0 \\
a_{1}^{d-1} & a_{2}^{d-1} & \cdots & a_{n} ^{d-1} & 1 \\
a_{1}^{d} & a_{2}^{d} & \cdots & a_{n} ^{d} & b
\end{pmatrix}.\]
\end{cl}
\begin{proof}
We use the following facts about determinants. Given real numbers
$x_1,x_2,\dots,x_k$,
we have the following well-known relation for the Vandermonde
determinant:
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-1} & x_{2}^{k-1} & \cdots & x_{k} ^{k-1}
\end{pmatrix}=\prod_{i<j}(x_j-x_i).
\]
Therefore the Vandermonde determinant is nonzero if the numbers
$x_1,x_2,\dots,x_k$ are pairwise different. Furthermore, we have the following
relation for an alternant of the Vandermonde determinant (see
Chapter V in \cite{muir}, for example):
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-2} & x_{2}^{k-2} & \cdots & x_{k} ^{k-2} \\
x_{1}^{k} & x_{2}^{k} & \cdots & x_{k} ^{k}
\end{pmatrix}=(x_1+x_2+\dots+x_k)\prod_{i<j}(x_j-x_i).
\]
We include a proof of this last fact: given an arbitrary $k\times k$
matrix $X=((x_{ij}))$ and numbers $u_1,\dots,u_k$, observe (by checking the coefficients of the $u_i$s on each side) that
\begin{eqnarray*}
\det\begin{pmatrix}
u_1 x_{11} & u_2x_{12} & \cdots & u_kx_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}+
\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
u_1 x_{21} & u_2 x_{22} & \cdots & u_k x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}
+\dots +\\
\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
u_1 x_{k1} & u_2x_{k2} & \cdots & u_k x_{kk} \\
\end{pmatrix}
=
(u_1+u_2+\dots+u_k)\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}
\end{eqnarray*}
Now apply this to the Vandermonde matrix $X=\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-1} & x_{2}^{k-1} & \cdots & x_{k} ^{k-1}
\end{pmatrix}$ and numbers $u_i=x_i$ for every $i=1,2,\dots,k$: on the left-hand
side all but the last determinant vanish, because multiplying the $j$-th row of $X$
entrywise by $x_1,\dots,x_k$ makes it equal to the $(j\!+\!1)$-st row whenever $j<k$,
while for $j=k$ the resulting determinant is exactly the alternant above.
We will use these two facts. The first one implies that if $d+1$
columns of our matrix are dependent then they have to include the last
column. By the second fact, if $1\le
i_1<i_2<\dots<i_d\le n$ are arbitrary indices then
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 & 0 \\
a_{i_1} & a_{i_2} & \cdots & a_{i_d}& 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{i_1}^{d-2} & a_{i_2}^{d-2} & \cdots & a_{i_d}^{d-2} & 0 \\
a_{i_1}^{d-1} & a_{i_2}^{d-1} & \cdots & a_{i_d} ^{d-1} & 1 \\
a_{i_1}^{d} & a_{i_2}^{d} & \cdots & a_{i_d} ^{d} & b
\end{pmatrix}=(b-a_{i_1}-a_{i_2}-\dots-a_{i_d})\prod_{k<l}(a_{i_l}-a_{i_k}).\]
This implies the claim.
\end{proof}
Vardy also claimed that Problem \ref{prob:sum} is NP-complete: our
proof will be completed if we show that this is indeed the case even
if $n=2d+2$. Since we have not found a formal proof of this claim of
Vardy, we will give a full proof of the following claim. For a set $V$
let ${V \choose 3}=\{X\subseteq V: |X|=3\}$.
\begin{cl}\label{cl:2dpl2}
Problem \ref{prob:sum} is NP-complete even if $n=2d+2$.
\end{cl}
\begin{proof}
\newcommand{\threeDM}{{\sc Exact-Cover-by-3-Sets}} We will reduce the
well-known NP-complete problem \threeDM\ (Problem SP2 in \cite{gj}) to
this problem. Problem \threeDM\ is the following: given a 3-uniform
family $\ensuremath{\mathcal E}\subseteq {V \choose 3}$, decide whether
there exists a subfamily $\ensuremath{\mathcal E}'\subseteq \ensuremath{\mathcal E}$ such that every
element of $V$ is contained in exactly one member of $\ensuremath{\mathcal E}'$. We assume
that 3 divides $|V|$, and let $d=|V|/3$, so Problem \threeDM\ asks
whether there exist $d$ disjoint members in $\ensuremath{\mathcal E}$. First we show
that this problem remains NP-complete even if $|\ensuremath{\mathcal E}|=2d+2$. Indeed, if
$|\ensuremath{\mathcal E}|\ne 2d+2$ then let us introduce $3k$ new nodes $\{u_i,v_i,w_i:
i=1,2,\dots,k\}$ where
\begin{itemize}
\item $k$ is such that ${3k \choose 3}-2k\ge 2d+2-|\ensuremath{\mathcal E}|$ if $|\ensuremath{\mathcal E}| < 2d+2$, and
\item $k=|\ensuremath{\mathcal E}|-(2d+2)$, if $|\ensuremath{\mathcal E}| > 2d+2$.
\end{itemize}
Let $V^*=V\cup\{u_i,v_i,w_i: i=1,2,\dots,k\}$ and let $\ensuremath{\mathcal E}^*=\ensuremath{\mathcal E}\cup
\{\{u_i,v_i,w_i\}: i=1,2,\dots,k\}$ (note that $ |V^*|=3(d+k)$). If
$|\ensuremath{\mathcal E}| < 2d+2$ then include furthermore $ 2(d+k) + 2 - (|\ensuremath{\mathcal E}|+k)$ arbitrary new
sets of size 3 to $\ensuremath{\mathcal E}^*$ from $ {V^*-V \choose 3}$, but so that $\ensuremath{\mathcal E}^*$
does not contain a set twice (this can be done by the choice of
$k$). It is easy to see that $|\ensuremath{\mathcal E}^*|= 2|V^*|/3+2$, and $V$ can be
covered by disjoint members of $\ensuremath{\mathcal E}$ if and only if $V^*$ can be
covered by disjoint members of $\ensuremath{\mathcal E}^*$.
Finally we show that \threeDM\ is a special case of Problem
\ref{prob:sum} in disguise. Given an instance of \threeDM\ by a
3-uniform family $\ensuremath{\mathcal E}\subseteq {V \choose 3}$, consider the
characteristic vectors of these 3-sets as different positive
integers written in base 2 (that is, assume a fixed ordering of the
set $V$; then the characteristic vectors of the members of $\ensuremath{\mathcal E}$
are 0-1 vectors corresponding to different binary integers, each containing
exactly 3 ones in its representation). These will be the numbers
$a_1,a_2,\dots,a_{|\ensuremath{\mathcal E}|}$. Let $b=2^{|V|}-1$ be the number
corresponding to the all 1 characteristic vector, and let
$d=|V|/3$. Observe that there exist $d$ disjoint members in $\ensuremath{\mathcal E}$
if and only if there are indices $1\le i_1<i_2<\dots<i_d\le |\ensuremath{\mathcal E}|$
such that $b=a_{i_1}+a_{i_2}+\dots+a_{i_d}$.
(Here one needs a small claim about the number of ones in binary
representations: the sum of $d$ positive integers, each containing exactly 3
ones in its binary representation, has at most $3d$ ones, with equality if and
only if no carry occurs in the addition, i.e., if and only if the binary
supports of the summands are pairwise disjoint.) This
together with the previous observation proves our claim.
\end{proof}
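For concreteness, the encoding used in the last part of the proof takes only a few lines; the following Python sketch (illustrative only; $V$ is a list fixing the ordering of the ground set, the hyperedges are pairwise distinct $3$-element subsets of $V$, and $3$ divides $|V|$) produces the numbers $a_1,\dots,a_{|\ensuremath{\mathcal E}|}$, $b$ and $d$ of Problem \ref{prob:sum}.
\begin{verbatim}
def subset_sum_instance(V, hyperedges):
    """Encode Exact-Cover-by-3-Sets as an instance of the subset-sum-type
    problem above: every 3-set becomes the integer whose binary
    representation is its characteristic vector, b is the all-ones
    number, and d = |V|/3."""
    position = {v: i for i, v in enumerate(V)}
    a = [sum(1 << position[v] for v in he) for he in hyperedges]
    b = (1 << len(V)) - 1
    return a, b, len(V) // 3
\end{verbatim}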
By combining Claims \ref{cl:vardy} and \ref{cl:2dpl2} we obtain the
proof of Theorem \ref{thm:CdC} as follows. Consider an instance of
Problem \ref{prob:sum} with $n=2d+2$ and let $D=d+1$. Claim
\ref{cl:vardy} states that this instance has a solution if and only if
the $(d+1)\times (n+1)=D\times (2D+1)$ matrix defined in the claim has
$D$ linearly dependent columns, which is therefore NP-hard to decide by
Claim \ref{cl:2dpl2}.
\end{proof}
\begin{cor}\label{cor:CutdCut}
The problem of deciding whether an (explicitly given) linear matroid
contains two disjoint cuts is NP-complete.
\end{cor}
\begin{proof}
Since the dual matroid of the linear matroid is also linear, and we
can construct a representation of this dual matroid from the
representation of the original matroid, this problem is equivalent
to the problem of deciding whether a linear matroid contains two
disjoint circuits, which is NP-complete by Theorem \ref{thm:CdC}.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro}
We consider a one-dimensional semilinear hyperbolic system of the form
\begin{align}
\partial_t p(x,t) + \partial_x u(x,t) &= 0, \qquad x \in (0,1), \ t>0, \label{eq:sys1} \\
\partial_t u(x,t) + \partial_x p(x,t) + a(u(x,t)) &= 0, \qquad x \in (0,1), \ t>0, \label{eq:sys2}
\end{align}
which models, for instance, the damped vibration of a string or the propagation of pressure waves in a gas pipeline.
In this latter application, which we consider as our model problem in the sequel,
$p$ denotes the pressure, $u$ the velocity or mass flux, and
the nonlinear damping term $a(u)$ accounts for the friction at the pipe walls.
The two equations describe the conservation of mass and the balance of momentum
and they can be obtained under some simplifying assumptions from the one dimensional Euler equations with friction; see e.g. \cite{BrouwerGasserHerty11,Guinot08,LandauLifshitz6}.
Similar problems also arise as models for the vibrations of elastic multistructures \cite{LagneseLeugeringSchmidt}
or in the propagation of heat waves on microscopic scales \cite{JosephPreziosi89}.
The system \eqref{eq:sys1}--\eqref{eq:sys2} is complemented by boundary conditions
\begin{align}
u(0,t) = g_0(t), \quad u(1,t) = g_1(t), \qquad t>0, \label{eq:sys3}
\end{align}
and we assume the initial values to be known and given by
\begin{align}
p(x,0) = p_0(x), \quad u(x,0) = u_0(x), \qquad x \in (0,1). \label{eq:sys4}
\end{align}
Motivated by well-known friction laws in fluid dynamics for pipes \cite{LandauLifshitz6},
we will assume here that there exist positive constants $a_1,a_2$ such that
\begin{align} \label{eq:a1}
0 < a_1 \le a'(\xi) \le a_2 \qquad \text{for all } \xi \in \mathbb{R}.
\end{align}
In particular, friction forces are monotonically increasing with velocity.
This condition allows us to establish well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4}.
It is also reasonable to assume that $a(-\xi)=-a(\xi)$, i.e.,
the magnitude of the friction force does not depend on the flow direction,
and consequently we will additionally assume that $a(0)=0$.
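One admissible example, given here only for illustration and not tied to a particular physical friction model, is $a(u) = a_1 u + (a_2 - a_1) \arctan(u)$ with $0 < a_1 \le a_2$: this function is odd, vanishes at $u=0$, and satisfies $a_1 \le a'(\xi) = a_1 + (a_2-a_1)/(1+\xi^2) \le a_2$ for all $\xi \in \mathbb{R}$.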
\medskip
In this paper, we are interested in the inverse problem of determining an unknown
friction law $a(u)$ in \eqref{eq:sys1}--\eqref{eq:sys4} from additional observation
of the pressure drop
\begin{align}
h(t)=\triangle p(t):= p(0,t) - p(1,t) , \qquad t > 0 \label{eq:sys5}
\end{align}
along the pipe. Such data are readily available in applications, e.g., gas pipeline networks.
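To give an impression of the forward map $a \mapsto h$, the following Python sketch integrates \eqref{eq:sys1}--\eqref{eq:sys4} by a simple explicit staggered-grid scheme and records an approximation of the pressure drop \eqref{eq:sys5}. It is meant purely as an illustration and is not the discretization used in our numerical experiments; the damping law $a$ and the initial data are assumed to be vectorized callables.
\begin{verbatim}
import numpy as np

def pressure_drop(a, g0, g1, p_init, u_init, T, nx=200, cfl=0.9):
    """Explicit staggered-grid time stepping for p_t + u_x = 0 and
    u_t + p_x + a(u) = 0 on (0,1) with u prescribed at both ends.
    p lives at cell centres, u at cell interfaces; returns samples of
    an approximation of h(t) = p(0,t) - p(1,t)."""
    dx = 1.0 / nx
    dt = cfl * dx                        # wave speed is 1
    xc = (np.arange(nx) + 0.5) * dx      # cell centres
    xi = np.arange(nx + 1) * dx          # cell interfaces
    p = p_init(xc).astype(float)
    u = u_init(xi).astype(float)
    t, drops = 0.0, [(0.0, p[0] - p[-1])]
    while t < T:
        u[0], u[-1] = g0(t), g1(t)                   # boundary data
        p -= dt / dx * (u[1:] - u[:-1])              # mass balance
        u[1:-1] -= dt / dx * (p[1:] - p[:-1]) + dt * a(u[1:-1])  # momentum
        t += dt
        drops.append((t, p[0] - p[-1]))
    return np.array(drops)
\end{verbatim}
Comparing the pressure drops produced by different damping laws in such a toy solver already gives a feeling for the sensitivity of the data \eqref{eq:sys5} with respect to $a$.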
\medskip
Before proceeding, let us comment on previous work for related coefficient inverse problems.
By combination of the two equations \eqref{eq:sys1}--\eqref{eq:sys2}, one obtains
the second order form
\begin{align} \label{eq:second}
\partial_{tt} u - \partial_{xx} u + a'(u) \partial_t u = 0, \qquad x \in (0,1), \ t>0,
\end{align}
of a wave equation with nonlinear damping.
The corresponding linear problem with coefficient $a'(u)$ replaced by $c(x)$ has been considered in \cite{Baudouin13,Bukgheim01,ImanuvilovYamamoto01};
uniqueness and Lipschitz stability for the inverse coefficient problem
have been established in one and multiple space dimensions.
A one-dimensional wave equation with nonlinear source term $c(u)$ instead of $a'(u) \partial_t u$
has been investigated in \cite{CannonDuChateau83};
numerical procedures for the identification and some comments on the identifiability
have been provided there.
In \cite{Kaltenbacher07}, the identification of the parameter function $c(\partial_x u)$
in the quasilinear wave equation $\partial_{tt} u - \partial_x ( c(\partial_x u) \partial_x u) = 0$
has been addressed in the context of piezo-electricity;
uniqueness and stability have been established for this inverse problem.
Several results are available for related coefficient inverse problems for
nonlinear parabolic equations; see e.g.
\cite{CannonDuChateau73,DuChateau81,EggerEnglKlibanov05,EggerPietschmannSchlottbom15,Isakov93,Lorenzi86}.
Let us also refer to \cite{Isakov06,KlibanovTimonov} for an overview of available results and further references.
To the best of our knowledge, the uniqueness and stability for the nonlinear coefficient problem
\eqref{eq:sys1}--\eqref{eq:sys5} considered in this paper are still open.
Following arguments proposed in \cite{EggerPietschmannSchlottbom15} for the analysis of
a nonlinear inverse problem in heat conduction, we will derive \emph{approximate stability
results} for the inverse problem stated above, which can be obtained by comparison with
the linear inverse problem for the corresponding stationary problem.
This allows us to obtain quantitative estimates for the reconstruction errors in dependence of the experimental setup,
and provides a theoretical basis for the hypothesis that uniqueness holds,
if the boundary fluxes $g_i(t)$ are chosen appropriately.
For the stable numerical solution in the presence of measurement errors,
we consider a variational regularization defined by
\begin{align}
J(a;p,u) &= \int_0^T |\triangle p(t) - h^\delta(t)|^2 dt + \alpha \|a-a^*\|^2 \to \min \label{eq:min1}\\
& \text{subject to } \quad \eqref{eq:sys1}-\eqref{eq:sys4} \quad \text{and} \quad \eqref{eq:a1}. \label{eq:min2}
\end{align}
This allows us to appropriately address the ill-posedness of the inverse coefficient problem \eqref{eq:sys1}--\eqref{eq:sys5}.
Here $\alpha>0$ is the regularization parameter,
$a^*\in \mathbb{R}$ is an a-priori guess for the damping law, and $h^\delta$ denotes the measurements of the pressure drop
across the pipe for the time interval $[0,T]$. The precise form of regularization term will be specified below.
\medskip
As a first step, we establish the well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4}
and prove uniform a-priori estimates for the solution. Semigroup theory for semilinear evolution
problems will be used for that.
In addition, we also show the continuous dependence and differentiability of the states $(u,p)$ with respect to the parameter $a(\cdot)$.
We then study the optimal control problem \eqref{eq:min1}--\eqref{eq:min2}.
Elimination of $(p,u)$ via solution of \eqref{eq:sys1}--\eqref{eq:sys4}
leads to a reduced minimization problem corresponding to Tikhonov regularization for
the nonlinear inverse problem $F(a)=h^\delta$, where $F$ is the parameter-to-measurement mapping
defined implicitly via the differential equations.
Continuity, compactness, and differentiability of the forward operator $F$ are investigated,
providing a guideline for the appropriate functional-analytic setting for the inverse problem.
The existence and stability of minimizers for \eqref{eq:min1}--\eqref{eq:min2} then follows with standard arguments.
In addition, we derive quantitative estimates for the error between the reconstructed
and the true damping parameter using an \emph{approximate source condition},
which is reasonable for the problem under consideration.
Such conditions have been used successfully for the analysis of Tikhonov regularization
and iterative regularization methods in \cite{EggerSchlottbom11,HeinHofmann05}.
As a third step of our analysis, we discuss in detail the meaning and the plausibility of this approximate source condition.
We do this by showing that the nonlinear inverse problem is actually close to a linear inverse
problem, provided that the experimental setup is chosen appropriately.
This allows us to derive an approximate stability estimate for the inverse problem,
and to justify the validity of the approximate source condition.
These results suggest the hypothesis of uniqueness for the inverse problem under investigation,
and they allow us to make predictions about the results that can be expected in practice and that are actually observed in our numerical tests.
\medskip
The remainder of the manuscript is organized as follows:
In Section~\ref{sec:prelim} we fix our notation and briefly discuss the underlying linear wave equation without damping.
The well-posedness of the state system \eqref{eq:sys1}--\eqref{eq:sys4} is established in Section~\ref{sec:state}
via semigroup theory. For convenience of the reader, some auxiliary results are summarized in an appendix.
In Section~\ref{sec:forward}, we then investigate the basic properties of the parameter-to-measurement mapping $F$.
Section~\ref{sec:min} is devoted to the analysis of the regularization method \eqref{eq:min1}--\eqref{eq:min2}
and provides a quantitative estimate for the reconstruction error.
The required approximate source condition and the approximate stability of the inverse problem are discussed in Section~\ref{sec:hyp}
in detail.
Section~\ref{sec:num} presents the setup and the results of our numerical tests.
We close with a short summary of our results and a discussion of possible directions for future research.
\section{Preliminaries} \label{sec:prelim}
Throughout the manuscript, we use standard notation for Lebesgue and Sobolev spaces and for classical function spaces; see e.g. \cite{Evans98}. For the analysis of problem \eqref{eq:sys1}--\eqref{eq:sys4},
we will employ semigroup theory.
The evolution of this semilinear hyperbolic system is driven by the linear wave equation
\begin{align}
\partial_t p(x,t) + \partial_x u(x,t) &= 0, \quad x \in (0,1), \ t>0, \\
\partial_t u(x,t) + \partial_x p(x,t) &= 0, \quad x \in (0,1), \ t>0,
\end{align}
with homogeneous boundary values
\begin{align}
u(0,t)=u(1,t)=0, \quad t>0,
\end{align}
and initial conditions given by $p(\cdot,0)=p_0$ and $u(\cdot,0)=u_0$ on $(0,1)$.
This problem can be written in compact form as an abstract evolution equation
\begin{align} \label{eq:abstract}
y'(t) + A y(t) = 0, \ t>0, \qquad y(0)=y_0,
\end{align}
with state vector $y=(p,u)$, initial value $y_0=(p_0,u_0)$, and operator $A=\begin{pmatrix} 0 & \partial_x \\ \partial_x & 0\end{pmatrix}$. \\[-1ex]
The starting point for our analysis is the following
\begin{lemma}[Generator] \label{lem:generator}
Let $X=L^2(0,1) \times L^2(0,1)$ and $D(A)=H^1(0,1) \times H_0^1(0,1)$. \\
Then the operator $A : D(A) \subset X \to X$ generates a $C^0$-semigroup of contractions on $X$.
\end{lemma}
\begin{proof}
One easily verifies that $A$ is a densely defined and closed linear operator on $X$.
Moreover, $(A y, y)_X = 0$ for all $y \in D(A)$; therefore, $A$ is dissipative.
By direct calculations, one can see that for any $\bar f, \bar g \in L^2(0,1)$, the boundary value problem\begin{align*}
\bar p(x) + \partial_x \bar u(x) &= \bar f(x), \quad x \in (0,1),\\
\bar u(x) + \partial_x \bar p(x) &= \bar g(x), \quad x \in (0,1),
\end{align*}
with $\bar u(0)=\bar u(1)=0$ is uniquely solvable with solution $(\bar p,\bar u) \in H^1(0,1) \times H_0^1(0,1)$.
The assertion hence follows by the Lumer-Phillips theorem \cite[Ch~1, Thm~4.3]{Pazy83}.
\end{proof}
The analysis of the model problem \eqref{eq:sys1}--\eqref{eq:sys4} can now be done in the framework of semigroups.
For convenience, we collect some of the required results in the appendix.
\section{The state system} \label{sec:state}
Let us return to the semilinear wave equation under consideration.
For proving well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4},
and in order to establish some additional regularity of the solution, we will assume that
\begin{itemize}\setlength\itemsep{1ex}
\item[(A1)] $a \in W_{loc}^{3,\infty}(\mathbb{R})$ with $a(0)=0$, $a_0 \le a'(\cdot) \le a_1$, $|a''(\cdot)| \le a_2$, and $|a'''(\cdot)| \le a_3$
\end{itemize}
for some positive constants $a_0,a_1,a_2,a_3>0$.
Since the damping law comes from a modelling process involving several approximation steps,
these assumptions are not very restrictive in practice.
In addition, we require the initial and boundary data to satisfy
\begin{itemize}\setlength\itemsep{1ex}
\item[(A2)] $u_0=0$ and $p_0=c$ with $c \in \mathbb{R}$;
\item[(A3)] $g_0,g_1 \in C^4([0,T])$ for some $T>0$, $g_0(0)=g_1(0)=0$, and $g_0'(0)=g_1'(0)=0$.
\end{itemize}
The system thus describes the smooth departure from a system at rest.
As will be clear from our proofs,
the assumptions on the initial conditions and the regularity requirements for the parameter and the initial and boundary data
could be relaxed without much difficulty.
Existence of a unique solution can now be established as follows.
\begin{theorem}[Classical solution] \label{thm:classical} $ $\\
Let (A1)--(A3) hold.
Then there exists a unique classical solution
\begin{align*}
(p,u) \in C^1([0,T];L^2(0,1) \times L^2(0,1)) \cap C([0,T]; H^1(0,1) \times H^1(0,1))
\end{align*}
for the initial boundary value problem \eqref{eq:sys1}--\eqref{eq:sys4} and its norm can be bounded by
\begin{align*}
\|(p,u)\|_{C([0,T];H^1\times H^1)} + \|(p,u)\|_{C^1([0,T];L^2 \times L^2)}
\le C'
\end{align*}
with constant $C'$ only depending on the bounds for the coefficients and the data and the time horizon $T$.
Moreover, $\triangle p := p(0,\cdot)-p(1,\cdot) \in C^\gamma([0,T])$, for any $0 \le \gamma < 1/2$, and
\begin{align*}
\|\triangle p\|_{C^{\gamma}([0,T])} \le C'(\gamma).
\end{align*}
\end{theorem}
\begin{proof}
The proof follows via semigroup theory for semilinear problems \cite{Pazy83}.
For convenience of the reader and to keep track of the constants, we sketch the basic steps:
{\em Step 1:}
We define $\hat u(x,t) = (1-x) g_0(t) + x g_1(t)$ and set
$\hat p(x,t) = \int_0^x \hat p_x(s,t) \, ds$ with $\hat p_x(x,t) = (x-1) (a(g_0(t))+g_0'(t)) -x (a(g_1(t))+g_1'(t))$.
Then we decompose the solution into $(p,u)=(\hat p,\hat u) + (\tilde p,\tilde u)$
and note that $(\hat p,\hat u) \in C^1([0,T];H^1 \times H^1)$ by construction and assumption (A3).
The second part $(\tilde p, \tilde u)$ solves
\begin{align*}
&&&&\partial_t \tilde p + \partial_x \tilde u &= f_1, && \tilde p(\cdot,0) = \tilde p_0,&&&&\\
&&&&\partial_t \tilde u + \partial_x \tilde p &= f_2, && \tilde u(\cdot,0) = \tilde u_0,&&&&
\end{align*}
with $f_1(t)=-\partial_t \hat p(t) - \partial_x \hat u(t)$, $f_2(t,\tilde u(t))=-\partial_t \hat u(t)-\partial_x \hat p(t)- a(\hat u(t) + \tilde u(t))$
and initial values $\tilde p_0 = p_0 - \hat p(0)$, $\tilde u_0 = u_0 - \hat u(0)$.
In addition, we have $\tilde u(0,t)=\tilde u(1,t)=0$ for $t>0$.
This problem can be written as an abstract evolution equation
\begin{align*}
y'(t) + A y(t) = f(t,y(t)), \qquad y(0)=y_0,
\end{align*}
on $X=L^2 \times L^2$ with $y=(\tilde p,\tilde u)$, $f(t,y)=(f_1(t),f_2(t,y_2))$, and $D(A)=H^1 \times H_0^1$.
{\em Step 2:}
We now verify the conditions of Lemma~\ref{lem:classical2} stated in the appendix.
By assumptions (A2) and (A3) one can see that $y_0 = (\tilde p_0,\tilde u_0) \in H^1(0,1) \times H_0^1(0,1)$.
For every $y \in H^1(0,1) \times H_0^1(0,1)$, we further have $f(t,y) = (f_1(t),f_2(t,y_2)) \in H^1(0,1) \times H_0^1(0,1)$ by construction of $\hat u$ and $\hat p$. Moreover, $f$ is continuous w.r.t. time.
Denote by $|u|_{H^1}=\|\partial_x u\|_{L^2}$ the seminorm of $H^1$. Then
\begin{align*}
|f_2(t,v) - f_2(t,w)|_{H^1}
&= |a(\hat u(t) + v) - a(\hat u(t)+w)|_{H^1} \\
&\le \int_0^1 |a'(\hat u(t) + (1-s) v + s w) (v-w)|_{H^1} \, ds \\
&\le a_1 |v-w|_{H^1} + a_2 \sup_{0 \le s \le 1} |\hat u(t) + (1-s) v + s w|_{H^1} \, |v-w|_{H^1}.
\end{align*}
Here we used the embedding of $H^1(0,1)$ into $L^\infty(0,1)$ and the bounds for the coefficients.
This shows that $f$ is locally Lipschitz continuous with respect to $y$, uniform on $[0,T]$.
By Lemma~\ref{lem:classical2}, we thus obtain local existence and uniqueness of a classical solution.
{\em Step 3:}
To obtain the global existence of the classical solution,
note that
\begin{align*}
\|\tfrac{d}{dt} f(t,y(t))\|_X
&\le \|\tfrac{d}{dt} f_1(t)\|_{L^2} + \|\tfrac{d}{dt} f_2(t,\tilde u(t))\|_{L^2} \\
&\le C_1 + C_2 \big(1 + \|\partial_t \tilde u(t)\|_{L^2}\big),
\end{align*}
where the first term comes from estimating $f_1$ and the remaining terms from the estimate for $f_2$.
The constants $C_1,C_2$ here depend only on the bounds for the data.
Global existence of the classical solution and the uniform bound now follow from Lemma~\ref{lem:classical3}.
\end{proof}
Note that not all regularity assumptions for the data and for the parameter were required so far.
The conditions stated in (A1)--(A3) allow us to prove higher regularity of the solution,
which will be used for instance in the proof of Theorem~\ref{thm:lipschitz} later on.
\begin{theorem}[Regularity] \label{thm:regularity}
Under the assumptions of the previous theorem, we have
\begin{align*}
\|(p,u)\|_{C^1([0,T];H^1\times H^1) \cap C^2([0,T];L^2 \times L^2)} \le C''
\end{align*}
with $C''$ only depending on the bounds for the coefficient and data, and the time horizon.
\end{theorem}
\begin{proof}
To keep track of the regularity requirements, we again sketch the main steps:
{\em Step 1:}
Define $(r,w)=(\partial_t p,\partial_t u)$ and $(r,w) = (\hat r,\hat w) + (\tilde r,\tilde w)$ with $(\hat r,\hat w)=(\partial_t \hat p,\partial_t \hat u)$ and $(\tilde r,\tilde w)=(\partial_t \tilde p,\partial_t \tilde u)$ as in the previous proof.
The part $z=(\tilde r,\tilde w)$ can be seen to satisfy
\begin{align} \label{eq:z}
\partial_t z(t) + A z(t) = g(t,z(t)), \qquad z(0)=z_0,
\end{align}
with right hand side $g(t,z)=(-\partial_t \hat r(t) -\partial_x \hat w(t),-\partial_t \hat w(t)-\partial_x \hat r(t)-a'(u(t)) z_2)$
and initial value $z_0=(\partial_t p(0)-\partial_t \hat p(0),\partial_t u(0)-\partial_t \hat u(0)) = (-\partial_x u_0-\partial_t \hat p(0),-\partial_x p_0 - a(u_0) - \partial_t \hat u(0))$.
{\em Step 2:}
Using the assumptions (A1)--(A3) for the coefficient and the data, and the bounds for the solution
of Theorem~\ref{thm:classical}, and the definition of $\hat p$ and $\hat u$,
one can see that $z_0 \in Y=H^1 \times H_0^1$ and that
$g : [0,T] \times H^1(0,1) \times Y \to Y$ satisfies the conditions of Lemma~\ref{lem:classical2}.
Thus $z(t)$ is a local classical solution.
{\em Step 3:}
Similarly as in the previous proof, one can show that\begin{align*}
\|\tfrac{d}{dt} g(t,z(t))\|_X \le C_1 + C_2 \|z'(t)\|_X + C_3 \|A z(t)\|_X
\end{align*}
for all sufficiently smooth functions $z$.
The global existence and uniform bounds for the classical solution then follow again by Lemma~\ref{lem:classical3}.
\end{proof}
\section{The parameter-to-output mapping} \label{sec:forward}
Let $u_0,p_0,g_0,g_1$ be fixed and satisfy assumptions (A2)--(A3).
Then by Theorem~\ref{thm:classical}, we can associate to any damping parameter $a$
satisfying the conditions (A1) the corresponding solution $(p,u)$ of problem
\eqref{eq:sys1}--\eqref{eq:sys4}.
By the uniform bounds of Theorem~\ref{thm:classical} and the embedding of $H^1(0,1)$ in $C[0,1]$,
we know that
\begin{align} \label{eq:bounds}
\underline{u} \le u(x,t) \le \overline{u}, \qquad x \in [0,1], \ 0 \le t \le T,
\end{align}
for some constants $\underline{u}$, $\overline{u}$ independent of the choice of $a$.
Without loss of generality, we may thus restrict the parameter function $a$ to the interval $[\underline{u},\overline{u}]$.
We now define the parameter-to-measurement mapping, in the sequel also called \emph{forward operator}, by
\begin{align} \label{eq:forward}
F : D(F) \subset H^2(\underline{u},\overline{u}) \to L^2(0,T),
\qquad a \mapsto \triangle p
\end{align}
where $\triangle p=p(0,\cdot)-p(1,\cdot)$ is the pressure drop across the pipe and $(p,u)$ is the solution of \eqref{eq:sys1}--\eqref{eq:sys4} for parameter $a$. As domain for the operator $F$, we choose
\begin{align} \label{eq:domain}
D(F)=\{ a \in H^2(\underline{u},\overline{u}) : (A1) \text{ holds}\},
\end{align}
which is a closed and convex subset of $H^2(\underline{u},\overline{u})$.
By Theorem~\ref{thm:classical}, the parameter-to-measurement mapping is well-defined on $D(F)$.
In the following, we establish several further properties of this operator, which will be required for our analysis later on.
\begin{theorem}[Lipschitz continuity] \label{thm:lipschitz}
The operator $F$ is Lipschitz continuous, i.e.,
\begin{align*}
\|F(a) - F(\tilde a)\|_{L^2(0,T)} \le C_L \|a-\tilde a\|_{H^2(\underline{u},\overline{u})}, \qquad \forall a, \tilde a \in D(F)
\end{align*}
with some uniform Lipschitz constant $C_L$ independent of the choice of $a$ and $\tilde a$.
\end{theorem}
\begin{proof}
Let $a,\tilde a \in D(F)$ and let $(p,u)$, $(\tilde p,\tilde u)$ denote the corresponding classical solutions of problem \eqref{eq:sys1}--\eqref{eq:sys4}.
Then the function $(r,w)$ defined by $r=\tilde p-p$, $w=\tilde u-u$ satisfies
\begin{align*}
\partial_t r + \partial_x w &= 0, \\
\partial_t w + \partial_x r &= a(u) - \tilde a (\tilde u) =:f_2,
\end{align*}
with initial and boundary conditions $r(x,0)=w(x,0)=w(0,t)=w(1,t)=0$.
By Theorem~\ref{thm:classical}, we know the existence of a unique classical solution $(r,w)$.
Moreover,
\begin{align*}
\|\tfrac{d}{dt} f_2\|_{L^2}
&\le \|(a'(u)-a'(\tilde u)) \partial_t u\|_{L^2} + \|(a'(\tilde u) - \tilde a'(\tilde u)) \partial_t u\|_{L^2}
+ \|\tilde a'(\tilde u) (\partial_t u - \partial_t \tilde u)\|_{L^2} \\
&\le a_2 \|w\|_{L^2} \|\partial_t u\|_{L^\infty} + \|a'-\tilde a'\|_{L^\infty} \|\partial_t u\|_{L^2} + a_1 \|\partial_t w\|_{L^2}.
\end{align*}
Using the uniform bounds for $u$ provided by Theorem~\ref{thm:classical} and \ref{thm:regularity} and similar estimates as in the proof of Lemma~\ref{lem:classical3}, one obtains
$\|(r,w)\|_{C([0,T];H^1 \times H_0^1)} \le C \|a' - \tilde a'\|_{L^\infty}$ with $C$ only depending on the bounds for the coefficients and the data and on the time horizon. The assertion then follows by noting that $F(\tilde a) -F(a) = r(0,\cdot)-r(1,\cdot)$ and using the continuous embeddings of $H^1(0,1)$ into $L^\infty(0,1)$ and of $H^2(\underline{u},\overline{u})$ into $W^{1,\infty}(\underline{u},\overline{u})$.
\end{proof}
By careful inspection of the proof of Theorem~\ref{thm:lipschitz}, we also obtain
\begin{theorem}[Compactness] \label{thm:compact}
The operator $F$ maps sequences in $D(F)$ weakly converging in $H^2(\underline{u},\overline{u})$ to strongly convergent sequences in $L^2(0,T)$. In particular, $F$ is compact.
\end{theorem}
\begin{proof}
The assertion follows from the estimates of the previous proof by noting that the embedding of $H^2(\underline{u},\overline{u})$ into $W^{1,\infty}(\underline{u},\overline{u})$ is compact. The forward operator is thus a composition of a continuous and a compact operator.
\end{proof}
As a next step, we consider the differentiability of the forward operator.
\begin{theorem}[Differentiability] \label{thm:differentiability} $ $\\
The operator $F$ is Fr\'echet differentiable with Lipschitz continuous derivative, i.e.,
\begin{align*}
\|F'(a) - F'(\tilde a)\|_{H^2(\underline{u},\overline{u}) \to L^2(0,T)} \le L \|a-\tilde a\|_{H^2(\underline{u},\overline{u})}
\qquad \text{for all } a,\tilde a \in D(F).
\end{align*}
\end{theorem}
\begin{proof}
Denote by $(p(a),u(a))$ the solution of \eqref{eq:sys1}--\eqref{eq:sys4}
for parameter $a$ and let $(r,w)$ be the directional derivative of $(p(a),u(a))$ with respect to $a$ in direction $b$, defined by
\begin{align} \label{eq:directional}
r = \lim_{s \to 0} \frac{1}{s} (p(a+sb)-p(a))
\quad \text{and} \quad
w = \lim_{s \to 0} \frac{1}{s} (u(a+sb)-u(a)).
\end{align}
Then $(r,w)$ is characterized by the {\em sensitivity system}
\begin{align}
\partial_t r + \partial_x w &= 0, \label{eq:sen1} \\
\partial_t w + \partial_x r &= -a'(u(a)) w - b(u(a))=:f_2\label{eq:sen2}
\end{align}
with homogeneous initial and boundary values
\begin{align}
r(x,0) = w(x,0) = w(0,t) = w(1,t) &= 0. \label{eq:sen34}
\end{align}
The right hand side $f_2(t,w)=-a'(u(a;t)) w - b(u(a;t))$ can be shown to be continuously differentiable
with respect to time, by using the previous results and (A1)--(A3).
Hence by Lemma~\ref{lem:classical} there exists a unique classical solution $(r,w)$ to \eqref{eq:sen1}--\eqref{eq:sen34}.
Furthermore
\begin{align*}
\|\tfrac{d}{dt} f_2\|_{L^2}
&\le \|a''(u)\|_{L^\infty} \|\partial_t u\|_{L^\infty} \|w\|_{L^2} + \|a'(u)\|_{L^\infty} \|\partial_t w\|_{L^2} + \|b'(u)\|_{L^\infty} \|\partial_t u\|_{L^2}.
\end{align*}
By Lemma~\ref{lem:classical3} we thus obtain uniform bounds for $(w,r)$.
The directional differentiability of $(p(a),u(a))$ follows by verifying \eqref{eq:directional},
which is left to the reader.
The function $(r,w)$ depends linearly and continuously on $b$ and continuously on $a$ which yields the continuous differentiability of $(p(a),u(a))$ with respect to the parameter $a$.
The differentiability of the forward operator $F$ then follows by noting that $F'(a) b = r(0,\cdot)-r(1,\cdot)$.
For the Lipschitz estimate, we repeat the argument of Theorem~\ref{thm:lipschitz}. An additional derivative
of the parameter $a$ is required for this last step.
\end{proof}
\section{The regularized inverse problem} \label{sec:min}
The results of the previous section allow us to rewrite the constrained minimization problem
\eqref{eq:min1}--\eqref{eq:min2} in reduced form as \begin{align} \label{eq:tikhonov}
J_\alpha^\delta(a) := \|F(a) - h^\delta\|_{L^2(0,T)}^2 + \alpha \|a-a^*\|_{H^2(\underline{u},\overline{u})}^2 \to \min_{a \in D(F)},
\end{align}
which amounts to Tikhonov regularization for the nonlinear inverse problem $F(a)=h^\delta$.
As usual, we replaced the exact data $h$ by perturbed data $h^\delta$ to account for measurement errors.
Existence of a minimizer can now be established with standard arguments \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}.
\begin{theorem}[Existence of minimizers]
Let (A2)--(A3) hold.
Then for any $\alpha>0$ and any choice of data $h^\delta \in L^2(0,T)$,
the problem \eqref{eq:tikhonov} has a minimizer $a_\alpha^\delta \in D(F)$.
\end{theorem}
\begin{proof}
The set $D(F)$ is closed, convex, and bounded. In addition, we have shown that $F$ is weakly continuous,
and hence the functional $J_\alpha^\delta$ is weakly lower semi-continuous.
Existence of a solution then follows as in \cite[Thm.~10.1]{EnglHankeNeubauer96}.
\end{proof}
\begin{remark}
Weak continuity and thus existence of a minimizer can be shown without the bounds for the second and third derivative of the parameter in assumption (A1).
\end{remark}
Let us assume that there exists a true parameter $a^\dag \in D(F)$
and denote by $h=F(a^\dag)$ the corresponding exact data.
The perturbed data $h^\delta$ are required to satisfy
\begin{align} \label{eq:noise}
\|h^\delta - h\|_{L^2(0,T)} \le \delta
\end{align}
with $\delta$ being the noise level.
These are the usual assumptions for the inverse problem.
In order to simplify the following statements about convergence, we also assume
for the moment that the solution of the inverse problem is unique, i.e., that
\begin{align} \label{eq:unique}
F(a) \ne F(a^\dag) \quad \text{for all } a \in D(F) \setminus \{a^\dag\}.
\end{align}
This assumption is only made for convenience here,
but we also give some justification for its validity in the following section.
Under this uniqueness assumption, we obtain the following result about the convergence of the regularized solutions; see \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}.
\begin{theorem}[Convergence]
Let \eqref{eq:unique} hold and $h^\delta$ be a sequence of data satisfying \eqref{eq:noise} for $\delta \to 0$.
Further, let $a_\alpha^\delta$ be corresponding minimizers of \eqref{eq:tikhonov} with $\alpha=\alpha(\delta)$ chosen such that $\alpha \to 0$ and $\delta^2/\alpha \to 0$.
Then $\|a_\alpha^\delta - a^\dag\|_{H^2(\underline{u},\overline{u})} \to 0$ as $\delta \to 0$.
\end{theorem}
\begin{remark}
Without assumption \eqref{eq:unique} about uniqueness, convergence holds for subsequences and
towards an $a^*$-minimum norm solution $a^\dag$; see \cite[Sec.~10]{EnglHankeNeubauer96} for details.
\end{remark}
To obtain quantitative estimates for the convergence, some additional conditions on the nonlinearity of the operator $F$ and on the solution $a^\dag$ are required.
Let us assume that
\begin{align} \label{eq:source}
a^\dag - a^* = F'(a^\dag)^* w + e
\end{align}
holds for some $w \in L^2(0,T)$ and $e \in H^2(\underline{u},\overline{u})$.
Note that one can always choose $w=0$ and $e=a^\dag-a^*$, so this condition is no
restriction of generality. However, good bounds for $\|w\|$ and $\|e\|$ are required
in order to really take advantage of this splitting later on.
Assumption \eqref{eq:source} is called an \emph{approximate source condition}, and it has been
investigated in the convergence analysis of regularization methods, for instance in \cite{EggerSchlottbom11,HeinHofmann05}.
By a slight modification of the proof of \cite[Thm~10.4]{EnglHankeNeubauer96}, one can obtain
\begin{theorem}[Convergence rates]
Let \eqref{eq:source} hold and let $L \|w\|_{L^2(0,T)} < 1$. Then
\begin{align}
\|a^\dag - a_\alpha^\delta\|_{H^2(\underline{u},\overline{u})}^2
\le C \big( \delta^2/\alpha + \alpha \|w\|_{L^2(0,T)}^2 + \delta \|w\|_{L^2(0,T)} + \|e\|_{H^2(\underline{u},\overline{u})}^2\big).
\end{align}
The constant $C$ in the estimate only depends on the size of $L\|w\|_{L^2(0,T)}$.
\end{theorem}
\begin{proof}
Proceeding as in \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}, one can see that
\begin{align*}
\|F(a_\alpha^\delta) - h^\delta\|^2 + \alpha \|a_\alpha^\delta - a^\dag\|^2
\le \delta^2 + 2\alpha (a^\dag-a_\alpha^\delta, a^\dag - a^* ).
\end{align*}
Using the approximate source condition \eqref{eq:source}, the last term can be estimated by
\begin{align*}
(a^\dag - a_\alpha^\delta, a^\dag - a^* )
&= (F'(a^\dag) (a^\dag -a_\alpha^\delta) , w) + (a^\dag-a_\alpha^\delta,e) \\
&\le \|F'(a^\dag) (a^\dag-a_\alpha^\delta)\| \|w\| + \|a^\dag-a_\alpha^\delta\|\|e\|.
\end{align*}
By elementary manipulations and the Lipschitz continuity of the derivative, one obtains
\begin{align*}
\|F'(a^\dag) (a^\dag - a_\alpha^\delta)\|
&\le \|F(a_\alpha^\delta)-h^\delta\| + \delta + \tfrac{L}{2} \|a_\alpha^\delta - a^\dag\|^2.
\end{align*}
Using this in the previous estimates and applying Young's inequality leads to
\begin{align*}
&\|F(a_\alpha^\delta) - h^\delta\|^2 + \alpha \|a_\alpha^\delta - a^\dag\|^2 \\
&\le \delta^2 + 2 \alpha^2 \|w\|^2 + 2 \alpha \delta \|w\| + C' \alpha \|e\|^2
+ \frac{1}{2} \|F(a_\alpha^\delta)-h^\delta\|^2 + \alpha \|a_\alpha^\delta-a^\dag\|^2 (L \|w\| + \tfrac{1}{C'}).
\end{align*}
If $L\|w\|<1$, we can choose $C'$ sufficiently large such that $L \|w\| + \tfrac{1}{C'} < 1$ and the last two terms can be absorbed in the left hand side, which yields the assertion.
\end{proof}
\begin{remark}
The bound of the previous theorem yields a quantitative estimate for the error.
If the source condition \eqref{eq:source} holds with $e=0$ and $L \|w\| < 1$,
then for $\alpha \approx \delta$ one obtains $\|a_\alpha^\delta - a^\dag\| = O(\delta^{1/2})$,
which is the usual convergence rate result \cite[Thm.~10.4]{EnglHankeNeubauer96}.
The theorem however also yields estimates and a guideline for the choice of the regularization parameter in the general case.
We refer to \cite{HeinHofmann05} for an extensive discussion of the approximate source condition \eqref{eq:source} and its relation to more standard conditions.
\end{remark}
\begin{remark}
If the deviation from the classical source condition is small, i.e., if \eqref{eq:source} holds with
$\|e\| \approx \delta^{1/2}$ and $L \|w\| < 1$,
then for $\alpha \approx \delta$ one still obtains the usual estimate $\|a_\alpha^\delta - a^\dag\| = O(\delta^{1/2})$.
As we will illustrate in the next section, the assumption that $\|e\|$ is small is realistic in practice,
if the experimental setup is chosen appropriately.
The assumption that $\|e\|$ is sufficiently small in comparison to $\alpha$ also allows one to show
that the Tikhonov functional is locally convex around minimizers and to prove convergence of iterative schemes;
see \cite{EggerSchlottbom15,ItoJin15} for some recent results in this direction.
\end{remark}
Numerical methods for minimizing the Tikhonov functional usually require the application of the adjoint derivative operator.
For later reference, let us therefore briefly give a concrete representation of the adjoint that can be used for the implementation.
\begin{lemma}
Let $\psi \in H^1(0,T)$ with $\psi(T)=0$ and let $(q,v)$ denote the solution of
\begin{align}
\partial_t q + \partial_x v &= 0 \qquad \qquad x \in (0,1), \ t<T, \label{eq:adj1}\\
\partial_t v + \partial_x q &= a'(u) v, \quad \ x \in (0,1), \ t<T, \label{eq:adj2}
\end{align}
with terminal conditions $v(x,T)=q(x,T)=0$ and boundary conditions
\begin{align}
v(0,t)=v(1,t)=\psi(t), \quad t<T. \label{eq:adj3}
\end{align}
Then the action of the adjoint operator $\phi=F'(a)^* \psi$ is given by
\begin{align} \label{eq:adj}
(\phi,b)_{H^2(\underline{u},\overline{u})} = \int_0^T (b(u), v)_{L^2(0,1)} dt , \qquad \forall b \in H^2(\underline{u},\overline{u}).
\end{align}
\end{lemma}
\begin{proof}
By definition of the adjoint operator, we have
\begin{align*}
(b,F'(a)^* \psi)_{H^2(\underline{u},\overline{u})} = (F'(a) b, \psi)_{L^2(0,T)}.
\end{align*}
Using the characterization of the derivative via the solution $(r,w)$ of the sensitivity equation \eqref{eq:sen1}--\eqref{eq:sen2} and the definition of the adjoint state $(q,v)$ via \eqref{eq:adj1}--\eqref{eq:adj2}, we obtain
\begin{align*}
&(F'(a) b, \psi)_{L^2(0,T)}
= \int\nolimits_0^T r(0,t) v(0,t) - r(1,t) v(1,t) dt \\
&= \int\nolimits_0^T -(\partial_x r,v) - (r, \partial_x v) dt
=\int\nolimits_0^T (\partial_t w+a'(u) w + b(u), v) + (r,\partial_t q) dt \\
&= \int\nolimits_0^T -(\partial_t v - a'(u) v,w) + (b(u),v) + (\partial_x w,q) dt
=\int\nolimits_0^T (b(u), v) dt.
\end{align*}
For the individual steps we only used integration-by-parts and made use of the boundary and initial conditions.
This already yields the assertion.
\end{proof}
\begin{remark}
Existence of a unique solution $(q,v)$ of the adjoint system \eqref{eq:adj1}--\eqref{eq:adj2}
with the homogeneous terminal condition $v(x,T)=q(x,T)=0$ and boundary condition $v(0,t)=v(1,t)=\psi(t)$ follows
with the same arguments as used in Theorem~\ref{thm:classical}.
The representation of the adjoint also holds formally for $\psi \in L^2(0,T)$, which can be proved by a limiting process.
The adjoint problem then has to be understood in a generalized sense.
\end{remark}
\section{Remarks about uniqueness and the approximate source condition} \label{sec:hyp}
We now collect some comments about the uniqueness hypothesis \eqref{eq:unique} and the approximate source condition \eqref{eq:source}. Our considerations are based on the fact that the nonlinear inverse problem is actually close to a linear inverse problem provided that the experimental setup is chosen appropriately.
We will only sketch the main arguments here, with the aim of illustrating the plausibility of these assumptions and of explaining
what results can be expected in the numerical experiments presented later on.
\subsection{Reconstruction for a stationary experiment}
Let the boundary data \eqref{eq:sys3} be chosen such that
$g_0(t)=g_1(t)=\bar g \in \mathbb{R}$ for $t \ge t_0$.
By the energy estimates of \cite{GattiPata06}, which are derived for an equivalent problem in second order form \eqref{eq:second} there, one can show that the solution
$(p(t),u(t))$ of the system \eqref{eq:sys1}--\eqref{eq:sys4} converges exponentially fast to a steady state
$(\bar p,\bar u)$, which is the unique solution of
\begin{align}
\partial_x \bar u &= 0, \quad x \in (0,1), \label{eq:stat1}\\
\partial_x \bar p + a(\bar u) &= 0, \quad x \in (0,1), \label{eq:stat2}
\end{align}
with boundary condition $\bar u(0)=\bar u(1)=\bar g$.
From equation \eqref{eq:stat1}, we deduce that the steady state $\bar u$ is constant,
and upon integration of \eqref{eq:stat2}, we obtain
\begin{align} \label{eq:statsol}
a(\bar g) = a(\bar u) = \int_0^1 a(\bar u) dx = -\int_0^1 \partial_x \bar p(x) dx = \bar p(0)-\bar p(1) = \triangle \bar p.
\end{align}
The value $a(\bar g)$ can thus be determined by a stationary experiment.
As a consequence, the friction law $a(\cdot)$ could in principle be determined from an infinite number of
stationary experiments.
We will next investigate the inverse problem for these \emph{stationary experiments} in detail.
In a second step, we then use these results for the analysis of the inverse problem for the instationary experiments
that are our main focus.
\subsection{A linear inverse problem for a series of stationary experiments}
Let us fix a smooth and monotonic function $g : [0,T] \to [\underline{u},\overline{u}]$ and denote by $\triangle \bar p(t)=a(g(t))$ the pressure difference obtained from the stationary system \eqref{eq:stat1}--\eqref{eq:stat2} with boundary flux $\bar g=g(t)$.
The forward operator for a sequence of stationary experiments is then given by
\begin{align} \label{eq:statop}
K : H^2(\underline{u},\overline{u}) \to L^2(0,T), \qquad a \mapsto a(g(\cdot)),
\end{align}
and the corresponding inverse problem with exact data reads
\begin{align} \label{eq:statinv}
(K a)(t) = a(g(t)) = \triangle \bar p(t), \qquad t \in [0,T].
\end{align}
This problem is linear and its solution is given by the simple formula \eqref{eq:statsol}
with $\bar g$ and $\triangle \bar p$ replaced by $g(t)$ and $\triangle \bar p(t)$ accordingly.
From this representation, it follows that
\begin{align*}
\|a - \tilde a\|^2_{L^2(\underline{u},\overline{u})}
&= \int_0^T |a(g(t)) - \tilde a(g(t))|^2 |g'(t)| dt
\le C \|K a - K \tilde a\|_{L^2(0,T)}^2,
\end{align*}
where we assumed that $|g'(t)| \le C$ for all $t$.
Using the uniform bounds in assumption (A1),
embedding, and interpolation, one can further deduce that
\begin{align} \label{eq:stathoelder}
\|a - \tilde a\|_{H^2(\underline{u},\overline{u})} \le C_\gamma \|K a-K\tilde a\|_{L^2(0,T)}^\gamma.
\end{align}
This shows the Hölder stability of the inverse problem \eqref{eq:statinv} for stationary experiments.
As a next step, we will now extend these results to the instationary case by a perturbation argument
as proposed in \cite{EggerPietschmannSchlottbom15} for a related inverse heat conduction problem.
\subsection{Approximate stability for the instationary inverse problem}
If the variation of the boundary data $g(t)$ with respect to time is sufficiently small,
then from the exponential stability estimates of \cite{GattiPata06}, one may deduce that
\begin{align} \label{eq:eps}
\|p(t)-\bar p(t)\|_{H^1} + \|u(t)-\bar u(t)\|_{H^1} \le \varepsilon.
\end{align}
Hence the solution $(p(t),u(t))$ is always close to the stationary state $(\bar p(t),\bar u(t))$
with the corresponding boundary data $\bar u(0,t)=\bar u(1,t)=g(t)$.
Using $\triangle p(t) = p(0,t) - p(1,t) = -\int_0^1 \partial_x p(y,t) dy$ and the Cauchy-Schwarz inequality leads to
\begin{align} \label{eq:est}
|\triangle p(t) - a(g(t))| = |\triangle p(t) - \triangle \bar p(t)| \le \|p(t)-\bar p(t)\|_{H^1(0,1)} \le \varepsilon.
\end{align}
From the definition of the nonlinear and the linear forward operators, we deduce that
\begin{align} \label{eq:estOps}
F(a) = K a + O(\varepsilon).
\end{align}
\begin{remark}
As indicated above, the error $\varepsilon$ can be made arbitrarily small by a proper design of the experiment, i.e., by
slow variation of the boundary data $g(t)$.
The term $O(\varepsilon)$ can therefore be considered as an additional measurement error,
and thus the parameter $a$ can be determined approximately with the formula \eqref{eq:statsol}
for the stationary experiments.
As a consequence of the stability of the linear inverse problem, we further obtain
\begin{align} \label{eq:hoelder}
\|a-\tilde a\|_{H^2(\underline{u},\overline{u})} \le C'_\gamma \|F(a)-F(\tilde a)\|_{L^2(0,T)}^\gamma + C''_\gamma \varepsilon^\gamma.
\end{align}
In summary, we may thus expect that the identification from the nonlinear experiments is stable and unique,
provided that the experimental setup is chosen appropriately.
\end{remark}
\subsection{The approximate source condition}
With the aid of the stability results in \cite{GattiPata06} and similar reasoning as above,
one can show that the linearized operator satisfies
\begin{align*}
F'(a) h = K h + O(\varepsilon \|h\|_{H^2(\underline{u},\overline{u})}).
\end{align*}
A similar expansion is then also valid for the adjoint operator, namely
\begin{align*}
F'(a)^* w = K^*w + O(\varepsilon \|w\|_{L^2(0,T)}).
\end{align*}
This follows since $L=F'(a)-K$ is linear and bounded by a multiple of $\varepsilon$,
and so is the adjoint $L^*=F'(a)^*-K^*$.
In order to verify the approximate source condition \eqref{eq:source}, it thus suffices to
consider the condition $z = K^* w$ with $z=a^\dag-a^*$ for the linear problem.
From the explicit representation \eqref{eq:statop} of the operator $K$ this can be translated directly to a smoothness condition on $z$ in terms of weighted Sobolev spaces and some boundary conditions; we refer to \cite{EggerPietschmannSchlottbom14} for a detailed derivation in a similar context.
\begin{remark}
The observations made in this section can be summarized as follows:
(i) If the true parameter $a$ is sufficiently smooth, and if the boundary data are varied sufficiently slowly ($\varepsilon$ small),
such that the instationary solution at time $t$ is close to the steady state corresponding to the boundary data $g(t)$,
then the parameter can be identified stably with the simple formula for the linear inverse problem.
The same stable reconstructions will also be obtained with Tikhonov regularization \eqref{eq:tikhonov}.
(ii) For increasing $\varepsilon$, the approximation \eqref{eq:estOps} of the nonlinear problem by the linear problem deteriorates.
In this case, the reconstruction by the simple formula \eqref{eq:statsol} will get worse while the solutions obtained
by Tikhonov regularization for the instationary problem can be expected to still yield good and stable reconstructions.
\end{remark}
\begin{remark}
Our reasoning here was based on the \emph{approximate stability estimate} \eqref{eq:hoelder} that is inherited from the
stationary problem by a perturbation argument.
A related analysis of Tikhonov regularization under \emph{exact} conditional stability assumptions
can be found in \cite{ChengYamamoto00,HofmannYamamoto10} together with some applications.
\end{remark}
\section{Numerical tests} \label{sec:num}
For illustration of our theoretical considerations discussed in the previous section,
let us present some numerical results which provide additional evidence for the
uniqueness and stability of the inverse problem.
\subsection{Discretization of the state equations}
For the space discretization of the state system \eqref{eq:sys1}--\eqref{eq:sys4},
we utilize a mixed finite element method based on a weak formulation of the problem.
The pressure $p$ and the velocity $u$ are approximated with continuous piecewise linear
and discontinuous piecewise constant finite elements, respectively. For the time discretization, we employ a one-step scheme in which the differential terms are treated implicitly and the nonlinear damping term is integrated explicitly.
A single time step of the resulting numerical scheme then has the form
\begin{align*}
\tfrac{1}{\tau} (p^{n+1}_h,q_h) - (u_h^{n+1},\partial_x q_h) &= \tfrac{1}{\tau} (p_h^n,q_h) + g_0^{n+1} q_h(0) - g_1^{n+1} q_h(1), \\
\tfrac{1}{\tau} (u_h^{n+1},v_h) + (\partial_x p_h^{n+1},v_h) &= \tfrac{1}{\tau} (u_h^n,v_h) - (a(u_h^n),v_h),
\end{align*}
for all test functions $q_h \in P_1(T_h) \cap C[0,1]$ and $v_h \in P_0(T_h)$.
Here $T_h$ is the mesh of the interval $(0,1)$, $P_k(T_h)$ denotes the space of piecewise polynomials of degree at most $k$ on $T_h$, $\tau>0$ is the time-step, and $g_i^n=g_i(t^n)$ are the boundary fluxes at time $t^{n}=n \tau$.
The functions $(p_h^n,u_h^n)$ serve as approximations for the solution $(p(t^n),u(t^n))$
at the discrete time steps.
Similar schemes are used to approximate the sensitivity system \eqref{eq:sen1}--\eqref{eq:sen34} and the adjoint problem \eqref{eq:adj}--\eqref{eq:adj3} in a consistent manner.
The spatial and temporal mesh sizes were chosen small enough that
approximation errors due to the discretization can be neglected;
this was verified by repeating the tests with different discretization parameters.
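For concreteness, the following is a minimal NumPy sketch of one step of the scheme above. It is our own illustration rather than the code used for the experiments; the name \texttt{time\_step}, the dense linear algebra, and the vectorized callable \texttt{a} are simplifying assumptions.
\begin{verbatim}
import numpy as np

def time_step(p, u, tau, a, g0, g1):
    # one implicit/explicit step of the mixed P1/P0 scheme on a uniform mesh;
    # p holds the N+1 nodal values, u the N cell values
    N = u.size
    h = 1.0 / N
    # P1 mass matrix (tridiagonal) and P0 mass matrix (diagonal)
    Mp = np.diag(np.full(N + 1, 2 * h / 3.0)) \
       + np.diag(np.full(N, h / 6.0), 1) + np.diag(np.full(N, h / 6.0), -1)
    Mp[0, 0] = Mp[N, N] = h / 3.0
    Mu = h * np.eye(N)
    # B[j, :] realizes (dx p, chi_j) = p[j+1] - p[j]
    B = np.zeros((N, N + 1))
    B[np.arange(N), np.arange(N)] = -1.0
    B[np.arange(N), np.arange(N) + 1] = 1.0
    # coupled linear system for (p^{n+1}, u^{n+1})
    A = np.block([[Mp / tau, -B.T], [B, Mu / tau]])
    rhs_p = Mp @ p / tau
    rhs_p[0] += g0                       # + g_0^{n+1} q_h(0)
    rhs_p[-1] -= g1                      # - g_1^{n+1} q_h(1)
    rhs_u = Mu @ u / tau - Mu @ a(u)     # damping term treated explicitly
    sol = np.linalg.solve(A, np.concatenate([rhs_p, rhs_u]))
    return sol[:N + 1], sol[N + 1:]
\end{verbatim}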
\subsection{Approximation of the parameter}
The parameter function $a(\cdot)$ was approximated by cubic interpolating splines over a uniform grid of the interval $[\underline{u},\overline{u}]$.
The splines were parametrized by the interpolation conditions $s(u_i)=s_i$, $i=0,\ldots,m$, and not-a-knot conditions were used to obtain a unique representation. To simplify the implementation, the $L^2$, $H^1$, and $H^2$ norms in the parameter space were approximated by difference operators acting directly on the interpolation points $s_i$, $i=0,\ldots,m$.
To ensure mesh independence, the tests were repeated for different
numbers $m$ of interpolation points.
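A possible realization of this parametrization is sketched below, assuming SciPy is available; the helper names are ours, the spline uses not-a-knot conditions (SciPy's default), and the $H^2$-type penalty is approximated by difference quotients acting on the points $s_i$, as described above.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def make_parameter(s, u_min, u_max):
    # cubic spline a(.) through the points s_0,...,s_m on a uniform grid
    # of [u_min, u_max]; not-a-knot end conditions are SciPy's default
    nodes = np.linspace(u_min, u_max, len(s))
    return CubicSpline(nodes, s)

def h2_norm_sq(s, u_min, u_max):
    # difference-quotient surrogate for the squared H^2 norm of the spline
    s = np.asarray(s, dtype=float)
    du = (u_max - u_min) / (len(s) - 1)
    d1 = np.diff(s) / du
    d2 = np.diff(s, 2) / du ** 2
    return du * (np.sum(s ** 2) + np.sum(d1 ** 2) + np.sum(d2 ** 2))
\end{verbatim}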
\subsection{Minimization of the Tikhonov functional}
For minimization of the Tikhonov functional \eqref{eq:tikhonov}, we utilized a projected iteratively regularized Gau\ss-Newton method with regularization parameters $\alpha^n = c q^n$, $q<1$.
The bounds in assumption (A1) for the parameters were satisfied automatically for all iterates in our tests such that the projection step was never carried out.
The iteration was stopped by a discrepancy principle, i.e., as soon as $\|F(a^n) - h^\delta\| \le 1.5 \delta$ was satisfied for the first time.
The regularization parameter $\alpha^n$ of the last step was interpreted as the regularization parameter $\alpha$ of the Tikhonov functional \eqref{eq:tikhonov}.
We refer to \cite{EggerSchlottbom11} for details concerning such a strategy for the iterative minimization of the Tikhonov functional.
The discretizations of the derivative and adjoint operators $F'(a)$ and $F'(a)^*$ were implemented consistently, such that $(F'(a) h, \psi) = (h, F'(a)^* \psi)$ holds exactly also on the discrete level.
The linear systems of the Gau\ss-Newton method were then solved by a preconditioned conjugate gradient algorithm.
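The overall iteration can be summarized by the following sketch. It is a simplified illustration of the strategy described above and not the actual implementation: the Jacobian is assembled as a matrix and the Gau\ss-Newton system is solved directly instead of by preconditioned CG, the $H^2$ penalty is replaced by a Euclidean one, and the projection step is omitted since the constraints were never active in our tests.
\begin{verbatim}
import numpy as np

def irgn(F, J, a0, a_star, h_delta, delta, c=1.0, q=0.8, max_iter=100):
    # iteratively regularized Gauss-Newton method with alpha_n = c*q**n and
    # discrepancy-principle stopping ||F(a_n) - h_delta|| <= 1.5*delta;
    # F(a) returns the simulated measurements, J(a) the Jacobian matrix
    a, alpha = np.array(a0, dtype=float), c
    for n in range(max_iter):
        residual = F(a) - h_delta
        if np.linalg.norm(residual) <= 1.5 * delta:
            break
        alpha = c * q ** n
        Jn = J(a)
        lhs = Jn.T @ Jn + alpha * np.eye(a.size)
        rhs = Jn.T @ (h_delta - F(a)) + alpha * (a_star - a)
        a = a + np.linalg.solve(lhs, rhs)
    # alpha of the last step plays the role of the regularization
    # parameter in the Tikhonov functional
    return a, alpha
\end{verbatim}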
\subsection{Setup of the test problem}
As true damping parameter, we used the function
\begin{align} \label{eq:adag}
a^\dag(u) = u \sqrt{1+u^2} .
\end{align}
The asymptotic behaviour here is
$a(u) \approx u$ for $|u| \ll 1$ and $a(u) \approx u |u|$ for $|u| \gg 1$,
which corresponds to the expected behaviour of the friction forces in pipes \cite{LandauLifshitz6}.
Restricted to any bounded interval $[\underline{u},\overline{u}]$, the function $a^\dag$ satisfies the assumptions (A1).
For our numerical tests, we used the initial data $u_0 \equiv 0$, $p_0 \equiv 1$,
and we chose
\begin{align} \label{eq:g}
g_0(t)=g_1(t)=g(t)=2 \sin(\tfrac{\pi}{2T} t)^2
\end{align}
as boundary fluxes.
A variation of the time horizon $T$ thus allows us to tune the speed of variation in the boundary data,
while keeping the interval $[\underline{u},\overline{u}]$ of fluxes that arise at the boundary fixed.
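A hypothetical driver for generating synthetic measurements with this setup could look as follows; it reuses the \texttt{time\_step} sketch from above, and all names are ours.
\begin{verbatim}
import numpy as np

T, tau, N = 2.0, 1e-3, 200
a_dag = lambda u: u * np.sqrt(1.0 + u ** 2)             # true damping law
g = lambda t: 2.0 * np.sin(np.pi * t / (2.0 * T)) ** 2  # boundary fluxes g_0 = g_1
p, u = np.ones(N + 1), np.zeros(N)                      # p_0 = 1, u_0 = 0
pressure_drop = []
for n in range(int(round(T / tau))):
    t_new = (n + 1) * tau
    p, u = time_step(p, u, tau, a_dag, g(t_new), g(t_new))
    pressure_drop.append(p[0] - p[-1])                  # h(t) = p(0,t) - p(1,t)
\end{verbatim}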
\subsection{Simulation of measurement data}
The boundary data $g(t;T)$ and the resulting pressure drops $\triangle p(t;T)$ across the pipe are displayed
in Figure~\ref{fig:1} for different choices of $T$.
For comparison, we also display the pressure drop $\triangle \bar p$ obtained with the
linear forward model.
\begin{figure}[ht!]
\medskip
\includegraphics[height=3.2cm]{data1} \hspace*{0.75cm}
\includegraphics[height=3.2cm]{data2} \hspace*{0.75cm}
\includegraphics[height=3.2cm]{data5} \\[0.5cm]
\includegraphics[height=3.2cm]{data10}
\caption{Boundary flux $g(t)$ and pressure drops $\triangle p(t)$ and $\triangle \bar p(t)$ for the instationary and the linearized model for time horizon $T=1,2,5,10$. \label{fig:1}}
\end{figure}
The following observations can be made:
For small values of $T$, the pressure drop $\triangle p$ varies rapidly all over the time interval $[0,T]$ and
therefore deviates strongly from the pressure drop $\triangle \bar p$ of the linearized model corresponding to stationary
experiments.
In contrast, the pressure drop $\triangle p$ is close to that of the linearized model on the whole time interval $[0,T]$, when $T$ is large and therefore the variation in the boundary data $g(t)$ is small.
As expected from \eqref{eq:estOps}, the difference between $\triangle p$ and $\triangle \bar p$ becomes smaller when $T$ is increased.
A proper choice of the parameter $T$ thus allows us to tune our experimental setup
and to verify the conclusions obtained in Section~\ref{sec:hyp}.
\subsection{Convergence to steady state}
We next demonstrate in more detail that the solution
$(p(t),u(t))$ of the instationary problem is close to the steady states $(\bar p(t),\bar u(t))$ for
boundary data $\bar u(0)=\bar u(1)=g(t)$, provided that $g(t)$ varies sufficiently slowly; cf.~\eqref{eq:eps}.
In Table~\ref{tab:1}, we list the errors
\begin{align*}
e(T) := \max_{0 \le t \le T} \|p(t;T)-\bar p(t;T)\|_{L^2} + \|u(t;T)-\bar u(t;T)\|_{L^2}
\end{align*}
between the instationary and the corresponding stationary states
for different values of $T$ in the definition of the boundary data $g(t;T)$.
In addition, we also display the difference
\begin{align*}
d(T)=\max_{0 \le t \le T} |\triangle p(t) - \triangle \bar p(t)|
\end{align*}
in the measurements corresponding to the nonlinear and the linearized model.
\begin{table}[ht!]
\centering
\small
\begin{tabular}{c||c|c|c|c|c|c}
$T$ & $1$ & $2$ & $5$ & $10$ & $20$ & $50$ \\
\hline
$e(T)$ & $1.016$ & $0.647$ & $0.207$ & $0.105$ & $0.054$ & $0.022$ \\
\hline
$d(T)$ & $1.044$ & $1.030$ & $0.479$ & $0.225$ & $0.114$ & $0.045$
\end{tabular}
\medskip
\caption{Error $e(T)$ between instationary and stationary solution and difference $d(T)$ in the corresponding
measurements.\label{tab:1}}
\end{table}
The speed of variation in the boundary data decreases when $T$ becomes larger,
and we thus expect a monotonic decrease of the distance $e(T)$ to steady state
with increasing time horizon. The same can be expected for the error $d(T)$
in the measurements. This is exactly the behaviour that we observe in our numerical tests.
\subsection{Reconstructions for nonlinear and linearized model}
Let us now turn to the inverse problem
and compare the reconstructions for the nonlinear inverse problem obtained by Tikhonov regularization with those computed by the simple formula \eqref{eq:statsol} for the linearized inverse problem corresponding to stationary experiments.
The data for these tests are generated by simulation as explained before, and then perturbed with random noise such that $\delta=0.001$.
Since the noise level is rather small, the data perturbations do not have any visual effect on the reconstructions here;
see also Figure~\ref{fig:3} below.
In Figure~\ref{fig:2}, we display the corresponding results for measurements $h=\triangle p(\cdot;T)$ obtained for different time horizons $T$ in the definition of the boundary data $g(\cdot;T)$.
\begin{figure}[ht!]
\includegraphics[height=5cm]{a1}
\includegraphics[height=5cm]{a10} \\
\includegraphics[height=5cm]{a2}
\includegraphics[height=5cm]{a20} \\
\includegraphics[height=5cm]{a5}
\includegraphics[height=5cm]{a50}
\caption{True parameter $a^\dag$, reconstruction $a_\alpha^\delta$ obtained by Tikhonov regularization with initial guess $a^*$, and result $\bar a$ obtained by formula \eqref{eq:statsol}.
The data $h^\delta$ are perturbed by random noise of size $\delta=0.001$.
The images correspond to time horizons $T=1,2,5$ (left) and $T=10,20,50$ (right).\label{fig:2}}
\end{figure}
As can be seen from the plots, the reconstruction with Tikhonov regularization works well in all test cases.
The results obtained with the simple formula \eqref{eq:statsol}, however, show some systematic deviations due to model errors,
which become smaller when increasing $T$.
Recall that for large $T$, the speed of variation in the boundary fluxes $g(t;T)$ is small,
so that the system is close to steady state on the whole interval $[0,T]$.
The convergence of the reconstruction $\bar a$ for the linearized problem towards the true solution $a^\dag$
with increasing $T$ is thus in perfect agreement with our considerations in Section~\ref{sec:hyp}.
\subsection{Convergence and convergence rates}
In a last sequence of tests, we investigate the stability and accuracy of the
reconstructions $a_\alpha^\delta$ obtained with Tikhonov regularization in the presence of data noise.
Approximations for the minimizers $a_\alpha^\delta$ are computed numerically via the projected
iteratively regularized Gau\ss-Newton method as outlined above.
The iteration is stopped according to the discrepancy principle.
Table~\ref{tab:2} displays the reconstruction errors for different time horizons $T$ and different noise levels $\delta$.
\begin{table}[ht!]
\centering
\small
\begin{tabular}{c||c|c|c|c|c|c}
$\delta \backslash T$
& $1$ & $2$ & $5$ & $10$ & $20$ & $50$ \\
\hline
\hline
$0.10000$ & $0.8504$ & $0.3712$ & $0.1027$ & $0.0417$ & $0.0324$ & $0.0092$ \\
\hline
$0.05000$ & $0.6243$ & $0.2742$ & $0.0706$ & $0.0239$ & $0.0081$ & $0.0055$ \\
\hline
$0.02500$ & $0.3911$ & $0.1616$ & $0.0496$ & $0.0096$ & $0.0066$ & $0.0032$ \\
\hline
$0.01250$ & $0.2264$ & $0.1050$ & $0.0355$ & $0.0065$ & $0.0024$ & $0.0019$ \\
\hline
$0.00625$ & $0.1505$ & $0.0630$ & $0.0316$ & $0.0030$ & $0.0015$ & $0.0012$
\end{tabular}
\medskip
\caption{Reconstruction error $\|a_\alpha^\delta - a^\dag\|_{L^2(\underline{u},\overline{u})}$ for Tikhonov regularization for different noise levels $\delta$ and various time horizons $T$. \label{tab:2}}
\end{table}
Convergence is observed for all experimental setups, but the absolute errors decrease
monotonically with increasing time horizon $T$, which is partly explained by
our considerations in Section~\ref{sec:hyp}.
The reconstructions for time horizon $T=2$, corresponding to the third column of Table~\ref{tab:2},
are depicted in Figure~\ref{fig:3}; also compare with Figure~\ref{fig:2}.
\begin{figure}[ht!]
\includegraphics[height=5cm]{rec1}
\includegraphics[height=5cm]{rec4} \\
\includegraphics[height=5cm]{rec2}
\includegraphics[height=5cm]{rec5} \\
\includegraphics[height=5cm]{rec3}
\includegraphics[height=5cm]{rec6}
\caption{True parameter $a^\dag$, reconstruction $a_\alpha^\delta$ obtained by Tikhonov regularization, and initial guess $a^*$ for time horizon $T=2$ and noise levels $\delta=0.1,0.05,0.025$ (left) and $\delta=0.0125,0.00625,0.003125$ (right).\label{fig:3}}
\end{figure}
Note that already for a small time horizon $T=2$ and large noise level $\delta$ of several percent, one can obtain good reconstructions of the damping profile. For larger time horizon or smaller noise levels, the reconstruction $a_\alpha^\delta$ visually coincides completely with the true solution $a^\dag$.
This is in good agreement with our considerations in Section~\ref{sec:hyp}.
\section{Discussion} \label{sec:sum}
In this paper, we investigated the identification of a nonlinear damping law in a
semilinear hyperbolic system from additional boundary measurements. Uniqueness and
stability of the reconstructions obtained by Tikhonov regularization were observed
in all numerical tests. This behaviour could be explained theoretically by considering
the nonlinear inverse problem as a perturbation of a nearby linear problem, for which
uniqueness and stability can be proven rigorously.
In the coefficient inverse problem under investigation, the distance to the approximating
linearization could be chosen freely by a proper experimental setup. A similar argument was
already used in \cite{EggerPietschmannSchlottbom15} for the identification of a nonlinear
diffusion coefficient in a quasi-linear heat equation.
The general strategy might however be useful in a more general context and for many other applications.
Based on the uniqueness and stability of the linearized inverse problem, we could obtain
stability results for the nonlinear problem \emph{up to perturbations}; see
Section~\ref{sec:hyp} for details. Such a concept might be useful as well for the convergence
analysis of other regularization methods and more general inverse problems.
In all numerical tests we observed global convergence of an iterative method for the minimization
of the Tikhonov functional. Since the minimizer is unique for the linearized problem, such
a behaviour seems not too surprising. At the moment, we can however not give a rigorous
explanation of that fact. Let us note, however, that H\"older stability of the inverse
problem can be employed to prove convergence and convergence rates for Tikhonov regularization \cite{ChengHofmannLu14,ChengYamamoto00} and also global convergence of iterative regularization methods
\cite{deHoopQiuScherzer12} without further assumptions.
An extension of these results to inverse problems satisfying \emph{approximate stability conditions},
as the one considered here, might be possible.
\section*{Acknowledgements}
The authors are grateful for financial support by the German Research Foundation (DFG) via grants IRTG~1529, GSC~233, and TRR~154.
\section{Introduction}
Extending the well-known result of extremal graph theory by Tur\'an, E. Gy\H ori and A.V. Kostochka \cite{ek} and independently F.R.K. Chung \cite{chung} proved the following theorem. For an arbitrary graph $G$, let $p(G)$ denote the minimum of $\sum|V(G_i)|$ over all decompositions of $G$ into edge disjoint cliques $G_1,G_2,\dots$. Then $p(G)\le2t_2(n)$ and equality holds if and only if $G\cong T_2(n)$. Here $T_2(n)$ is the $2$-partite Tur\'an graph on $n$ vertices and $t_2(n)=\lfloor n^2/4\rfloor$ is the number of edges of this graph. P. Erd\H os later suggested to study the weight function $p^*(G)=\min \sum(|V(G_i)|-1)$. The first author \cite{ervinsurvey} started to study this function and set out to prove the conjecture $p^*(G)\le t_2(n)$ in the special case when $G$ is $K_4$-free. This 24-year-old conjecture was worded equivalently as follows.
\begin{conj} \label{mainconj}
Every $K_4$-free graph on $n$ vertices and $t_2(n)+m$ edges contains at least $m$ edge disjoint triangles.
\end{conj}
This was only known if the graph is $3$-colorable i.e. $3$-partite.
In \cite{chinese}, towards proving the conjecture, it was shown that every $K_4$-free graph with $t_2(n)+k$ edges always contains at least $32k/35\ge 0.9142k$ edge-disjoint triangles, and that if $k\ge 0.0766 n^2$ then it contains at least $k$ edge-disjoint triangles.
Their main tool is a nice and simple to prove lemma connecting the number of edge-disjoint triangles with the number of all triangles in a graph. In this paper using this lemma and proving new bounds about the number of all triangles in $G$, we settle the above conjecture:
\begin{thm}\label{thm:main}
Every $K_4$-free graph on $n^2/4+k$ edges contains at least $\lceil k\rceil$ edge-disjoint triangles.
\end{thm}
This result is best possible, as there is equality in Theorem \ref{thm:main} for every graph which we get by taking a $2$-partite Tur\'an graph and putting a triangle-free graph into one side of this complete bipartite graph. Note that this construction has roughly at most $n^2/4+n^2/16$ edges, while in general in a $K_4$-free graph $k\le n^2/12$, and so it is possible (and we conjecture so) that an even stronger theorem can be proved if we have more edges; for further details see the Remarks section.
\section{Proof of Theorem \ref{thm:main}}
From now on we are given a graph $G$ on $n$ vertices with $e=n^2/4+k$ edges.
\begin{defi}
Denote by $\te$ the maximum number of edge disjoint triangles in $G$ and by $\ta$ the number of all triangles of $G$.
\end{defi}
The idea is to bound $\te$ by $\ta$. For that we need to know more about the structure of $G$, the next definitions are aiming towards that.
\begin{defi}
A {\bf good partition} $P$ of $V(G)$ is a partition of $V(G)$ to disjoint sets $C_i$ (the cliques of $P$) such that every $C_i$ induces a complete subgraph in $G$.
The {\bf size} $r(P)$ of a good partition $P$ is the number of cliques in it. The cliques of a good partition $P$ are ordered such that their size is non-decreasing: $|C_0|\le|C_1|\le\dots \le|C_{r(P)-1}|$.
A good partition is a {\bf greedy partition} if for every $l\ge 1$ the union of all the parts of size at most $l$ induces a $K_{l+1}$-free subgraph, that is, for every $i\ge 1$, $C_0\cup C_1\cup\dots\cup C_i$ is $K_{|C_i|+1}$-free.
(See Figure \ref{fig:defgp} for examples.)
\end{defi}
{\it Remark.} In our paper $l$ is at most 3 typically, but in some cases it can be arbitrary.
Note that the last requirement in the definition holds also trivially for $i=0$.
The name greedy comes from the fact that a good partition is a greedy partition if and only if we can build it greedily in backwards order, by taking a maximal size complete subgraph $C\subset V(G)$ of $G$ as the last clique in the partition, and then recursively continuing this process on $V(G)\setminus C$ until we get a complete partition. This also implies that every $G$ has at least one greedy partition. If $G$ is $K_4$-free then a greedy partition is a partition of $V(G)$ into one-vertex sets, two-vertex sets spanning an edge and three-vertex sets spanning a triangle, such that the union of the size-$1$ cliques of $P$ is an independent set and the union of the size-$1$ and size-$2$ cliques of $P$ is triangle-free.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{greedypartitiondef.eps}
\caption{A greedy partition of an arbitrary graph and of a complete $3$-partite graph.}
\label{fig:defgp}
\end{figure}
\begin{lem}[\cite{chinese}]
Let $G$ be a $K_4$-free graph and $P$ be a greedy partition of $G$. Then
$$\te\ge \frac{\ta}{r(P)}.$$
\end{lem}
For the sake of keeping the paper self-contained, we prove this lemma too.
\begin{proof}
Let $r=r(P)$ and the cliques of the greedy partition be $C_0,C_1,\dots C_{r-1}$. With every vertex $v\in C_i$ we associate the value $h(v)=i$ and with every triangle of $G$ we associate the value $h(T)=\sum _{v\in T} h(v) \mod r$. As there are $r$ possible associated values, by the pigeonhole principle there is a family $\cal T$ of at least $\ta/r$ triangles that have the same associated value. It is easy to check that two triangles sharing an edge cannot have the same associated value if $G$ is $K_4$-free: if the triangles $uvw$ and $uvx$ had the same value, then $h(w)=h(x)$, so $w$ and $x$ would lie in the same clique of $P$ and hence be adjacent, and $u,v,w,x$ would span a $K_4$. Thus $\cal T$ is a family of at least $\ta/r$ edge-disjoint triangles in $G$, as required.
\end{proof}
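The counting in the proof is constructive. The following small sketch illustrates it; it is our own, and it assumes the graph is given as a dictionary \texttt{adj} of neighbour sets and the greedy partition as a list of cliques.
\begin{verbatim}
from itertools import combinations

def edge_disjoint_triangles(adj, cliques):
    # assign h(v) = index of the clique containing v, group all triangles by
    # the sum of their h-values mod r, and return the largest class; for a
    # K4-free graph the triangles in one class are pairwise edge-disjoint
    r = len(cliques)
    h = {v: i for i, clique in enumerate(cliques) for v in clique}
    classes = {}
    for u, v, w in combinations(sorted(adj), 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            classes.setdefault((h[u] + h[v] + h[w]) % r, []).append((u, v, w))
    return max(classes.values(), key=len, default=[])
\end{verbatim}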
This implies that $\te\ge \frac{\ta}{r(P)}$; moreover, the inequality holds for every greedy partition $P$ of $G$.
Note that the next theorem holds for every graph, not only for $K_4$-free graphs.
\begin{thm}\label{thm:tbound}
Let $G$ be a graph and $P$ a greedy partition of $G$. Then $t\ge r(P)\cdot(e-n^2/4)$.
\end{thm}
By choosing an arbitrary greedy partition $P$ of $G$, the above lemma and theorem together imply that for a $K_4$-free $G$ we have $\te\ge \frac{\ta}{r(P)}\ge e-n^2/4=k$, concluding the proof of Theorem \ref{thm:main}.
\bigskip
Before we prove Theorem \ref{thm:tbound}, we make some preparations.
\begin{lem}\label{lem:twocliques}
Given a $K_{b+1}$-free graph $G$ on vertex set $A\cup B$, $|A|=a\le b=|B|$, $A$ and $B$ both inducing complete graphs, there exists a matching of non-edges between $A$ and $B$ covering $A$. In particular, $G$ has at least $a$ non-edges.
\end{lem}
\begin{proof}
Denote by $\bar G$ the complement of $G$ (the edges of $\bar G$ are the non-edges of $G$).
To be able to apply Hall's theorem, we need that for every subset $A'\subset A$ the neighborhood $N(A')$ of $A'$ in $\bar G$ intersects $B$ in at least $|A'|$ vertices. Suppose there is an $A'\subset A$ for which this does not hold, thus for $B'=B\setminus N(A')$ we have $|B'|= |B|-|B\cap N(A')|\ge b-(|A'|-1)$. Then $A'\cup B'$ is a complete subgraph of $G$ on at least $|A'|+b-(|A'|-1)=b+1$ vertices, contradicting that $G$ is $K_{b+1}$-free.
\end{proof}
\begin{obs}\label{obs:Pdoesnotmatter}
If $G$ is complete $l$-partite for some $l$ then it has essentially one greedy partition, i.e., all greedy partitions of $G$ have the same clique sizes and have the same number of cliques, which is the size of the biggest part (biggest independent set) of $G$.
\end{obs}
We regard the following function depending on $G$ and $P$ (we write $r=r(P)$): $$f(G,P)=r(e-n^2/4)-t.$$ We are also interested in the function $$g(G,P)=r(e-r(n-r))-t.$$ Notice that $g(G,P)\ge f(G,P)$ and $f$ is a monotone increasing function of $r$ (but $g$ is not!) provided that $e-n^2/4\ge 0$. Also, using Observation \ref{obs:Pdoesnotmatter} we see that if $G$ is complete multipartite then $r$, $f$ and $g$ do not depend on $P$, thus in this case we may write simply $f(G)$ and $g(G)$.
\begin{lem}\label{lem:completepartite}
If $G$ is a complete $l$-partite graph then $g(G)\le 0$ and if $G$ is complete $3$-partite (some parts can have size $0$) then $g(G)= 0$.
\end{lem}
\begin{proof}
Let $G$ be a complete $l$-partite graph with part sizes $c_1\le \dots \le c_l$. By Observation \ref{obs:Pdoesnotmatter}, $r=r(P)=c_l$ for any greedy partition.
We have $n=\sum_i c_i$, $e=\sum_{i< j} c_ic_j$, $t=\sum_{i< j< m} c_ic_jc_m$ and so
\begin{align*}
g(G) &= r(e-r(n-r))-t = c_l\Big(\sum_{i< j} c_ic_j-c_l\sum_{i< l} c_i\Big)-t \\
&= c_l\sum_{i<j<l} c_ic_j-\sum_{i<j<m} c_ic_jc_m = -\sum_{i<j<m<l} c_ic_jc_m\le 0.
\end{align*}
Moreover, if $l\le 3$ then there are no indices $i<j<m<l$, and thus the last inequality holds with equality.
\end{proof}
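The computation is easy to check numerically; the following sketch (our own, with the part sizes given as a list of integers) evaluates $g(G)$ directly from the part sizes.
\begin{verbatim}
from itertools import combinations

def g_complete_multipartite(parts):
    # g(G) = r(e - r(n - r)) - t for the complete multipartite graph with the
    # given part sizes; r is the largest part, e the number of edges and
    # t the number of triangles
    n, r = sum(parts), max(parts)
    e = sum(a * b for a, b in combinations(parts, 2))
    t = sum(a * b * c for a, b, c in combinations(parts, 3))
    return r * (e - r * (n - r)) - t
\end{verbatim}
For instance, \texttt{g\_complete\_multipartite([2, 3, 5])} returns $0$ (three parts), while \texttt{g\_complete\_multipartite([1, 2, 3, 4])} returns $-6=-1\cdot 2\cdot 3$, in accordance with the formula above.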
\begin{figure}[t]
\centering
\includegraphics[scale=1]{ggpdef.eps}
\caption{A generalized greedy partition of an arbitrary graph (heavy edges represent complete bipartite graphs).}
\label{fig:defggp}
\end{figure}
In the proof we need a generalization of a greedy partition, which is similar to a greedy partition, with the only difference that the first part $C_0$ in the partition $P$ is a blow-up of a clique instead of a clique; see Figure \ref{fig:defggp} for an example.
\begin{defi}
A {\bf generalized greedy partition} ({\bf ggp} in short) $P$ of some graph $G$ is a partition of $V(G)$ into the sequence of disjoint sets $C_0,C_1,\dots C_l$ such that $C_0$ induces a complete $l_0$-partite graph, $C_i$, $i\ge 1$, induces a clique and $l_0\le |C_1|\le \dots \le |C_l|$. We require that for every $i\ge 1$, $C_0\cup C_1\cup\dots\cup C_i$ is $K_{|C_i|+1}$-free.
We additionally require that if two vertices are not connected in $C_0$ (i.e., are in the same part of $C_0$) then they have the same neighborhood in $G$, i.e., vertices in the same part of $C_0$ are already symmetric.
The {\bf size} $r(P)$ of a generalized greedy partition $P$ is defined as the size of the biggest independent set of $C_0$ plus the number of parts of $P$ besides $C_0$.
\end{defi}
Note that the last requirement in the definition holds also for $i=0$ in the natural sense that $C_0$ is $K_{l_0+1}$-free.
Observe that the requirements guarantee that in a ggp $P$ if we contract the parts of $C_0$ (which is well-defined because of the required symmetries in $C_0$) then $P$ becomes a normal (non-generalized) greedy partition (of a smaller graph).
Using Observation \ref{obs:Pdoesnotmatter} on $C_0$, we get that the size of a ggp $P$ is equal to the size of any underlying (normal) greedy partition $P'$ of $G$ which we get by taking any greedy partition of $C_0$ and then the cliques of $P\setminus\{C_0\}$. Observe that for the sizes of $P$ and $P'$ we have $r(P)=r(P')$, in fact this is the reason why the size of a ggp is defined in the above way.
Finally, as we defined the size $r(P)$ of a ggp $P$, the definitions of the functions $f(G,P)$ and $g(G,P)$ extend to a ggp $P$ as well. With this notation Lemma \ref{lem:completepartite} is equivalent to the following:
\begin{cor}\label{cor:onepartggp}
If a ggp $P$ has only one part $C_0$, which is a complete $l_0$-partite graph, then $g(G,P)\le 0$ and if $l_0\le 3$ then $g(G,P)=0$.
\end{cor}
\begin{proof}[Proof of Theorem \ref{thm:tbound}]
The theorem is equivalent to the fact that for every graph $G_0$ and greedy partition $P_0$ we have $f(G_0,P_0)\le 0$.
Let us first give a brief summary of the proof. We will repeatedly do some symmetrization steps, getting new graphs and partitions, ensuring that during the process $f$ cannot decrease. At the end we will reach a complete $l$-partite graph $G_*$ for some $l$. However, by Lemma \ref{lem:completepartite}, for such graphs $g(G_*,P_*)\le 0$ independently of $P_*$, which gives $f(G_0,P_0)\le f(G_*)\le g(G_*)\le 0$. This proof method is similar to the proof from the book of Bollob\'as \cite{bollobas} (Section~VI, Theorem~1.7) for a (not optimal) lower bound on $t$ by a function of $e$ and $n$. An additional difficulty comes from the fact that our function also depends on $r$, thus during the process we need to maintain a greedy partition whose size is not decreasing either.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{proofalg.eps}
\caption{One step of the symmetrization algorithm SymmAlg (dashed lines denote non-edges).}
\label{fig:alg}
\end{figure}
Now we give the details of the symmetrization. The algorithm SymmAlg applies the symmetrization algorithms SymmAlgSubMatch and SymmAlgSubMerge alternately; for an example see Figure \ref{fig:alg}.
{\bf SymmAlg:}
We start the process with the given $G_0$ and $P_0$. Here $P_0$ is a normal greedy partition, which can also be regarded as a ggp in which all parts of the first blown-up clique $C_0$ have size $1$.
In a general step of SymmAlg before running SymmAlgSubMatch we have a $G$ and a ggp $P$ of $G$ such that $f(G_0,P_0)\le f(G,P)$. This trivially holds (with equality) before the first run of SymmAlgSubMatch.
{\bf SymmAlgSubMatch:}
If the actual ggp $P$ contains only one part $C_0$ (which is a blow-up of a clique) then we {\bf STOP} SymmAlg.
Otherwise we do the following.
Let the blown-up clique $C_0$ be complete $l$-partite. Temporarily contract the parts of $C_0$ to get a smaller graph in which $P$ becomes a normal greedy partition $P_{temp}$; let $A$ ($|A|=a$) be the first clique (the contraction of $C_0$) and $B=C_1$ ($a\le b=|B|$) be the second clique of $P_{temp}$. As $P$ is a greedy partition, $A\cup B$ must be $K_{b+1}$-free, so we can apply Lemma \ref{lem:twocliques} on $A$ and $B$ to conclude that there is a matching of non-edges between $A$ and $B$ that covers $A$. In $G$ this gives a matching between the parts of the blown-up clique $C_0$ and the vertices of the clique $C_1$ such that if a part $A_i\subset C_0$ is matched with $b_i\in C_1$ then there are no edges in $G$ between $A_i$ and $b_i$.
For every such pair $(A_i,b_i)$ we do the following symmetrization. Let $v\in A_i$ be an arbitrary representative of $A_i$ and let $w=b_i$. Fix $r_0=r(P)$ and let $f_v=r_0d_v-t_v$ where $d_v$ is the degree of $v$ in $G$ and $t_v$ is the number of triangles in $G$ incident to $v$, or equivalently the number of edges spanned by $N(v)$. Similarly $f_w=r_0d_w-t_w$. Clearly, $f(G,P)=r_0(e-n^2/4)-t=|A_i|f_v+f_w+f_0$ where $f_0$ depends only on the graph induced by the vertices of $V(G)\setminus (A_i\cup\{w\})$. Here we used that there are no edges between $A_i$ and $b_i$. If $f_v\ge f_w$ then we replace $w$ by a copy of $v$ to get the new graph $G_1$, otherwise we replace $A_i$ by $|A_i|$ copies of $w$ to get the new graph $G_1$. In both cases
$$r_0(e_1-n^2/4)-t_1=(|A_i|+1)\max(f_{v},f_{w})+f_0\ge$$$$\ge |A_i|f_v+f_w+f_0=r_0(e-n^2/4)-t.$$
Note that after this symmetrization $V(G)\setminus (A_i\cup\{w\})$ spans the same graph, thus we can do this symmetrization for all pairs $(A_i,b_i)$ one-by-one (during these steps for some vertex $v$ we define $f_v$ using the $d_v$ and $t_v$ of the current graph, while $r_0$ remains fixed) to get the graphs $G_2,G_3,\dots$. At the end we get a graph $G'$ for which
$$r_0(e'-n^2/4)-t'\ge r_0(e-n^2/4)-t=f(G,P).$$ Now we proceed with SymmAlgSubMerge, which modifies $G'$ further so that the final graph has a ggp of size at least $r_0$.
{\bf SymmAlgSubMerge:}
In this graph $G'$, for all $i$, all vertices in $A_i\cup \{b_i\}$ have the same neighborhood (and form independent sets). Together with the non-matched vertices of $C_1$ regarded as size-$1$ parts we get that in $G'$ the graph induced by $C_0\cup C_1$ is a blow-up of a (not necessarily complete) graph on $b$ vertices. To make this complete we make another series of symmetrization steps. Take an arbitrary pair of parts $V_1$ and $V_2$ which are not connected (together they span an independent set) and symmetrize them as well: take the representatives $v_1\in V_1$ and $v_2\in V_2$ and then $r_0(e'-n^2/4)-t'=|V_1|f_{v_1}+|V_2|f_{v_2}+f_1$ as before, $f_1$ depending only on the subgraph spanned by $G'\setminus (V_1\cup V_2)$. Again replace the vertices of $V_1$ by copies of $v_2$ if $f_{v_2}\ge f_{v_1}$ and replace the vertices of $V_2$ by copies of $v_1$ otherwise. In the new graph $G'_1$, we have
$$r_0(e_1'-n^2/4)-t_1'=(|V_1|+|V_2|)\max(f_{v_1},f_{v_2})+f_1\ge$$$$\ge |V_1|f_{v_1}+|V_2|f_{v_2}+f_1=r_0(e'-n^2/4)-t'.$$
Now $V_1\cup V_2$ becomes one part and in $G'_1$ $C_0\cup C_1$ spans a blow-up $C_0'$ of a (not necessarily complete) graph with $b-1$ parts. Repeating this process we end up with a graph $G''$ for which
$$r_0(e''-n^2/4)-t''\ge r_0(e'-n^2/4)-t'\ge f(G,P).$$
In $G''$ the set $C_0\cup C_1$ spans a blow-up $C_0''$ of a complete graph with at most $|C_1|$ parts. Moreover, $V\setminus(C_0\cup C_1)$ spans the same graph in $G''$ as in $G$, thus $C_0''$ together with the cliques of $P$ except $C_0$ and $C_1$ satisfies all the requirements to form a ggp $P''$. If the biggest part of $C_0$ had size $c_l$, then in $C_0'$ this part gained one vertex, and it may have been further symmetrized during the steps leading to $G''$, but in any case the biggest part of $C_0''$ has size at least $c_l+1$. Thus the size of the new ggp $P''$ is $r(P'')\ge c_l+1+(r(P)-c_l-1)\ge r(P)=r_0$.
If $e''-n^2/4< 0$, then we {\bf STOP} SymmAlg and conclude that we have $f(G_0,P_0)\le f(G,P)\le 0$, finishing the proof.
Otherwise $$f(G'',P'')=r(P'')(e''-n^2/4)-t''\ge r_0(e''-n^2/4)-t''\ge f(G,P)\ge f(G_0,P_0),$$
and so $G'',P''$ is a proper input to SymmAlgSubMatch. We set $G:=G''$ and $P:=P''$ and {\bf GOTO} SymmAlgSubMatch. Note that the number of parts in $P''$ is one less than it was in $P$.
This ends the description of the running of SymmAlg.
As the number of cliques in the ggp strictly decreases after each SymmAlgSubMerge, SymmAlg must stop after finitely many steps.
When SymmAlg STOPs, either we can already conclude that $f(G_0,P_0)\le 0$, or SymmAlg STOPped because in the current graph $G_*$ the current ggp $P_*$ had only one part, a blow-up of a clique. That is, the final graph $G_*$ is a complete $l_*$-partite graph for some $l_*$ (and such a graph has essentially one possible greedy partition). We remark that if the original graph $G_0$ was $K_m$-free for some $m$ then $G_*$ is also $K_m$-free, i.e., $l_*\le m-1$. As $f$ never decreased during the process, we get using Corollary \ref{cor:onepartggp} that $f(G_0,P_0)\le f(G_*,P_*)\le g(G_*,P_*)\le 0$, finishing the proof of the theorem.
\end{proof}
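To make the basic symmetrization step used above concrete, here is a minimal Python sketch (illustrative only: the dictionary-of-neighbour-sets graph representation, the function names and the toy example are ours, not part of the proof). For two non-adjacent vertices $v,w$ it compares $f_v=r_0d_v-t_v$ and $f_w=r_0d_w-t_w$ and replaces the vertex with the smaller value by a copy of the other, which, as in the proof, cannot decrease $r_0(e-n^2/4)-t$.
\begin{verbatim}
# Illustrative sketch of one symmetrization step (graphs as dicts of neighbour sets).

def f_value(G, v, r0):
    # f_v = r0*d_v - t_v, where t_v is the number of edges spanned by N(v)
    nbrs = G[v]
    t_v = sum(1 for a in nbrs for b in nbrs if a < b and b in G[a])
    return r0 * len(nbrs) - t_v

def symmetrize_pair(G, v, w, r0):
    # v and w are assumed non-adjacent; the vertex with the smaller f-value
    # is replaced by a copy of the other one
    assert w not in G[v]
    keep, copy = (v, w) if f_value(G, v, r0) >= f_value(G, w, r0) else (w, v)
    for u in G[copy]:
        G[u].discard(copy)      # detach `copy` from its old neighbours
    G[copy] = set(G[keep])      # give `copy` the neighbourhood of `keep`
    for u in G[keep]:
        G[u].add(copy)
    return G

# Toy example: the path a-b-c with r0 = 2; a and c are non-adjacent.
G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(symmetrize_pair(G, 'a', 'c', r0=2))
\end{verbatim}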
\section{Remarks}
In the proof of Theorem \ref{thm:tbound}, we can change $f$ to any function that depends on $r,n,e,t,k_4,k_5,\dots$ (where $k_i(G)$ is the number of complete subgraphs of $G$ on $i$ vertices), is monotone in $r$, and is linear in the rest of the variables (when $r$ is regarded as a constant), to conclude that the maximum of such an $f$ is attained on some complete multipartite graph. Moreover, as the symmetrization steps do not increase the clique number of $G$, if the clique number of $G$ is $m$ then $f(G,P)$ is upper bounded by the maximum of $f(G_*)$ taken over the family of graphs $G_*$ that are complete $m$-partite (some parts can be empty).
As a possible strengthening of Theorem \ref{thm:tbound}, it may be that $f$ can be replaced by $g$, i.e., that the following also holds:
\begin{conj}\label{conj:nice}
If $G$ is a $K_4$-free graph and $r=r(P)$ is the size of an arbitrary greedy partition $P$ of $G$, then $t\ge r(e-r(n-r))$ and so $t_e\ge e-r(n-r)$.
\end{conj}
This inequality is nicer than Theorem \ref{thm:tbound} as it holds with equality for all complete $3$-partite graphs. However, we cannot prove it using the same methods, as it is not monotone in $r$. Note that the optimal general bound for $t$ (depending on $e$ and $n$; see \cite{fishersolow} for $K_4$-free graphs and \cite{fisherpaper, razborov} for arbitrary graphs) does not hold with equality for certain complete $3$-partite graphs; thus, in a sense, this statement would be an improvement on these results for the case of $K_4$-free graphs (by adding a dependence on $r$). More specifically, it is easy to check that there are two different complete $3$-partite graphs with a given $e$ and $n$ (assuming that the required part sizes are integers): for one of them Fisher's bound holds with equality, but for the other one it does not (while of course Conjecture \ref{conj:nice} holds with equality in both cases).
As we mentioned in the Introduction, in the examples showing that our theorem is sharp, $k$ is roughly at most $n^2/16$, while in general in a $K_4$-free graph $k\le n^2/12$; thus for bigger $k$ it is possible that one can prove a stronger result. Nevertheless, the conjectured bound $t_e\ge e-r(n-r)$ is exact for every $e$ and $r$, as shown by the graphs that we get by taking a complete bipartite graph with parts of size $r$ and $n-r$ and putting an arbitrary triangle-free graph into the side of size $n-r$. For a greedy partition of size $r$ we have $e\le r(n-r)+(n-r)^2/4$ (this follows directly from Claim \ref{claim:r2} below), thus these examples cover all combinations of $e$ and $r$, except when $e<r(n-r)$, in which case we trivially have at least $0$ triangles while the lower bound $e-r(n-r)$ on the number of triangles is negative.
\begin{claim}\label{claim:r2}
If $G$ is a $K_4$-free graph, $P$ is a greedy partition of $G$, $r=r(P)$ is the size of $P$ and $r_2$ is the number of cliques in $P$ of size at least $2$, then $e\le r(n-r)+r_2(n-r-r_2)$.
\end{claim}
\begin{proof}
Let $s_1,s_2,s_3$ be the number of cliques of $P$ of size $1,2,3$, respectively. Then $r=s_1+s_2+s_3$, $n-r=s_2+2s_3$, $r_2=s_2+s_3$, and $n-r-r_2=s_3$.
Applying Lemma \ref{lem:twocliques} to every pair of cliques in $P$, and adding the $s_2+3s_3$ edges inside the cliques, we get that the number of edges of $G$ satisfies
$$e\le {s_1\choose 2}(1\cdot 1-1)+s_1s_2(1\cdot 2-1)+s_1s_3(1\cdot 3-1)+{s_2\choose 2}(2\cdot 2-2)+s_2s_3(2\cdot 3-2)+{s_3\choose 2}(3\cdot 3-3)+s_2+3s_3=$$$$=s_1s_2+2s_1s_3+s_2^2+4s_2s_3+3s_3^2=(s_1+s_2+s_3)(s_2+2s_3)+(s_2+s_3)s_3=r(n-r)+r_2(n-r-r_2).$$
\end{proof}
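The algebraic simplification in the last step above can also be checked mechanically; the following short sympy snippet (illustrative only) verifies the identity.
\begin{verbatim}
from sympy import symbols, expand, simplify

s1, s2, s3 = symbols('s1 s2 s3')

# pairwise bounds from Lemma twocliques, plus the s2 + 3*s3 edges inside the cliques
lhs = (s1*(s1-1)/2*(1*1-1) + s1*s2*(1*2-1) + s1*s3*(1*3-1)
       + s2*(s2-1)/2*(2*2-2) + s2*s3*(2*3-2) + s3*(s3-1)/2*(3*3-3)
       + s2 + 3*s3)

r, nr, r2 = s1 + s2 + s3, s2 + 2*s3, s2 + s3   # r, n-r, r_2
rhs = r*nr + r2*(nr - r2)

print(simplify(expand(lhs - rhs)))              # prints 0
\end{verbatim}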
Finally, as an additional motivation for Conjecture \ref{conj:nice}, we show that it holds in the very special case when $G$ is triangle-free, that is, when $t=t_e=0$. Note that for a triangle-free graph the size-$2$ cliques of a greedy partition define a non-augmentable matching of $G$.
\begin{claim}
If $G$ is a triangle-free graph and $r=r(P)$ is the size of an arbitrary greedy partition of $G$, i.e., $G$ has a non-augmentable matching on $n-r$ edges, then $0\ge e-r(n-r)$.
\end{claim}
\begin{proof}
We need to show that $e\le r(n-r)$. By Claim \ref{claim:r2}, $e\le r(n-r)+r_2(n-r-r_2)$ where $r_2$ is the number of cliques in $P$ of size at least $2$. If $G$ is triangle-free, then $r_2=n-r$ and so $e\le r(n-r)$ follows.
Let us also give another, simple proof by induction. As $G$ is triangle-free, $P$ is a partition of $V(G)$ into sets inducing single vertices and edges, thus $r\le n$.
We proceed by induction on $n-r$. If $n-r=0$ then $P$ is a partition into single vertices only. As $P$ is greedy, $G$ contains no edges, so $e=0$ and we are done. In the inductive step, for some $n-r>0$, take a part of $P$ inducing an edge and delete these two vertices. Now we have a triangle-free graph $G'$ on $n-2$ vertices and a greedy partition $P'$ of $G'$ that has $r-1$ cliques, thus we can apply induction to $G'$ (as $n'-r'=n-2-(r-1)=n-r-1<n-r$) to conclude that $G'$ has at most $(r-1)(n-1-r)$ edges. We deleted at most $n-1$ edges: indeed, as the graph is triangle-free, the two deleted vertices had no common neighbor, so altogether they had edges to at most $n-2$ other vertices, plus the edge between them. Thus $G$ had at most $n-1+(r-1)(n-1-r)=r(n-r)$ edges, finishing the inductive step.
\end{proof}
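As a quick numerical sanity check of the claim, one can sample small triangle-free graphs, take a greedy maximal (hence non-augmentable) matching, and test $e\le r(n-r)$; a sketch of such a check (the random model, parameters and names are ours) is the following.
\begin{verbatim}
import itertools, random

def random_triangle_free(n, p=0.3, seed=0):
    # add a random edge only if it creates no triangle
    rng, adj = random.Random(seed), {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p and not (adj[u] & adj[v]):
            adj[u].add(v); adj[v].add(u)
    return adj

def greedy_maximal_matching_size(adj):
    matched, m = set(), 0
    for u in adj:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched:
                matched |= {u, v}; m += 1
                break
    return m          # the unmatched vertices form an independent set

for seed in range(500):
    n = 8
    adj = random_triangle_free(n, seed=seed)
    e = sum(len(s) for s in adj.values()) // 2
    r = n - greedy_maximal_matching_size(adj)   # size of the greedy partition
    assert e <= r * (n - r)
print("e <= r(n-r) held on all sampled triangle-free graphs")
\end{verbatim}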
\section{Introduction}\label{intro}
The interaction between a quantum system and a quantum measurement apparatus, with only unitary evolution, would entangle the two initially uncorrelated systems so that information about the system is recorded in a set of apparatus states \cite{N32}. Because an entangled state exhibits correlations regardless of the system basis in which it is written, this seems to leave an ambiguity about which system observable the apparatus has actually measured. To get around this problem, Zurek noted that a macroscopic apparatus will be continuously interacting with its environment, and introduced the idea of a `pointer basis' for the quantum apparatus \cite{Zur81}. For an ideally engineered apparatus, this can be defined as the set of pure apparatus states which do not evolve and never enter into a superposition \cite{Zur81, ZHP93}. More realistically, the environmental interaction will cause \emph{decoherence}, which turns a quantum superposition of pointer states into a classical mixture, on a time scale faster than that on which any pointer state evolves. In such a context, the original notion has been modified to define the pointer states as the least unstable pure states~\cite{ZHP93}, i.e.\ the pure states that have the slowest rate of entropy increase for a given coupling to the environment.
After an apparatus (or, more generally, any quantum system) has undergone decoherence its state will be, in general, mixed. It is represented by a state matrix $\rho$. Mathematically, there are infinitely many ways to write a mixed state as a convex combination of pure states $\{\pi_k\}_k$ (a basis) with corresponding weights $\{\wp_k\}_k$. We shall refer to the set of ordered pairs $\{ (\wp_k,\pi_k) \}_k$ as a pure-state ensemble. Each ensemble suggests an \emph{ignorance interpretation} for the mixed state: the system is in one of the pure states $\pi_k$, but with incomplete information, one cannot tell which one it is. However, Wiseman and Vaccaro have shown that not all such ensembles are physically equivalent \cite{WV01} --- only some ensembles are `physically realizable' (PR). A PR ensemble $\{ (\wp_k,\pi_k) \}_k$ is one such that an experimenter can find out which pure state out of $\{\pi_k\}_k$ the system is in at all times (in the long-time limit), by monitoring the environment to which the system is coupled. Such ensembles exist for all environmental couplings that can be described by a Markovian master equation~\cite{WM10}, and different monitorings result in different `unravellings'~\cite{Car93} of the master equation into stochastic pure-state dynamics. PR ensembles thus make the ignorance interpretation meaningful at all times in the evolution of a single system, as a sufficiently skilled observer could know which state the system is in at any time, without affecting the system evolution.
Zurek's `pointer basis' concept is supposed to explain why we can regard the apparatus as `really' being in one of the pointer states, like a classical object. In other words, it appeals to an ignorance interpretation of a particular decomposition of a mixed state $\rho$ because of the interaction with the environment. But as explained above, the ignorance interpretation does not work for all ensembles; it works only for PR ensembles. It is for this reason that it was proposed in~\cite{ABJW05} that the set of candidate pointer bases should be restricted to the set of PR ensembles. Furthermore, it was shown in~\cite{ABJW05} that different PR ensembles, induced by different unravellings, differ according to the extent to which they possess certain features of classicality. One measure of classicality, which is closely related to that used by Zurek and Paz~\cite{ZHP93}, is the robustness of an unravelling-induced basis against environmental noise. This is the ability of an unravelling to generate a set of pure states $\{ \pi_k \}_k$ with the longest mixing time~\cite{ABJW05}. This is the time it takes for the mixedness (or entropy or impurity) of the initial pure state to increase to some level, on average, when the system evolves unconditionally (i.e.~according to the master equation). Thus it is this set of states that should be regarded as the pointer basis for the system.
In this paper we are concerned with applying these ideas to quantum feedback control~\cite{WM10}. This field has gained tremendous interest recently and has already been successfully applied in many experiments~\cite{SCZ11,VMS12,YHN12}. As in classical control, one needs to gain information about the system in order to design a suitable control protocol for driving the system towards a desired state. However, measurements on a quantum system will in general perturb its state while information is being extracted. This back-action of quantum measurements is a key element that sets quantum feedback protocols apart from classical ones and means that one should take additional care in the design of the in-loop measurement.
A class of open systems of special interest comprises those with linear Heisenberg equations of motion in phase space driven by Gaussian noise. We will refer to these as linear Gaussian (LG) systems. Such systems have received a lot of attention because of their mathematical simplicity and because a great deal of classical linear systems theory can be re-adapted to describe quantum systems~\cite{DHJ+00,WD05}. LG systems arise naturally in quantum optics, describing modes of the electromagnetic field, nanomechanical systems, and weakly excited ensembles of atoms~\cite{WM10}.
In this paper, we consider using measurement (an unravelling) and linear feedback control to stabilize the state of a LG system to one of the states in the unravelling-induced basis. In particular we show that when the control is strong compared to the decoherence rate (the reciprocal of the mixing time) of the unravelling-induced basis, the system state can be stabilized with a fidelity close to one. We will also show that choosing the unravelling which induces the pointer basis (as defined above) maximizes the fidelity between the actual controlled state and the target state for strong control. Furthermore, we find that even if the feedback control strength is only comparable to the decoherence rate, the optimal unravelling for this purpose still induces a basis very close to the pointer basis. However, if the feedback control is weak, this is not the case.
The rest of this paper is organized as follows. In \sref{PRPB} we formalize the idea of PR ensembles in the context of Markovian evolution by presenting the necessary and sufficient conditions for an ensemble to be PR which were originally derived in~\cite{WV01}. Here we will also define the mixing time which in turn is used to define the pointer basis. In \sref{LGQS} we review LG systems for both unconditional and conditional dynamics. An expression for the mixing time of LG systems will be derived. In \sref{CLGsys}, we add a control input to the LG system and show that it is effective for producing a pointer state. We will take the infidelity of the controlled state as the cost function for our control problem, and show that this can be approximated by a quadratic cost, thus putting our control problem into the class of linear-quadratic-Gaussian (LQG) control problems. Finally in \sref{ExampleQBM} we illustrate our theory for the example of a particle in one dimension undergoing quantum Brownian motion.
\section{Physically realizable ensembles and the pointer basis}\label{PRPB}
\subsection{Steady state dynamics and conditions for physically realizable ensembles}
In this paper we restrict our attention to master equations that describe valid quantum Markovian evolution so that the time derivative of the system state, denoted by $\dot{\rho}$, has the Lindblad form. This means that there is a Hermitian operator $\hat{H}$ and a vector operator $\hat{\bi c}$ such that
\begin{eqnarray}
\label{Lindblad}
\dot{\rho} \equiv {\mathcal L} \rho
= -i\big[\hat{H},\rho\big] + \hat{\bi c}^\top \rho \hat{\bi c}^\ddag - {1\over 2} \, \hat{\bi c}^\dagger \hat{\bi c} \;\! \rho - \frac{1}{2} \, \rho \;\! \hat{\bi c}^\dagger \hat{\bi c} \;.
\end{eqnarray}
Note that $\hat{H}$ invariably turns out to be a Hamiltonian or can be interpreted as one. We have defined $\hat{\bi c}^\ddag$ to be the column vector operator
\begin{eqnarray}
\label{TransposeDagger}
\hat{\bi c}^\ddag \equiv \big( \hat{\bi c}^\dag \big)^\top \;,
\end{eqnarray}
where $\hat{\bi c}^\dag$ is defined by transposing $\hat{\bi c}$ and then taking the Hermitian conjugate of each element~\cite{CW11a}:
\begin{eqnarray}
\hat{\bi c}^\dag \equiv \big( \hat{c}^\dag_1, \hat{c}^\dag_2, \ldots, \hat{c}^\dag_l \big) \;.
\end{eqnarray}
We have assumed $\hat{\bi c}$ to be $l \times 1$. This is equivalent to saying that the system has $l$ dissipative channels. For $l=1$ one usually refers to $\hat{c}$ as a Lindblad operator. Similarly we will call $\hat{\bi c}$ a Lindblad vector operator. We will follow the notation in appendix~A of~\cite{CW11a}, and also use the terms environment and bath interchangeably.
Lindblad evolution is, in general, entropy-increasing and will thus lead to a mixed state for the system~\cite{BP02}. Assuming, then, the existence of a steady state $\rho_{\rm ss}$, defined by
\begin{eqnarray}
\label{sss}
{\cal L}\rho_{\rm ss}= 0 \;,
\end{eqnarray}
we may write
\begin{eqnarray}
\label{SteadyStateEns}
\rho_{\rm ss} = \sum_{k} \wp_k \, \pi_{k} \;,
\end{eqnarray}
for some ensemble $\{(\wp_k,\pi_k)\}_k$ where each $\pi_k$ is a projector (i.e.~a pure state) and $\wp_k$ is the corresponding probability of finding the system in state $\pi_k$.
As explained earlier in \sref{intro}, physical realizability for an ensemble means justifying the ignorance interpretation of it for all times after the system has reached the steady state. That is, an ensemble is PR if and only if there exists an unravelling ${\sf U}$ (an environmental monitoring scheme which an experimenter can perform) that reveals the system to be in state $\pi^{\sf U}_k$ with probability $\wp_k^{\sf U}$. Note that any ensemble used to represent the system state once it has reached steady state will remain a valid representation thereafter. Thus if the PR ensemble $\{(\wp_k^{\sf U},\pi^{\sf U}_k)\}$ is to represent $\rho_{\rm ss}$ where the probabilities $\{\wp_k^{\sf U}\}$ are time-independent, then each $\wp_k^{\sf U}$ must reflect the proportion of time that the system spends in the state $\pi_k$. We therefore have a graphical depiction of the system dynamics where it is randomly jumping between the states $\pi^{\sf U}_k$ over some observation interval $\Delta t$. The probability of finding the system to be in state $\pi^{\sf U}_k$ is given by the fraction of time it spends in $\pi^{\sf U}_k$ in the limit of $\Delta t \to \infty$. This is illustrated in~\fref{PRE}. Note that this makes the system state a stationary ergodic process. We now denote the PR ensemble as $\{(\wp_k^{\sf U}, \pi_{k}^{\sf U})\}$, since it depends on the continuous measurement represented by ${\sf U}$. Surprisingly, we can determine whether an ensemble is PR purely algebraically, without ever determining the unravelling $\sf U$ that induces it~\cite{WV01}. Such a method will be employed in \sref{LGQS}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure1.pdf}
\caption{A PR ensemble $\{(\wp_k^{\sf U},\pi^{\sf U}_k)\}$ makes the system state a stationary ergodic process. That is to say the ensemble average over $k$ in \eref{SteadyStateEns} can be obtained by counting, for each value of $k$, the fraction of time the system spends in the $k^{\rm th}$ state for a \emph{single} run of the monitoring $\sf U$ over a sufficiently long period $\Delta t$. The probability of finding the system to be in a particular state, say the state with $k=\nu$ is then $\wp_\nu = \sum_{m=1}^{\lambda} t^{(\nu)}_m / \Delta t$ where for each value of $m$, $t^{(\nu)}_m$ is the amount of time the system spends in state $\pi^{\sf U}_\nu$ before making a jump to a different state as illustrated.}
\label{PRE}
\end{figure}
\subsection{Mixing time and the pointer basis}
The pointer states as defined in \cite{ABJW05} are states which constitute a PR ensemble and, roughly, decohere the slowest. Specifically, Atkins \etal proposed the mixing time $\tau_{\rm mix}$ as the quantity which attains its maximum for the pointer states. This is defined as follows. We assume that an experimenter has been monitoring the environment with some unit-efficiency unravelling ${\sf U}$ for a long (effectively infinite) time so that the conditioned system state is some pure state, $\pi^{\sf U}_k$. We label this time as the initial time and designate it by $t=0$ (see \fref{Tmix}). Note that the state so obtained belongs to some PR ensemble. The mixing time is defined as the time required on average for the purity to drop from its initial value (being 1) to a value of $1-\epsilon$ if the system were now allowed to evolve unconditionally under the master equation. Thus $\tau_{\rm mix}$ is given by the smallest solution to the equation
\begin{equation}
\label{MixingTimeDefn}
{\rm E}\Big\{ {\rm Tr}\Big[ \big\{ \exp( {\cal L} \, \tau_{\rm mix}) \, \pi^{\sf U}_{k} \, \big\}^2 \Big]\Big\}
= 1 - \epsilon \;,
\end{equation}
where ${\rm E}\{X\}$ denotes the ensemble average of $X$. Note that \eref{MixingTimeDefn} is a slightly more general definition for the mixing time than the one used in \cite{ABJW05} as $\epsilon$ in \eref{MixingTimeDefn} can be any positive number between 0 and 1. In the next section we will consider the limit of small $\epsilon$.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure2.pdf}
\caption{ Illustration of the mixing time for a particular $\pi^{\sf U}_k$ (hence the label $\tau_{\rm mix}^{(k)}$ on the time axis). The purity of the system state is denoted by $P(t)$ and we have marked $1-\epsilon$ at a finite distance away from 1 for clarity.}
\label{Tmix}
\end{figure}
\section{Linear Gaussian quantum systems}
\label{LGQS}
\subsection{Unconditional dynamics}
A LG system is defined by linear quantum stochastic differential equations driven by Gaussian quantum noise in the Heisenberg picture for (i) the system configuration $\hat{\bi x}$ in phase space, and (ii) the measurement output $\hat{\bi y}$ (also referred to as a current):
\begin{eqnarray}
\label{LinSys1}
d \hat{\bi x} & = & A \, \hat{\bi x} \, dt + E \, d \hat{\bi v}_{\rm p} \;, \\
\label{LinSys2}
\hat{\bi y}\,\!dt & = & {\sf C} \, \hat{\bi x} \, dt + d \hat{\bi v}_{\rm m} \;.
\end{eqnarray}
Here the phase-space configuration is defined as the $2n$-dimensional vector operator
\begin{equation}
\label{SysConfig}
\hat{\bi x} \equiv (\hat{q}_1,\hat{p}_1, \hat{q}_2,\hat{p}_2, \ldots,\hat{q}_n,\hat{p}_n)^\top \;.
\end{equation}
Here $\hat{\bi q} = (\hat{q}_1,\hat{q}_2, \ldots, \hat{q}_n)^\top$ and $\hat{\bi p} = (\hat{p}_1,\hat{p}_2, \ldots, \hat{p}_n)^\top$ represent the canonical position and momentum of the system, defined by
\begin{equation}
\lfloor \hat{\bi q}, \hat{\bi p} \rceil \equiv \hat{\bi q} \hat{\bi p}^\top - \big( \hat{\bi p} \hat{\bi q}^\top \big)^\top = i \hat{\rm I}_n \;,
\end{equation}
where $\hbar \equiv 1$ and $\hat{\rm I}_n$ is an $n \times n$ diagonal matrix containing identity operators. All vector operators in \eref{LinSys1} and \eref{LinSys2} are time dependent but we have suppressed the time argument, as we will continue to do except when we need to consider quantities at two or more different times. We take \eref{LinSys1} and \eref{LinSys2} to be It\^{o} stochastic differential equations with constant coefficients \cite{Jac10a}, i.e.~$A$, $E$, and ${\sf C}$ are real matrices independent of $\hat{\bi x}$ and time $t$.
The non-commutative nature of $\hat{\bi q}$ and $\hat{\bi p}$ gives rise to the Schr\"{o}dinger-Heisenberg uncertainty relation \cite{Hol11}
\begin{equation}
\label{HeiUncert}
V + \frac{i}{2} \, Z \ge 0 \;,
\end{equation}
where
\begin{equation}
\label{DefnOfZ}
Z \equiv \bigoplus^{n}_{1} \bigg(\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array} \bigg) \;,
\end{equation}
and $V$ is the covariance matrix of the system configuration, defined by
\begin{equation}
\label{CovarianceDefn}
V = {\rm Re}\big[ \;\! \big\langle (\hat{\bi x} - \langle \hat{\bi x} \rangle) (\hat{\bi x} - \langle \hat{\bi x} \rangle)^\top \big\rangle \;\! \big] \;.
\end{equation}
We are defining the real part of any complex matrix $A$ by ${\rm Re}[A]= (A+A^*)/2$.
The process noise $E \;\! \hat{\bi v}_{\rm p}$ is the unavoidable back-action from coupling the system to the environment. It is a vector operator of Hermitian quantum Wiener increments with a mean and covariance satisfying, for all time,
\begin{eqnarray}
\eqalign
\langle E \, d \hat{\bi v}_{\rm p} \rangle = {\bf 0} \;, \\
\label{ItoProcess}
{\rm Re} \big[ E \, d\hat{\bi v}_{\rm p} \, d\hat{\bi v}_{\rm p}^\top E^\top \big] \equiv D \, dt \;,
\end{eqnarray}
where for any matrix operator $\hat{\rm A}$ we have defined ${\rm Re} [ \hat{\rm A} ]=(\hat{\rm A}+\hat{\rm A}^\ddagger)/2$ and $\hat{\rm A}^\ddagger$ is defined similarly to \eref{TransposeDagger}. The quantum average is taken with respect to the initial state $\rho(0)$, i.e.~$\langle \hat{\rm A}(t) \rangle = \Tr [\hat{\rm A}(t) \rho(0)]$, since we are in the Heisenberg picture. Note that \eref{ItoProcess} involves the process noise at only one time; second-order moments of $E \;\! d \hat{\bi v}_{\rm p}$ at different times vanish, as do all higher-order moments. Similarly the measurement noise $d\hat{\bi v}_{\rm m}$ is a vector operator of Hermitian quantum Wiener increments satisfying
\begin{eqnarray}
\eqalign
\langle d\hat{\bi v}_{\rm m} \rangle = {\bf 0} \;, \\
\label{ItoMeasurement}
d\hat{\bi v}_{\rm m} \, d\hat{\bi v}_{\rm m}^\top = \hat{\rm I}_ R \, dt \;,
\end{eqnarray}
where we have assumed $d\hat{\bi v}_{\rm m}$ (and also $\hat{\bi y}$) to have $R$ components. As with $E\;\! d\hat{\bi v}_{\rm p}$, \eref{ItoMeasurement} is the only non-vanishing moment for $d\hat{\bi v}_{\rm m}$. The noise $d\hat{\bi v}_{\rm m}$ describes the intrinsic uncertainty in the measurement represented by $\hat{\bi y}$ and in general will be correlated with $E\;\! d\hat{\bi v}_{\rm p}$. We define their correlation by a constant matrix $\Gamma^\top$, i.e.
\begin{equation}
\label{Gamma}
{\rm Re}\big[ E \, d\hat{\bi v}_{\rm p} \, d\hat{\bi v}_{\rm m}^\top \big] = \Gamma^\top dt \;.
\end{equation}
For the above to describe valid quantum evolution, various inequalities relating $A$, $E$, ${\sf C}$ and $Z$ must be satisfied~\cite{WM10}.
Just as a classical Langevin equation corresponds to a Fokker-Planck equation, the quantum Langevin equation \eref{LinSys1} also corresponds to a Fokker-Planck equation for the Wigner function \cite{Sch01} of the system state. Such an evolution equation for the Wigner function can also be derived from the master equation \eref{Lindblad} \cite{Car02}:
\begin{equation}
\label{OUE_Wigner}
\dot{W}( \breve{\bi x} ) = \{- \nabla^{\top}A \breve{\bi x} +\frac{1}{2}\nabla^{\top}D\nabla \} W( \breve{\bi x} ) \;.
\end{equation}
This equation has a Gaussian function as its solution, with mean and covariance matrix obeying
%
\begin{eqnarray}
\label{x dynamics}
d{\langle \hat{\bi x} \rangle}/dt = A\langle \hat{\bi x} \rangle \\
\label{V dynamics}
d{V}/dt = A \;\! V + V \;\! A^\top + D \;.
\end{eqnarray}
We restrict to the case that $A$ is Hurwitz; that is, where the real part of each eigenvalue is negative. Then the steady state Wigner function will be a zero-mean Gaussian \cite{Ris89}
\begin{equation}
\label{Wss}
W_{\rm ss}( \breve{\bi x } ) = g( \breve{\bi x }; {\bf 0},V_{\rm ss}) \;.
\end{equation}
The notation $\breve{\bi x}$ denotes the realization of the random vector ${\bi x}$ and $g( \breve{\bi x }; {\bi \mu},V)$ denotes a Gaussian with mean ${\bi \mu}$ and covariance $V$ for ${\bi x}$. In this case $V_{\rm ss}$ is the steady-state solution to \eref{V dynamics}; that is, the unique solution of
\begin{equation}
\label{V steady}
A \;\! V + V \;\! A^\top + D = 0 \;.
\end{equation}
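Numerically, \eref{V steady} is a standard continuous-time Lyapunov equation; a minimal SciPy sketch (the example matrices below are hypothetical and merely chosen to be Hurwitz, as the text assumes) is the following.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_state_covariance(A, D):
    # Solve A V + V A^T + D = 0; SciPy solves A X + X A^H = Q, so pass Q = -D.
    return solve_continuous_lyapunov(A, -D)

# hypothetical Hurwitz drift matrix and diffusion matrix
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
D = np.eye(2)
V_ss = steady_state_covariance(A, D)
print(np.allclose(A @ V_ss + V_ss @ A.T + D, np.zeros((2, 2))))   # True
\end{verbatim}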
We saw above that a LG system is defined by the It\^{o} equation \eref{LinSys1} for $\hat{\bi x}$, the statistics of which are characterized by the matrices $A$ and $D$. However, our theory of PR ensembles in \sref{PRPB} was in the Schr\"{o}dinger picture for which the system evolution is given by the master equation \eref{Lindblad}. To apply the idea of PR ensembles to a LG system we thus need to relate $A$ and $D$ to the dynamics specified in the Schr\"{o}dinger picture by ${\cal L}$, which is in turn specified by $\hat{H}$ and $\hat{\bi c}$. One can in fact show that \eref{LinSys1} results from choosing an $\hat{H}$ and $\hat{\bi c}$ that is (respectively) quadratic and linear in $\hat{\bi x}$ \cite{WM10}, i.e.
\begin{equation}
\label{LinSysHamiltonian}
\hat{H} = \frac{1}{2} \, \hat{\bi x}^\top G \;\! \hat{\bi x} \;,
\end{equation}
for any $2n \times 2n$ real and symmetric matrix $G$, and
\begin{equation}
\label{LinSysLindbladOp}
\hat{\bi c} = \tilde{C} \, \hat{\bi x} \;,
\end{equation}
where $\tilde{C}$ is $l \times 2n$ and complex. It can then be shown that \eref{LinSysHamiltonian} and \eref{LinSysLindbladOp} leads to
\begin{eqnarray}
A & = & Z \big( G + \bar{C}^\top S \bar{C} \big) \label{feeda}; \\
D & = & Z \bar{C}^\top \bar{C} Z^\top \label{feedd},
\end{eqnarray}
where we have defined
\begin{eqnarray}
\label{SandCbar}
S = \left( \begin{array}{cc}
0 & {\rm I}_l \\
-{\rm I}_l & 0
\end{array} \right)\;, \quad \bar{C} = \left( \begin{array}{c}
{\rm Re}[\tilde{C}] \\ {\rm Im}[\tilde{C}]
\end{array} \right) \;.
\end{eqnarray}
The matrix $S$ has dimensions $2l \times 2l$, formed from $l \times l$ blocks, while $\bar{C}$ has dimensions $2l \times 2n$. These definitions will turn out to be useful later, especially in \sref{ExampleQBM}.
\subsection{Conditional dynamics in the long-time limit}
\Eref{LinSys1} describes only the dynamics of the system due to its interaction with the environment while \eref{LinSys2} describes the dynamics of some bath observable $\hat{\bi y}$ being measured. Our goal in the end is to drive the system to a particular quantum state and this is achieved most effectively if one uses the information obtained from measuring $\hat{\bi y}$. In a continuous measurement of $\hat{\bi y}$ the measurement device will output a continuous stream of numbers over a measurement time $t$. This is typically called a measurement record \cite{JS06} and is defined by
\begin{equation}
\label{MmtRecord}
{\bi y}_{[0,t)} \equiv \{ {\bi y}(\tau) \, | \, 0 \le \tau < t \} \;,
\end{equation}
where $\bi{y}(\tau)$ is the result of a measurement of $\hat{\bi y}$ at time $\tau$. In this paper we adopt feedback control in which the controlling signal depends on the ${\bi y}_{[0,t)}$ in~\eref{MmtRecord}. Here we will first explain the system evolution conditioned on knowledge of ${\bi y}_{[0,t)}$ and then from this derive the mixing time using definition \eref{MixingTimeDefn}. The inclusion of a control input in the system dynamics will be covered in~\sref{CLGsys}.
The measured current is first fed into an estimator that uses this information to estimate the system configuration continuously in time. This is often referred to as filtering and the continuous-time estimator is called a filter (see~\fref{Filter}). The performance of the filter may be measured by the mean-square error and it is well known from estimation theory that the optimal estimate is the conditional mean of $\hat{\bi x}$ \cite{KS99}, given by
\begin{equation}
\langle \hat{\bi x} \rangle_{\rm c} = \Tr \big[ \hat{\bi x} \;\! \rho_{\rm c}(t) \big] \;,
\end{equation}
where $\rho_{\rm c}(t)$ is the system state conditioned on ${\bi y}_{[0,t)}$. Such states obey stochastic differential equations that are referred to as quantum trajectories \cite{Car93,Car08} in quantum optics. For control purposes only the evolution of $\langle \hat{\bi x} \rangle_{\rm c}$ matters, and its evolution equation in this case is known as the Kalman-Bucy filter \cite{KB61}. We are ultimately interested in stabilizing the system to some quantum state which, without loss of generality, we can take to have $\langle \hat{\bi x} \rangle_{\rm c} = {\bf 0}$. That is, once the system has reached $\langle \hat{\bi x} \rangle_{\rm c} = {\bf 0}$ we would like to keep it there, ideally indefinitely for as long as the feedback loop is running. Thus it is the behaviour of the system in the long-time limit that is of interest to us and it can be shown \cite{WM10} that the Kalman-Bucy filter in this limit is given by
\begin{equation}
\label{KB1}
d\langle \hat{\bi x} \rangle_{\rm c} = A \, \langle \hat{\bi x} \rangle_{\rm c} \, dt + {\rm F}^\top \, d{\bi w} \;.
\end{equation}
Here $d{\bi w}$ is a vector of Wiener increments known as the innovation \cite{HJS08}, while ${\rm F} \equiv {\sf C} \,\Omega_{\sf U} + \Gamma$, where $\Omega_{\sf U}$ is the solution of the matrix Riccati equation
\begin{equation}
\label{KB2}
A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D = {\rm F}^\top {\rm F} \;.
\end{equation}
The matrix $\Omega_{\sf U}$ is the steady-state value of $V_{\rm c}$ [given by \eref{CovarianceDefn} with the averages taken with respect to $\rho_{\rm c}$] and depends on the measurement as indicated by its subscript. It is well known in control theory that when $A$, ${\sf C}$, $E$, and $\Gamma$ [recall \eref{LinSys1},~\eref{LinSys2}, and \eref{Gamma}] have certain properties, $\Omega_{\sf U}$ is a unique solution to \eref{KB2} and is known as a stabilizing solution~\cite{WM10}. We will assume this to be the case in the following theory. As in unconditioned evolution, the conditioned state $\rho_{\rm c}$ also has a Gaussian Wigner function. This is given by
\begin{equation}
\label{Wc}
W^{\Omega_{\sf U}}_{\bar{\bi x}}(\breve{\bi x}) = g(\breve{\bi x};\bar{\bi x},\Omega_{\sf U}) \;,
\end{equation}
where we have defined the short-hand $\bar{\bi x}=\langle \hat{\bi x} \rangle_{\rm c}$. The uniqueness of $\Omega_{\sf U}$ means that the conditional states obtained in the long-time limit will all have the same covariance but with different means evolving according to \eref{KB1}. That is, the index $k$ which labels different members of an ensemble representing $\rho_{\rm ss}$ in \eref{SteadyStateEns} is now the vector $\bar{\bi x}$, which changes (continuously) as the system makes `transitions' between different members within an ensemble. Different ensembles are labelled by different values of $\Omega_{\sf U}$. Such an ensemble is referred to as a uniform Gaussian ensemble.
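For a given unravelling the long-time conditional mean can be simulated directly from \eref{KB1}; a minimal Euler--Maruyama sketch (with the drift $A$ and the matrix ${\rm F}$ supplied as NumPy arrays; the function name, step size and seed are ours) is:
\begin{verbatim}
import numpy as np

def simulate_conditional_mean(A, Fmat, x0, dt=1e-3, steps=10000, seed=0):
    # Euler-Maruyama integration of  d<x>_c = A <x>_c dt + F^T dw   [eq. (KB1)]
    rng = np.random.default_rng(seed)
    x, traj = np.array(x0, dtype=float), []
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=Fmat.shape[0])
        x = x + (A @ x) * dt + Fmat.T @ dw
        traj.append(x.copy())
    return np.array(traj)
\end{verbatim}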
From \eref{Wss} and \eref{Wc} the ensemble representing the steady state $\rho_{\rm ss}$ of a LG system can be described in terms of Wigner functions as
\begin{equation}
\label{UniformGaussian}
W_{\rm ss}(\breve{\bi x}) = \int d\bar{\bi x} \; \wp(\bar{\bi x}) \; W^{\Omega_{\sf U}}_{\bar{\bi x}}(\breve{\bi x})
\end{equation}
where the distribution of conditional means is another Gaussian, given by
\begin{equation}
\label{P(x)}
\wp(\bar{\bi x}) = g(\bar{\bi x}; {\bf 0}, V_{\rm ss}-\Omega_{\sf U}) \;.
\end{equation}
This can be derived by using \eref{UniformGaussian} to calculate the characteristic function of $\wp(\bar{\bi x})$.
Since ${\rm F}^\top {\rm F}$ is positive semidefinite by definition, \eref{KB2} implies the linear-matrix inequality for $\Omega_{\sf U}$:
\begin{equation}
\label{PRconstraint}
A \;\! \Omega_{\sf U} + \Omega_{\sf U} \;\! A^\top + D \ge 0 \;.
\end{equation}
This constraint together with the Schr\"{o}dinger-Heisenberg relation for the conditional state [i.e.~\eref{HeiUncert} with $V$ replaced by $\Omega_{\sf U}$]
\begin{equation}
\label{QuantumConstraint}
\Omega_{\sf U} + \frac{i}{2} \, Z \ge {} 0 \;,
\end{equation}
are necessary and sufficient conditions for the uniform Gaussian ensemble~\footnote{These can be considered as the \emph{generalized coherent states}~(GCS) for the Heisenberg-Weyl group. See~\cite{KK08}} \eref{UniformGaussian} to be PR \cite{WV01,WM10}. This is the algebraic test for whether an ensemble is PR mentioned in~\sref{PRPB}.
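In code this test amounts to two positive-semidefiniteness checks; a minimal sketch (inputs as NumPy arrays, tolerance chosen arbitrarily) is:
\begin{verbatim}
import numpy as np

def is_physically_realizable(A, D, Omega, Z, tol=1e-9):
    # eq. (PRconstraint):     A Omega + Omega A^T + D >= 0   (real symmetric)
    # eq. (QuantumConstraint): Omega + (i/2) Z >= 0           (Hermitian)
    m1 = A @ Omega + Omega @ A.T + D
    m2 = Omega + 0.5j * Z
    return (np.linalg.eigvalsh(m1).min() >= -tol and
            np.linalg.eigvalsh(m2).min() >= -tol)
\end{verbatim}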
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure3.pdf}
\caption{ A filter is a continuous-time estimator which accepts ${\bi y}_{[0,t)}$ as input and produces an estimate of the system configuration as its output. If the mean-square error is used as a performance measure for the filter estimate then the conditional average of $\hat{\bi x}$ is optimal and the filter is characterized by \eref{KB1} and \eref{KB2} in the long-time limit.}
\label{Filter}
\end{figure}
\subsection{Mixing time and the pointer basis}
\label{UforPointerBasis}
As mentioned above, conditioned evolution leads to a Gaussian state with mean $\langle \hat{\bi x} \rangle_{\rm c}$ and covariance matrix $\Omega_{\sf U}$ satisfying \eref{KB1} and \eref{KB2} in the long-time limit. The purity of any Gaussian state with a $2n$-component configuration $\hat{\bi x}$ and covariance $V$ at time $t$ is given by \cite{Oli12}
\begin{equation}
\label{purity formula}
P(t) = \frac{1}{\sqrt{ {\rm det}[2V(t)]}} \;,
\end{equation}
where ${\rm det}[A]$ denotes the determinant of an arbitrary matrix $A$. The mixing time [recall \eref{MixingTimeDefn}] is thus defined by
\begin{equation}
\label{DetVtmix1}
{\rm det}\big[ 2 V(\tau_{\rm mix}) \big] = \frac{1}{(1-\epsilon)^2} \;,
\end{equation}
where $V(\tau_{\rm mix})$ is the covariance matrix of the state evolved under unconditional evolution from the initial state $\pi^{\sf U}_k$, which has covariance $V(0) = \Omega_{\sf U}\,$. We have noted in \eref{DetVtmix1} that the ensemble average in \eref{MixingTimeDefn} plays no role since (i) the purity depends only on the covariance; (ii) the different initial states obtained at $t=0$ all have the same covariance $\Omega_{\sf U}$; and (iii) the evolution of the covariance is independent of the configuration $\langle \hat{\bi x} \rangle_{\rm c}$ at all times (not just in steady-state as per \eref{KB2}).
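As an aside, the purity \eref{purity formula} of a Gaussian state is immediate to evaluate numerically; a trivial sketch is:
\begin{verbatim}
import numpy as np

def gaussian_purity(V):
    # P = 1/sqrt(det[2V]), eq. (purity formula)
    return 1.0 / np.sqrt(np.linalg.det(2.0 * V))
\end{verbatim}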
An expression for $\tau_{\rm mix}$ can be obtained in the limit $\epsilon \to 0$ by noting that in this limit $\tau_{\rm mix}$ will be small so we may Taylor expand $V(t)$ about $t=0$ to first order:
\begin{eqnarray}
\eqalign
V(\tau_{\rm mix}) & = & {} V(0) + \left.\frac{dV}{dt} \right|_{t=0} \tau_{\rm mix} \\
\label{Vtmix}
& = & {} \Omega_{\sf U} + ( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D ) \, \tau_{\rm mix} \;.
\end{eqnarray}
Note that we have used~\eref{V dynamics} in \eref{Vtmix}. Multiplying \eref{Vtmix} by $\Omega_{\sf U}^{-1}$ and taking the determinant gives
\begin{equation}
\label{DetV}
{\rm det}\big[2 V(\tau_{\rm mix})\big] = {\rm det}\big[ \, {\rm I}_{2n} + (A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D) \, \Omega_{\sf U}^{-1} \;\! \tau_{\rm mix} \big] \;,
\end{equation}
where we have noted that the initial state is pure so ${\rm det}\big[2\Omega_{\sf U}\big] = 1$. For any $n \times n$ matrix $X$ and scalar $\varepsilon$ one can show that
\begin{equation}
\label{DetId}
{\rm det}\big[ \,{\rm I}_{n} + \varepsilon X \big] \approx 1 + \tr [\,\varepsilon X] \;,
\end{equation}
for $\varepsilon \to 0\,$. Therefore \eref{DetV} becomes
\begin{equation}
\label{DetVtmix2}
{\rm det}\big[2V(\tau_{\rm mix})\big] = 1 + \omega \, \tau_{\rm mix} \;,
\end{equation}
where we have defined for ease of writing
\begin{equation}
\label{omegaDefn}
\omega (\Omega_{\sf U}) \equiv 2 \, {\rm tr}\big[A\big] + {\rm tr}\big[D\,\Omega_{\sf U}^{-1}\big] \;.
\end{equation}
Substituting \eref{DetVtmix2} back into \eref{DetVtmix1} and solving for $\tau_{\rm mix}$ we arrive at
\begin{equation}
\label{MixingTime}
\tau_{\rm mix} \approx \frac{2 \epsilon}{\omega} \;.
\end{equation}
From this expression we see that to maximize the mixing time one should minimize $\omega$. From the definition \eref{omegaDefn} this means that (given $A$ and $D$) $\Omega_{\sf U}$ should be chosen to minimize $\tr \big[D\,\Omega_{\sf U}^{-1}\big]$ subject to the constraints \eref{PRconstraint} and \eref{QuantumConstraint}. Since $\Omega_{\sf U}$ depends on the unravelling $\sf U$, once the $\omega$-minimizing $\Omega_{\sf U}$ is found, call it $\Omega_{\sf U}^\star$, we can then find the unravelling that generates $\Omega_{\sf U}^\star$ by a simple relation \cite{WD05}. The set of pure states that can be obtained by such a measurement therefore forms the pointer basis. In the following we will denote the longest mixing time that is PR by $\tau_{\rm mix}^\star$, formally defined by
\begin{equation}
\label{tmixStar}
\tau_{\rm mix}^\star \equiv \frac{2\epsilon}
{\underset{\Omega_{\sf U}}{\rm min} \; \omega(\Omega_{\sf U})}
\end{equation}
subject to
\begin{eqnarray}
\label{PRCons}
\eqalign
A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \ge 0 \;, \\
\label{QuantumCons}
\Omega_{\sf U} + \frac{i}{2} \, Z \ge 0 \;,
\end{eqnarray}
where we have repeated \eref{PRconstraint} and \eref{QuantumConstraint} for convenience. We will denote other quantities associated with $\tau_{\rm mix}^\star$ also with a star superscript; in particular,
\begin{equation}
\label{PointerBasis}
\Omega_{\sf U}^\star \equiv \underset{\Omega_{\sf U}}{\rm arg\;min} \left\{ \big[\omega(\Omega_{\sf U})\big] \right\} \;,
\end{equation}
[\;still subject to \eref{PRCons} and \eref{QuantumCons} of course\;] and ${\sf U}^\star$ for the unravelling that realizes the pointer basis. \Eref{PointerBasis} now defines the pointer basis of the system under continuous observation which has a decoherence rate characterized by $1/\tau_{\rm mix}^\star\,$. We will illustrate the use of \eref{tmixStar}--\eref{PointerBasis} in~\sref{ExampleQBM} with the example of quantum Brownian motion.
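Given $A$, $D$ and a candidate conditioned covariance $\Omega_{\sf U}$ (as NumPy arrays), the small-$\epsilon$ mixing time follows directly from \eref{omegaDefn} and \eref{MixingTime}; finding $\Omega_{\sf U}^\star$ then amounts to maximizing this quantity over the PR covariances, as we do numerically in \sref{ExampleQBM}. A minimal sketch (function name ours) is:
\begin{verbatim}
import numpy as np

def mixing_time(A, D, Omega, eps):
    # tau_mix ~ 2*eps/omega  with  omega = 2 tr[A] + tr[D Omega^{-1}]
    omega = 2.0 * np.trace(A) + np.trace(D @ np.linalg.inv(Omega))
    return 2.0 * eps / omega
\end{verbatim}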
\section{Controlled linear Gaussian quantum systems}
\label{CLGsys}
We have said above that for LG systems the unconditioned steady state in phase space is a uniform Gaussian ensemble, where uniformity refers to the fact that each member of the ensemble has the same covariance matrix given by $\Omega_{\sf U}$. Of the different ensembles the one with $\Omega_{\sf U}^\star$ identifies the pointer basis and the unravelling ${\sf U}^\star$ that induces it. All that is left to do to put the system into a specific pointer state is to steer the mean of the system configuration $\langle \hat{\bi x} \rangle_{\rm c}$ (or, in other words, the centroid of the Wigner distribution in phase space) towards a particular point, say $\langle \hat{\bi x} \rangle_{\rm c} = {\bi a}$. This requires feedback control, described by adding a control input ${\bi u}(t)$ that depends on the measurement record ${\bi y}_{[0,t)}$ as shown in~\fref{FeedbackLoop}.
For simplicity we will define our target state to be at the origin of the phase space, i.e.~${\bi a} = {\bf 0}$. Choosing the phase-space origin will simplify our analysis for a system whose uncontrolled Wigner function does not have a systematic drift away from the origin. This is beneficial for a feedback that is designed to drive the system towards ${\bi a}={\bf{0}}$ simply because the uncontrolled drift does not act against the feedback. In this case one only has to mitigate the effects of diffusion, a process which leads to a greater uncertainty about the system configuration. As this increase in uncertainty can be quantified by the mixing time the effect of the feedback can be characterized by comparing the control strength to $\tau_{\rm mix}^\star$. This is illustrated in~\sref{ExampleQBM} using the example of quantum Brownian motion.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure4.pdf}
\caption{ Feedback loop.}
\label{FeedbackLoop}
\end{figure}
\subsection{Adding feedback}
To steer the system towards the origin in phase space we apply a classical input proportional to $\langle \hat{\bi x} \rangle_{\rm c}\,$ (effected by the actuator in~\fref{FeedbackLoop})
\begin{equation}
\label{ControlInput}
{\bi u}(t) = - K \, \langle \hat{\bi x} \rangle_{\rm c}(t) \;,
\end{equation}
where $K$ is a constant matrix, which we take to be
\begin{equation}
\label{FeedbackLG}
K = \frac{k \epsilon}{\tmix^\star} \: {\rm I}_{2n} \;.
\end{equation}
Here $k \ge 0$ is a dimensionless parameter which measures the feedback strength relative to the decoherence. This feedback scheme is similar to a special type of optimal control known as linear-quadratic-Gaussian (LQG) control. In fact, we show in appendix C that our feedback scheme is equivalent to a limiting case of LQG control.
The long-time conditional dynamics of the system can thus be written as
\begin{equation}
\label{KBcontrol}
d\langle \hat{\bi x} \rangle_{\rm c} = N \langle \hat{\bi x} \rangle_{\rm c} \, dt + {\rm F}^\top \, d{\bi w}
\end{equation}
where,
\begin{equation}
\label{DefnN}
N \equiv A - K = A -\frac{k \epsilon}{\tmix^\star} \; {\rm I}_{2n} \;,
\end{equation}
while the equation for the covariance remains unchanged, still given by \eref{KB2}. The control input thus changes only the mean of $\hat{\bi x}$. One can derive from \eref{KBcontrol} the identity
\begin{equation}
\label{RelationM1}
N \, M + M N^\top + {\rm F}^\top {\rm F} = 0 \;,
\end{equation}
where, as long as $N$ is negative definite,
\begin{equation}
M \equiv {\rm E_{ss}}\big[ \langle \hat{\bi x} \rangle_{\rm c} \langle \hat{\bi x} \rangle_{\rm c}^\top \big] \;,
\end{equation}
with ${\rm E}_{\rm ss}[X]$ denoting the ensemble average of $X$ in the long-time limit (or ``steady state'' \footnote{When referring to $\langle \hat{\bi x} \rangle_{\rm c}$ we prefer the term long-time limit as opposed to steady state for the $t \to \infty$ limit since in this limit $\langle \hat{\bi x} \rangle_{\rm c}$ still follows a jiggly motion and is not constant as steady state would imply.}). It thus follows that the unconditioned steady state variance matrix in the presence of the feedback is given by
\begin{equation}
\label{RelationM2}
V_{\rm ss} = \Omega_{\sf U} + M \;.
\end{equation}
Relations \eref{RelationM1} and \eref{RelationM2} are useful for calculating the fidelity \cite{NC10,Uhl76} between the controlled state and the target state in the long-time limit.
\subsection{Performance of feedback}
\label{s42}
We take the fidelity between the target state and the state under control to be our performance measure for the feedback loop. The target state has the Wigner function
\begin{equation}
\label{TargetWigner}
W_\odot(\breve{\bi x}) = g(\breve{\bi x};{\bf 0},\Omega_{\sf U}) \;,
\end{equation}
while the controlled state is given by
\begin{equation}
\label{ControlledWigner}
W_{\rm ss}(\breve{\bi x}) = g(\breve{\bi x};{\bf 0},V_{\rm ss}) \;.
\end{equation}
The fidelity between states defined by \eref{TargetWigner} and \eref{ControlledWigner} can be shown to be
\begin{equation}
\label{Fidelity}
F = \frac{1}{\sqrt{{\rm det}\big[ V_{\rm ss} + \Omega_{\sf U} \big]}} \;.
\end{equation}
To calculate the determinant in the denominator we note that \eref{RelationM2} gives
\begin{equation}
\label{FidelityDenom}
\big( V_{\rm ss} + \Omega_{\sf U} \big) \, (2\Omega_{\sf U})^{-1} = {\rm I}_{2n} + M \, (2\Omega_{\sf U})^{-1}.
\end{equation}
Noting also that $\det[2\Omega_{\sf U}] = 1$, we thus have:
\begin{equation}
\label{det[Vss+qss]}
{\rm det}\big[V_{\rm ss} + \Omega_{\sf U} \big] = {\rm det}\big[ \,{\rm I}_{2n} + M \Omega_{\sf U}^{-1}/2 \big] \;.
\end{equation}
To simplify this further we need an expression for $M$, which can be derived by using \eref{RelationM1}. Substituting \eref{KB2} and \eref{DefnN} into \eref{RelationM1} we arrive at
\begin{equation}
\label{EqnForM}
M = \frac{\tau_{\rm mix}^\star}{2k \epsilon} \; \big( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \big) + \mathcal{O}((\tau_{\rm mix}^\star)^2) \;.
\end{equation}
Because $\tau_{\rm mix} \sim \epsilon$ for $\epsilon \ll1$, we may discard second-order terms in $\tau_{\rm mix}^\star$ in \eref{EqnForM} to get
\begin{equation}
\label{M(tmix)}
M \approx \frac{\tau_{\rm mix}^\star}{2k\epsilon} \: \big( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \big) \;.
\end{equation}
For strong control ($k \gg 1$), the determinant in \eref{det[Vss+qss]} can then be approximated by an expansion in $M$ to first order. Using \eref{DetId} this gives
\begin{equation}
\label{FidelityDet2}
\det \big[V_{\rm ss} + \Omega_{\sf U} \big] \approx 1 + \frac{1}{2} \, \tr\big[ M \Omega_{\sf U}^{-1} \big] \;.
\end{equation}
Multiplying \eref{M(tmix)} by $\Omega_{\sf U}^{-1}$ on the right and taking the trace we get
\begin{equation}
\label{TraceMOmegainv}
\tr \big[ M \Omega_{\sf U}^{-1} \big] \approx \frac{\omega(\Omega_{\sf U})}{2k \epsilon} \: \tau_{\rm mix}^\star = \frac{\tau_{\rm mix}^\star}{k\tau_{\rm mix}} \;.
\end{equation}
Substituting \eref{TraceMOmegainv} into \eref{FidelityDet2} and the resulting expression into the fidelity \eref{Fidelity} we find
\begin{equation}
F \approx 1 - \frac{1}{4k}\frac{\tau_{\rm mix}^\star}{\tau_{\rm mix}} \;.
\end{equation}
That is, the fidelity is close to one for $k$ large (i.e. strong control) as expected. One can also calculate the purity of the feedback-controlled steady state. An expression for this can be obtained from \eref{Fidelity} by replacing $\Omega_{\sf U}$ by $V_{\rm ss}$. Then following essentially the same method as for the fidelity calculation we find that it is given by
\begin{equation}
P \approx 1 - \frac{1}{2k}\frac{\tau_{\rm mix}^\star}{\tau_{\rm mix}} \;.
\end{equation}
In both cases we see that the best performance is achieved when $\tau_{\rm mix} = \tau_{\rm mix}^\star$. That is, when the unravelling generating the most robust ensemble is used. This demonstrates the link between the pointer basis and feedback control for LG quantum systems.
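The first-order approximations above can also be bypassed numerically: given $A$, ${\rm F}$, $\Omega_{\sf U}$ and the gain $K$ (as NumPy arrays, with $N=A-K$ Hurwitz), one can solve \eref{RelationM1} for $M$ exactly and evaluate \eref{Fidelity} directly. A sketch of such a check is:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def controlled_fidelity(A, Fmat, Omega, K):
    # Solve N M + M N^T + F^T F = 0  [eq. (RelationM1)] with N = A - K,
    # form V_ss = Omega + M          [eq. (RelationM2)],
    # and return 1/sqrt(det[V_ss + Omega])  [eq. (Fidelity)].
    N = A - K
    M = solve_continuous_lyapunov(N, -(Fmat.T @ Fmat))
    V_ss = Omega + M
    return 1.0 / np.sqrt(np.linalg.det(V_ss + Omega))
\end{verbatim}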
\section{Example: quantum Brownian motion}
\label{ExampleQBM}
We now illustrate the theory of \sref{LGQS} and \sref{CLGsys} with the example of a particle in an environment with temperature $T$ undergoing quantum Brownian motion in one dimension in the high temperature limit.
This limit means $k_{\rm B} T \gg \hbar \gamma$ where $k_{\rm B}$ is Boltzmann's constant
and $\gamma$ is the momentum damping rate. In this limit we can use a Lindblad-form master equation as per \eref{Lindblad} to describe the Brownian motion~\cite{DIO93,AB03},
with one dissipative channel (i.e.~$l=1$):
\begin{equation}
\label{L2}
\dot{\rho} = {\cal L} \rho
= - i [\hat{H},\rho] + \hat{c} \rho \hat{c}^\dagger
- \frac{1}{2} \hat{c}^\dagger \hat{c} \rho - \frac{1}{2} \rho \hat{c}^\dagger \hat{c} \;,
\end{equation}
where
\begin{equation}
\hat{H} = \frac{\hat{p}^2}{2} + \frac{1}{2} \big( \hat{q} \hat{p} + \hat{p} \hat{q} \big) \;,
\label{LindbladOperatorQBM}
\quad \hat{c} = \sqrt{2T} \hat{q} + \frac{i}{\sqrt{8T}} \; \hat{p} \;.
\end{equation}
We are using scaled units such that the damping rate, particle mass, Boltzmann constant, and $\hbar$ are all unity.
The above master equation could also describe a driven and damped single-mode field in an optical cavity with a particular type of optical nonlinearity. In this case $\hat{c}$ is the effective annihilation operator for the field fluctuations (about some mean coherent amplitude). That is, the position $\hat{q}$ and momentum operators $\hat{p}$ of the particle translate into the quadratures of the field mode, with suitable scaling (which depends on the model temperature $T$). This interpretation of the master equation allows the unravellings we discuss below to be easily interpreted: they correspond to homodyne measurement of the cavity output with different local oscillator phases.
Comparing \eref{LindbladOperatorQBM} with \eref{LinSysHamiltonian} and \eref{SandCbar}, we see that $\hat{H}$ and $\hat{c}$ can be written, respectively, as a quadratic and linear function of a two-dimensional configuration defined by
\begin{equation}
\label{2by2Z}
\hat{\bi x} = \left( \begin{array}{c}
\hat{q} \\ \hat{p}
\end{array} \right) \;, \quad Z = \left( \begin{array}{cc}
0 & 1 \\ -1 & 0
\end{array} \right) \;.
\end{equation}
The matrices $G$ and $\tilde{C}$ in this case are given by
\begin{equation}
\label{2by2c}
G = \left( \begin{array}{cc}
0 & 1 \\ 1 & 1
\end{array} \right) \;, \quad \tilde{C} = \left( \begin{array}{cc}
\sqrt{2T} & i/\sqrt{8T}
\end{array} \right) \;.
\end{equation}
These can then be used to characterize the unconditional dynamics in terms of the drift and diffusion matrices given in \eref{feeda} and \eref{feedd} which are easily shown to be
\begin{equation}
\label{2by2ad}
A = \left( \begin{array}{cc}
0 & 1 \\ 0 & -1
\end{array} \right) \;, \quad D = \left( \begin{array}{cc}
1/8T & 0 \\ 0 & 2T
\end{array} \right) \;.
\end{equation}
\subsection{Measurement}
\label{ExampleQBM1}
The theory of PR ensembles, and in particular the realization of a pointer basis by continuous measurement as explained in~\sref{UforPointerBasis} can be applied to the above quantum Brownian motion master equation.
Recall that for LG systems with an efficient fixed measurement, the PR ensembles are uniform Gaussian ensembles of pure states, uniform in the sense that every member of the ensemble is characterized by the same covariance matrix $\Omega_{\sf U}$. We showed that for such an ensemble to be a pointer basis, $\Omega_{\sf U}^\star$ must be the solution to the constrained optimization problem defined by \eref{PRCons}--\eref{PointerBasis}. To find $\Omega^\star_{\sf U}$ let us first write $\Omega_{\sf U}$ as
\begin{equation}
\label{icm}
\Omega_{\sf U} = \frac{1}{4} \left(
\begin{array}{cc}
\alpha & \beta \\ \beta & \gamma
\end{array}
\right)\;,
\end{equation}
which should satisfy the two linear matrix inequalities \eref{PRCons} and \eref{QuantumCons}. The second of these (the Schr\"{o}dinger-Heisenberg uncertainty relation) is saturated for pure states, and this allows us to write $\alpha$ in terms of $\beta$ and $\gamma$: $\alpha = (\beta^2 +4)/\gamma$. Then from the constraint \eref{PRCons}, we have:
\begin{equation}
\label{QBMPRa}
\left( \frac{1}{8T}+\frac{\beta}{2} \right) \left( 2T-\frac{\gamma}{2} \right) - \frac{(\gamma-\beta)^2}{16} \geq 0 \;.
\end{equation}
In the case of $T \gg 1$, a simple calculation from \eref{QBMPRa} shows that the allowed solutions are restricted to $\gamma \in [0,4T)$ and $\beta \in [0,16T]$ (with the maximum range of $\beta$ being when $\gamma = 0$). The PR region is a convex shape in $\beta$-$\gamma$ space as plotted in~\fref{pr}.
\begin{figure}[htbp]
\centering
\subfloat[]{
\label{pr}
\includegraphics[width=0.5\linewidth]{figure5a.pdf}
}
\hspace{1pt}
\subfloat[]{
\label{mix}
\includegraphics[width=0.4\linewidth]{figure5b.pdf}
}
\caption{ Physically realizable region and mixing time for $T$=100. (a) The PR region defined by \eref{QBMPRa} (shaded area). For $T \gg 1$ we find that $0 \leq \gamma \leq 4T$ and $0 \leq \beta \leq 16T$ as can be seen from the plot. (b) The mixing time over the PR region when $\epsilon = 0.1$. It can be seen that the longest mixing time is for ensembles on the boundary of the PR region. That is, the pointer basis lies on the boundary. These plots remain qualitatively the same for all large $T$.}
\end{figure}
Now the definition \eref{DetVtmix1} of the mixing time $\tau_{\rm mix}$ is connected with $\beta$ and $\gamma$ by an implicit function (see appendix A):
\begin{equation}
\det [2V(\tau_{\rm mix}, \beta, \gamma)] = 1/(1-\epsilon)^2 \;.
\end{equation}
Searching over the PR region, we can find the longest mixing time $\tau_{\rm mix}^\star$, at the point $(\beta^\star,\gamma^\star)$. This point corresponds to $\Omega_{\sf U}^\star$, from which we can derive the optimal unravelling matrix ${\sf U}^\star$. It can be shown analytically (appendix A) that $(\beta^\star, \gamma^\star)$ always lies on the boundary of the PR region. Such conditioned states are generated by extremal unravellings ${\sf U}$. Physically (in the language of quantum optics) this corresponds to homodyne detection. Although we will not do so, a relation between ${\sf C}$ in \eref{LinSys2} and ${\sf U}^\star$ may be used to show that ${\sf U}^\star$ does indeed always correspond to homodyne measurement.
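To illustrate how such a search can be carried out numerically, the following short Python sketch (ours, not part of this paper; it assumes that $V(t)$ in \eref{DetVtmix1} is the unconditioned covariance obeying the Lyapunov equation $\dot{V} = AV + VA^{\rm T} + D$ with $V(0)=\Omega_{\sf U}$, and uses the drift and diffusion matrices of \eref{2by2ad}) estimates $\tau_{\rm mix}$ for a point on the boundary of the PR region and scans over $\beta$:
\begin{verbatim}
import numpy as np

def boundary_gamma(beta, T):
    # gamma saturating the PR boundary constraint for a given beta (largest root)
    a1 = 1.0 / (8.0 * T) + beta / 2.0
    roots = np.roots([1.0, 8.0 * a1 - 2.0 * beta, beta**2 - 32.0 * T * a1])
    return float(max(roots.real))

def mixing_time(beta, gamma, T=100.0, eps=0.1, dt=1e-4, t_max=50.0):
    # Sketch: evolve the *unconditioned* covariance from the pure conditioned
    # state Omega_U (an assumption about the meaning of V in the text) and
    # report the first time det[2 V] reaches 1/(1 - eps)^2.
    A = np.array([[0.0, 1.0], [0.0, -1.0]])
    D = np.array([[1.0 / (8.0 * T), 0.0], [0.0, 2.0 * T]])
    alpha = (beta**2 + 4.0) / gamma            # purity of the conditioned state
    V = 0.25 * np.array([[alpha, beta], [beta, gamma]])
    target, t = 1.0 / (1.0 - eps) ** 2, 0.0
    while t < t_max:
        if np.linalg.det(2.0 * V) >= target:
            return t
        V = V + dt * (A @ V + V @ A.T + D)     # explicit Euler step
        t += dt
    return np.inf

betas = np.linspace(0.2, 3.0, 15)
taus = [mixing_time(b, boundary_gamma(b, 100.0)) for b in betas]
print("beta* ~", betas[int(np.argmax(taus))])
\end{verbatim}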
A similar conclusion was reached in \cite{ABJW05}, but for measurements that maximize the survival time $\tau_{\rm sur}$, and based only on numerics. Note that the survival time (see appendix B) is more general than the mixing time in the sense that it captures any deviation of an unconditionally evolved state from the initially conditioned pure state, not just its decrease in purity. This means that typically $\tau_{\rm sur} \le \tau_{\rm mix}$. We show analytically in appendix B that $\tau_{\rm sur}$ is always maximized by PR ensembles that lie on the boundary of the PR region. This result thus rigorously justifies the claim of~\cite{ABJW05}, and it is not surprising to find that such ensembles maximize $\tau_{\rm mix}$ as well (appendix A).
We can see from \fref{btt} (b) and (d) that $\tau_{\rm mix}^\star$ decreases monotonically as a function of temperature. Physically this is because a finite-temperature environment tends to introduce thermal fluctuations into the system, making it more mixed. By considering $T$ in the range of $10^2$ to $10^4$ we derive numerically a power law for $\tau_{\rm mix}^\star$; see~\fref{btt}. The fits are given in~\tref{t1}, and to a good approximation we have $\tau_{\rm mix}^\star \sim T^{-1/2}$. Of course this power law will not hold for $T$ small, but in that regime the high-temperature approximation made in deriving \eref{Lindblad} breaks down. \Fref{btt} also shows that $\beta^\star \approx 1$ is independent of $T$. From the equation for the boundary, it follows that $\gamma^\star \approx 4\sqrt{T}$.
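To make the last step explicit (a rough leading-order estimate, assuming only that $T \gg 1$ and that $\beta^\star$ is of order unity), saturating \eref{QBMPRa} balances the two dominant terms,
\begin{equation}
\left( \frac{1}{8T}+\frac{\beta^\star}{2} \right) \left( 2T-\frac{\gamma^\star}{2} \right) = \frac{(\gamma^\star-\beta^\star)^2}{16}
\;\Longrightarrow\;
\beta^\star T \approx \frac{(\gamma^\star)^2}{16}
\;\Longrightarrow\;
\gamma^\star \approx 4\sqrt{\beta^\star T} \approx 4\sqrt{T} \;,
\end{equation}
consistent with the scaling quoted above.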
\begin{figure}[htbp]
\includegraphics[width=1.\linewidth]{figure6.pdf}
\caption{ (a), (c): $\beta^\star$ as a function of $T$ for $\epsilon$=0.1 and $\epsilon$=0.2 respectively. (b), (d): log-log plot of $\tau_{\rm mix}^\star$ as a function of $T$ for $\epsilon$=0.1 and $\epsilon$=0.2 respectively.}
\label{btt}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|cc|cl}
\hline
&\multicolumn{2}{c}{Value} & \multicolumn{2}{c}{Standard Error} \\
\cline{2-3}
\cline{4-5}
$\epsilon$ & $a$ & $b$ & $a$ & \multicolumn{1}{c}{$b$} \\
\hline
0.1 & -1.02297 & -0.50913 & 0.00325 & 9.63577$\times 10^{-4}$ \\
0.2 & -0.68442 & -0.50959 & 0.00321 & 9.53113$\times 10^{-4}$ \\
\hline
\end{tabular}
\caption{Fitting results for~\fref{btt} (b) and (d). The fit is given by ${\rm log}\: \tau_{\rm mix}^\star = b\,{\rm log}\:T +a$. }
\label{t1}
\end{table}
\subsection{Measurement and feedback}
Having fixed our measurement scheme, we are now in a position to stabilize the system to a state in phase space prescribed by the Wigner function $W_{\odot}(\breve{\bi x})=g(\breve{\bi x}; {\bf 0}, \Omega_{\sf U})$. To do so we simply close the feedback loop by adding a control signal in the form of \eref{ControlInput} and \eref{FeedbackLG}:
\begin{equation}
{\bi u}(t)=-\frac{k \epsilon}{\tau_{\rm mix}^{\star}}\langle \hat{\bi x}(t)\rangle_{\rm c} \; ,
\end{equation}
where $k \geq 0$ is a dimensionless parameter determining the strength of control. Under controlled dynamics the drift matrix thus changes from $A$ [specified in \eref{2by2ad}] to $N$ [recall \eref{DefnN}] given by
\begin{equation}
N = \bigg(
\begin{array}{cc}
-k \epsilon/\tau_{\rm mix}^\star & 1 \\
0 & -(1+k \epsilon/\tau_{\rm mix}^\star)
\end{array}
\bigg) \;.
\end{equation}
This is an upper-triangular matrix so its eigenvalues $\lambda(N)$ may be read off from the diagonal entries:
\begin{equation}
\lambda(N) = \big\{ -k\epsilon/\tau_{\rm mix}^\star , \, -(1+k\epsilon/\tau_{\rm mix}^\star) \big\} \;.
\end{equation}
Since $k\epsilon$ and $\tau_{\rm mix}^\star$ are both greater than zero, $N$ is negative definite (or in the language of control theory, `strictly stable', or `Hurwitz stable') and the conditional steady-state dynamics described by \eref{KBcontrol} will indeed be stabilized to a state with zero mean in the phase-space variables. Note that the uncontrolled dynamics has a drift matrix with eigenvalues given by
\begin{equation}
\lambda(A) = \big\{ 0, \, -1\big\} \;,
\end{equation}
showing that quantum Brownian motion by itself is only ``marginally stable'' (i.e.~the system configuration will not converge unconditionally to zero owing to the zero eigenvalue). Physically this is because nothing prevents the position of the Brownian particle from diffusing away to infinity. This illustrates a ``stabilizing effect'' of the feedback loop that would not otherwise appear.
One may expect that the state of the quantum Brownian particle can be stabilized to the target pointer state \eref{TargetWigner} when the strength of feedback is much greater than the decoherence rate $1/\tau_{\rm mix}^\star$. However, here we show that the system state can be stabilized to \eref{TargetWigner} very well even when the feedback strength is only comparable to the decoherence rate. This, and the effects of varying $\epsilon$, the environment temperature $T$, and $k$ on the performance of control, are depicted in~\fref{fb1}, which we now explain.
In~\fref{fb1} we plot the infidelity and the mixing time for $(\beta,\gamma)$ points that saturate the PR constraint \eref{QBMPRa}, as a function of $\beta$. We do not consider values of $\beta$ and $\gamma$ interior to the PR region as we have already shown that the $\Omega_{\sf U}^\star$ which generates the pointer basis will lie on the boundary.
In~\fref{fb1} (a) we set the feedback strength to be comparable to the decoherence rate (corresponding to $k=10$) at a fixed temperature ($T=1000$). We see from the blue curve in~\fref{fb1}~(a) that the infidelity achieves a minimum close to zero. We also see that our pointer-basis-inducing measurement determined above is indeed optimal for our control objective by observing that the mixing time and the infidelity reach their maximum and minimum, respectively, for the same value of $\beta$, namely $\beta^\star$.
To see the effect of the environment temperature we increase $T$ from $1000$ to $5000$ but keep everything else constant. This is shown in~\fref{fb1} (b). As explained previously in~\sref{ExampleQBM1}, an environment at a larger temperature has a stronger decohering effect on the system, and this is seen as the decrease in the mixing time for all values of $\beta$. However, the infidelity, and in particular its minimum value corresponding to $\beta^\star$, has not changed much. This is as expected, since the strength of the feedback is defined relative to the decoherence rate.
Using again~\fref{fb1} (a) for reference we show in~\fref{fb1} (c) the effect of having a larger $\epsilon$ and a smaller $k$ (with $k\epsilon$ fixed). Quantitatively the curves for infidelity and mixing time change, as expected, but qualitatively they are very similar. In particular, the optimal ensemble is at almost the same point for the minimal infidelity, and the value of $\beta^\star$ is little different from that in~\fref{fb1} (a). This is the case even though $\epsilon = 0.2$ barely qualifies as small, and so we would expect some deviations from small $\epsilon$ results obtained in~\sref{s42}.
Finally in~\fref{fb1} (d) we show the effect of increasing the feedback strength by keeping $\epsilon$ and $T$ the same as those in~\fref{fb1} (c) but changing $k$ from 5 back up to 10. As expected, this improves the infidelity (i.e.~makes it lower for all $\beta$) while the mixing time remains unchanged compared to that in (c), since it depends only on $\epsilon$ and $T$. We can also compare~\fref{fb1} (d) to (a), which illustrates how the infidelity curve in (d) is restored to one similar to that in (a), as expected because they use the same feedback strength $k$.
In~\fref{fb2}, we push even further into the regimes where $\epsilon$ is not small, and $k$ is not large. In~\fref{fb2} (a), we choose $\epsilon = 0.5$, and find that the ensemble ($\beta^\star, \gamma^\star$) with the longest mixing time for this threshold of impurity---recall \eref{MixingTimeDefn}---is significantly different from that found with $\epsilon$ small. In the same figure we plot the infidelity of the controlled state with the target state, with $k=10$ and $k=2$. The former (green) gives a minimum infidelity comparable with those with $k=10$ in~\fref{fb1}, and at a similar value of $\beta$. This value of $\beta$ thus differs from the $\beta^\star$ found via maximizing the mixing time.
This is not surprising as we expect them to be the same only for $\epsilon$ small. The two are closer together, however, for $k=2$ (blue), for which the performance of the feedback is quite poor, as expected. Keeping $k=2$ but restoring $\epsilon$ to a small value of $0.1$ gives somewhat better performance by the feedback control, as shown in~\fref{fb2} (b).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{figure7.pdf}
\caption{ Feedback simulation results. Mixing time (red dashed curve) and infidelity (blue curve) for one-dimensional quantum Brownian motion as a function of $\beta$ for (a) $\epsilon = 0.1$, $T=1000$, $k =10$; (b) $\epsilon = 0.1$, $T=5000$, $k=10$; (c) $\epsilon = 0.2$, $T=1000$, $k =5$; and (d) $\epsilon = 0.2$, $T=1000$, $k = 10$. In summary, the effects of changing $T$, $\epsilon$, and $k$ are respectively illustrated in passing from (a) to (b); (a) to (c); and (c) to (d). The left axis corresponds to the infidelity and the right axis to the mixing time. The red dot and the blue dot correspond to the maximum mixing time (which also corresponds to the $\beta^\star$ point) and the minimal infidelity, respectively. See the main text for an explanation of these plots.}
\label{fb1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{figure8.pdf}
\caption{ Feedback simulation results. Mixing time (red dashed curve) and infidelity (blue and green curves) for one-dimensional quantum Brownian motion as a function of $\beta$ for (a) $\epsilon = 0.5$, $T=1000$, $k = 2$ for the blue curve and $k = 10 $ for the green curve; (b) $\epsilon = 0.1$, $T=1000$, $k=2$. The left axis corresponds to the infidelity and the right axis to the mixing time. The red dot and the blue (green) dot correspond to the maximum mixing time (which also corresponds to the $\beta^\star$ point) and the minimal infidelity, respectively.}
\label{fb2}
\end{figure}
\section{Conclusion}
We have shown a connection between two hitherto unrelated topics: pointer states and quantum feedback control. While pointer states first appeared in the quantum foundations literature in the early 1980s, the advent of quantum information has since extended this interest in pointer states, and more generally in decoherence, into the realm of practical quantum computing \cite{HK97,CMdMFD01,KDV11}. Some of these studies, such as~\cite{CMdMFD01,KDV11}, have used pointer-state engineering as a means of resisting decoherence, but neither work uses feedback \footnote{Note that feedback has been used to protect quantum systems from decoherence, as in~\cite{GTV96,HK97}, but not specifically to produce pointer states.}.
Here we have shown that pointer states, as rigorously defined by us, are those states which are most easily attainable, with high fidelity, as target states in quantum linear Gaussian systems. By ``most easily attainable'' we mean with the minimum feedback strength. While we obtained general analytical results in certain limits, our numerical results for a particular system (quantum Brownian motion) show that our conclusions still hold approximately in a much wider parameter regime. Our work shows how the concept of pointer states has applications outside the realm of quantum foundations, and could aid in the design of feedback loops for quantum LG systems by suggesting the optimal monitoring scheme.
\ack
This research is supported by the ARC Centre of Excellence grant CE110001027. AC acknowledges support from the National Research Foundation and Ministry of Education in Singapore.
\usepackage[T1]{fontenc}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{hyperref}
\usepackage{tikz}
\usetikzlibrary{myautomata}
\usetikzlibrary{decorations.pathreplacing,calc}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{multicol}
\usepackage{accsupp}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage[vlined]{algorithm2e}
\providecommand*{\napprox}{%
\BeginAccSupp{method=hex,unicode,ActualText=2249}%
\not\approx
\EndAccSupp{}%
}
\newcommand{\mathbin{\approx}}{\mathbin{\approx}}
\newcommand{\mathbin{\napprox}}{\mathbin{\napprox}}
\newcommand{\mathbin{\sqsubseteq}}{\mathbin{\sqsubseteq}}
\newcommand{\tikzmark}[1]{\tikz[overlay,remember picture] \node (#1) {};}
\definecolor{lime}{HTML}{A6CE39}
\DeclareRobustCommand{\orcidicon}{
\hspace{-2mm}
\begin{tikzpicture}
\draw[lime, fill=lime] (0,0)
circle [radius=0.16]
node[white] (ID) {{\fontfamily{qag}\selectfont \tiny ID}};
\draw[white, fill=white] (-0.0625,0.095)
circle [radius=0.007];
\end{tikzpicture}
\hspace{-2.5mm}
}
\def\orcidID#1{\href{https://orcid.org/#1}{\smash{\orcidicon}}}
\title{Effective Reductions of Mealy Machines}
\author{Florian Renkin\orcidID{0000-0002-5066-1726} \and Philipp Schlehuber-Caissier\orcidID{0000-0002-6611-9659} \and Alexandre Duret-Lutz\orcidID{0000-0002-6623-2512} \and Adrien Pommellet\orcidID{0000-0001-5530-152X}}
\institute{
LRDE, EPITA, Kremlin-Bicêtre, France \email{\{frenkin,philipp,adl,adrien\}@lrde.epita.fr}\\
}
\authorrunning{F. Renkin \and P. Schlehuber-Caissier \and A. Duret-Lutz \and A. Pommellet}
\def\todo#1{\textcolor{red}{#1}}
\usetikzlibrary{automata}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{bending}
\usetikzlibrary{quotes}
\usetikzlibrary{positioning}
\usetikzlibrary{calc}
\tikzset{
automaton/.style={
semithick,shorten >=1pt,>={Stealth[round,bend]},
node distance=1cm,
initial text=,
every initial by arrow/.style={every node/.style={inner sep=0pt}},
every state/.style={minimum size=7.5mm,fill=white}
},
smallautomaton/.style={
automaton,
node distance=5mm,
every state/.style={minimum size=4mm,fill=white,inner sep=1pt}
},
mediumautomaton/.style={
automaton,
node distance=1.5cm,
every state/.style={minimum size=6mm,fill=white,inner sep=1pt}
},
initial overlay/.style={every initial by arrow/.style={overlay}},
accset/.style={
fill=blue!50!black,draw=white,text=white,thin,
circle,inner sep=1pt,anchor=center,font=\bfseries\sffamily\tiny
},
color acc0/.style={fill=magenta},
color acc1/.style={fill=cyan},
color acc2/.style={fill=orange},
color acc3/.style={fill=green!70!black},
color acc4/.style={fill=blue!50!black},
}
\makeatletter
\tikzoption{initial angle}{\tikzaddafternodepathoption{\def\tikz@initial@angle{#1}}}
\makeatother
\tikzstyle{initial overlay}=[every initial by arrow/.style={overlay}]
\tikzstyle{state-labels}=[state/.style=state with output,inner sep=2pt]
\def\slabel#1{\nodepart{lower} #1}
\tikzstyle{statename}=[
below,label distance=2pt,
fill=yellow!30!white,
rounded corners=1mm,inner sep=2pt
]
\tikzstyle{accset}=[
fill=blue!50!black,draw=white,text=white,thin,
circle,inner sep=.9pt,anchor=center,font=\bfseries\sffamily\tiny
]
\tikzset{
ks/.style={},
collacc0/.style={fill=blue!50!cyan},
collacc1/.style={fill=magenta},
collacc2/.style={fill=orange!90!black},
collacc3/.style={fill=green!70!black},
collacc4/.style={fill=blue!50!black},
fs/.style={font=\bfseries\sffamily\small},
acc/.pic={\node[text width={},text height={},minimum size={0pt},accset,collacc#1,ks]{#1};},
accs/.pic={\node[text width={},text height={},minimum size={0pt},accset,collacc#1,fs,ks]{#1};},
starnew/.pic={\node[text width={},text height={},minimum size={0pt},text=magenta]{$\filledlargestar$};},
starimpr/.pic={\node[text width={},text height={},minimum size={0pt},text=blue!50!cyan]{$\filledlargestar$};},
balldigit/.style={text=white,circle,minimum size={12pt},shade,ball color=structure.fg,inner sep=0pt,font={\footnotesize\bf},align=center}
}
\def\balldigit#1#2{\tikz[baseline=(X.base)]\node(X)[balldigit,#2]{#1};}
\def\acc#1{\tikz[smallautomaton,baseline=-4pt] \pic{accs=#1};}
\def\acct#1{\tikz[smallautomaton,baseline=-4pt] \pic{acc=#1};}
\def\tikz[baseline=0]=\pic{starnew};{\tikz[baseline=0]=\pic{starnew};}
\def\tikz[baseline=0]=\pic{starimpr};{\tikz[baseline=0]=\pic{starimpr};}
\makeatletter
\def1{1}
\tikzset{
opacity/.append code={
\pgfmathsetmacro1{#1*1}
},
opacity aux/.code={
\tikz@addoption{\pgfsetstrokeopacity{#1}\pgfsetfillopacity{#1}}
},
every shadow/.style={opacity aux=1},
covered/.style={opacity=0},
uncover on/.style={alt={#1{}{covered}}},
alt/.code args={<#1>#2#3}{%
\alt<#1>{\pgfkeysalso{#2}}{\pgfkeysalso{#3}}
},
explains/.style={rectangle callout,callout absolute pointer={#1},fill=structure.fg!10,drop shadow={fill=black!70!structure.fg!30},align=center}
}
\makeatother
\newcommand{\F}{\mathsf{F}}
\newcommand{\G}{\mathsf{G}}
\newcommand{\X}{\mathsf{X}}
\newcommand{\ensuremath{\mathbb{B}}}{\ensuremath{\mathbb{B}}}
\newcommand{\ensuremath{\mathbb{K}}}{\ensuremath{\mathbb{K}}}
\newcommand{\ensuremath{\mathrm{Succ}}}{\ensuremath{\mathrm{Succ}}}
\newcommand{\ensuremath{\mathrm{Out}}}{\ensuremath{\mathrm{Out}}}
\newcommand{\tup}[1]{{\ensuremath{\left(#1\right)}}}
\newcommand{\set}[1]{{\ensuremath{\left\lbrace#1\right\rbrace}}}
\newcommand{\bfm}[1]{{\ensuremath{\mathbf{#1}}}}
\tikzset{
automaton/.style={
semithick,shorten >=1pt,
node distance=1.5cm,
initial text=,
every initial by arrow/.style={every node/.style={inner sep=0pt}},
every state/.style={
align=center,
fill=white,
minimum size=7.5mm,
inner sep=0pt,
execute at begin node=\strut,
}},
smallautomaton/.style={automaton,
node distance=7mm,
every state/.style={minimum size=4mm,
fill=white,
inner sep=1.5pt}},
>={Stealth[round,bend]},
}
\begin{document}
\maketitle
\begin{abstract}
We revisit the problem of reducing incompletely specified Mealy
machines with reactive synthesis in mind. We propose two
techniques: the first is inspired by the tool {\sc
MeMin}~\citet{abel.15.iccad} and solves the minimization problem; the
second is a novel approach derived from simulation-based reductions
that does not guarantee a minimal machine. However, we argue that it
offers a good compromise between the size of the resulting Mealy
machine and performance. The proposed methods are benchmarked against
\textsc{MeMin} on a large collection of test cases made of well-known
instances as well as new ones.
\end{abstract}
\section{Introduction}
\begin{figure}[b]
\begin{subfigure}[t]{0.28\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.1cm and 2.1cm]
\node[draw,minimum width=1.4cm,minimum height=1.4cm] (C) {};
\draw[->] (C.160) +(-5mm,0) node[left]{$a$} -- (C.160);
\draw[->] (C.-160) +(-5mm,0) node[left]{$b$} -- (C.-160);
\draw[->] (C.20) -- ++(5mm,0) node[right]{$x$};
\draw[->] (C.-20) -- ++(5mm,0) node[right]{$y$};
\end{tikzpicture}
\caption{A reactive controller}
\label{controler}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.39\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1cm and 1.33cm]
\node[initial,initial angle=90, lstate] (v0) {$0$};
\node[lstate,right=of v0] (v1) {$1$};
\node[lstate,left=of v0] (v2) {$2$};
\path[->]
(v0) edge[bend left=14] node[above,align=center] {$ab/\{x\bar{y},\mathrlap{xy\}}$\\$a\bar{b}/\set{\bar{x}y}$} (v1)
(v0) edge node[above] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v2)
(v1) edge[bend left=14] node[below,align=center] {$\bar{a}\bar{b}/\set{x\bar{y},\bar{x}\bar{y}}$\\$ab/\set{x\bar{y}}$} (v0)
(v2) edge[loop below,looseness=10] node[right=2pt] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v2)
;
\end{tikzpicture}
\caption{Original machine}
\label{autEx1}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.7cm and 2.1cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate] (v0) {$0$};
\end{scope}
\path[->]
(v0) edge[loop above,looseness=10] node[right=2pt] {$ab/\set{x\bar{y}}$} (v0)
(v0) edge[loop right,looseness=10] node {$a\bar{b}/\set{\bar{x}y}$} (v0)
(v0) edge[loop below,looseness=10] node[right=2pt] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v0)
;
\end{tikzpicture}
\caption{Minimal machine}
\label{autEx1_Min}}
\end{subfigure}
\caption{Minimizing a Mealy machine that models a reactive controller}
\label{autEx1_all}
\end{figure}
Program synthesis is a well-established formal method: given a logical
specification of a system, it allows one to automatically generate a
provably correct implementation. It can be applied to reactive
controllers (Fig.~\ref{controler}): circuits that produce for an input
stream of Boolean valuations (here, over Boolean variables $a$ and
$b$) a matching output stream (here, over $x$ and $y$).
The techniques used to translate a specification (say, a Linear Time
Logic formula that relates input and output Boolean variables) into a
circuit often rely on automata-theoretic intermediate models such as
Mealy machines. These transducers are labeled graphs whose edges
associate input valuations to a choice of one or more output
valuations, as shown in Fig.~\ref{autEx1}.
Since Mealy machines with fewer states result in smaller circuits,
reducing and minimizing the size of Mealy machines are
well-studied problems~\citet{alberto.09.latw, paull.59.tec}.
However, vague specifications may cause incompletely specified
machines: for some states (i.e., nodes of the graph) and inputs,
there may not exist a unique, explicitly defined output, but
a set of valid outputs. Resolving those choices to a single output
(among those allowed) will produce a fully specified machine that
satisfies the initial specification; however, those different choices
may have an impact on the minimization of the machine. While
minimizing fully specified machines is efficiently
solvable~\citet{hopcroft.71.tmc}, the problem is NP-complete for
incompletely specified machines~\citet{pfleeger.73.tc}. Hence, it may
also be worth exploring faster algorithms that seek to reduce the
number of states without achieving the optimal result.
Consider Fig. \ref{autEx1}: this machine is incompletely specified, as
for instance state $0$ allows multiple outputs for input $ab$
(i.e., when both input variables $a$ and $b$ are true) and implicitly
allows any output for input $\bar a b$ (i.e., only $b$ is true), since it is not
constrained in any way by the specification. We
can benefit from this flexibility in unspecified outputs to help
reduce the automaton. For instance if we constrain state 2 to behave
exactly as state 0 for inputs $ab$ and $a \bar b$, then these two
states can be merged. Adding further constraints can lead to the
single-state machine shown in Fig. \ref{autEx1_Min}. These smaller
machines are not \emph{equivalent}, but they are \emph{compatible}: for
any input stream, they can only produce output streams that could also
have been produced by the original machine.
We properly define \emph{Incompletely specified Generalized Mealy
Machines} in Section \ref{secDef} and provide a SAT-based
minimization algorithm in Section \ref{secMin}.
Since the minimization of incompletely specified Mealy machines is desirable but
not crucial for reactive synthesis, we propose a faster reduction
technique yielding ``small enough'' machines in Section
\ref{secBisim}. Finally, in Section \ref{secBench} we benchmark these
techniques against the state-of-the-art tool {\sc
MeMin}~\citet{abel.15.iccad}.
\section{Definitions}\label{secDef}
Given a set of propositions (i.e., Boolean variables) $X$,
let $\ensuremath{\mathbb{B}}^X$ be the set of all possible valuations on $X$, and
let $2^{\ensuremath{\mathbb{B}}^X}$ be its set of subsets.
Any element of $2^{\ensuremath{\mathbb{B}}^X}$ can be expressed as a Boolean formula over $X$.
The negation of proposition $p$ is denoted $\bar{p}$.
We use $\top$ to denote the Boolean formula that is always true, or
equivalently the set $\ensuremath{\mathbb{B}}^X$, and assume that $X$ is clear from the context.
A \emph{cube} is a conjunction of propositions or their negations (i.e., literals).
As an example, given three propositions $a$, $b$ and $c$,
the cube $a \land \bar{b}$, written $a\bar{b}$,
stands for the set of all valuations such that $a$ is true and $b$ is false,
i.e. $\{a\bar{b}c, a\bar{b}\bar{c}\}$.
Let $\ensuremath{\mathbb{K}}^X$ stand for the set of all cubes over $X$.
$\ensuremath{\mathbb{K}}^X$ contains the cube $\top$, that stands
for the set of all possible valuations over $X$.
Note that any set of valuations can be represented
as a disjunction of disjoint cubes (i.e., not sharing a common valuation).
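As a small illustration (a Python sketch of ours, not tied to any tool discussed in this paper), valuations and cubes can be manipulated directly as finite sets:
\begin{verbatim}
from itertools import product

def valuations(props):
    # all valuations over `props`, each as a frozenset of (proposition, value)
    return [frozenset(zip(props, bits))
            for bits in product([True, False], repeat=len(props))]

def cube(props, **fixed):
    # valuations agreeing with the literals in `fixed`; e.g. cube(props, a=True, b=False)
    # is the cube "a not-b", i.e. {a b' c, a b' c'} over props = ["a", "b", "c"]
    return {v for v in valuations(props)
            if all((p, b) in v for p, b in fixed.items())}
\end{verbatim}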
\begin{definition}
An \emph{Incompletely specified Generalized Mealy Machine} (IGMM) is
a tuple $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$, where
$I$ is a set of \emph{input propositions},
$O$ a set of \emph{output propositions},
$Q$ a finite set of \emph{states}, $q_{\mathit{init}}$ an \emph{initial state},
$\delta \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow Q$
a partial \emph{transition function}, and
$\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{B}}^{O}}\setminus \{\emptyset\}$
an \emph{output function} such that $\lambda(q,i)=\top$ when $\delta(q,i)$ is undefined.
If $\delta$ is a total function, we then say that $M$ is \emph{input-complete}.
\end{definition}
It is worth noting that the transition function is input-deterministic
but not necessarily total, as $\delta(q,i)$ may be
undefined. Furthermore, the output function may return many valuations
for a given input valuation and state. This is not an unexpected
definition from a reactive synthesis point of view, as a given
specification may yield multiple compatible output valuations for a
given input.
\begin{definition}[Semantics of IGMMs]
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be an
IGMM. For all $u \in \ensuremath{\mathbb{B}}^{I}$ and $q \in Q$, if $\delta(q, u)$ is defined,
we write that $q \xrightarrow{u / v} \delta(q, u)$ for all
$v \in \lambda(q, u)$. Given two infinite sequences of valuations
$\iota=i_0\cdot i_1\cdot i_2\cdots\in (\ensuremath{\mathbb{B}}^{I})^\omega$ and
$o=o_0\cdot o_1\cdot o_2\cdots\in (\ensuremath{\mathbb{B}}^{O})^\omega$,
$(\iota,o)\models M_q$ if and only if:
\begin{itemize}
\item either there is an infinite sequence of states
$(q_j)_{j \ge 0} \in Q^\omega$ such that $q = q_0$ and
$q_0 \xrightarrow{i_0 / o_0} q_1 \xrightarrow{i_1 / o_1} q_2
\xrightarrow{i_2 / o_2} \cdots$;
\item or there is a finite sequence of states
$(q_j)_{0 \le j \le k} \in Q^{k+1}$ such that $q = q_0$, $\delta(q_k, i_k)$ is
undefined, and
$q_0 \xrightarrow{i_0 / o_0} q_1 \xrightarrow{i_1 / o_1} \cdots q_k$.
\end{itemize}
We then say that starting from state $q$, $M$ produces output $o$
given the input $\iota$.
\end{definition}
Note that if $\delta(q_k,i_k)$ is undefined, the machine is allowed to
produce an arbitrary output from then on. Furthermore, given an input
word $\iota$, there may be several output words $o$ such that
$(\iota,o) \models M_q$ (in accordance with a lax specification).
As an example, consider the input sequence
$\iota = ab\cdot \bar a\bar b\cdot ab\cdot \bar a\bar b\cdots$
applied to the initial state $0$ of the machine shown in Figure~\ref{autEx1}.
We have $(\iota,o)\models M_0$ if and only if for all $j \in \mathbb{N}$,
$o_{2j}\in x$ and $o_{2j+1}\in \bar y$, where $x$ and $\bar y$ are
cubes that respectively represent $\{xy,x\bar y\}$ and $\{x\bar y,\bar x\bar y\}$.
\begin{definition}[Variation and specialization]
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ and
$M'=\tup{I, O, Q', q'_{\mathit{init}}, \delta', \lambda'}$ be two IGMMs.
Given two states $q \in Q$, $q' \in Q'$, we say that $q'$ is a:
\begin{itemize}[noitemsep,topsep=2pt]
\item \emph{variation} of $q$ if $\forall \iota \in
({\ensuremath{\mathbb{B}}^I})^\omega ,
\set{o \mid (\iota,o) \models M'_{q'}} \cap \set{o \mid (\iota,o)\models M_{q}}
\neq \emptyset$;
\item \emph{specialization} of $q$ if $\forall \iota \in
({\ensuremath{\mathbb{B}}^I})^\omega ,
\set{o \mid (\iota,o) \models M'_{q'}} \subseteq \set{o \mid (\iota,o)\models M_{q}}$.
\end{itemize}
We say that $M'$ is a variation (resp.\ specialization) of $M$
if $q_{\mathit{init}}'$ is a variation (resp.\ specialization)
of $q_{\mathit{init}}$.
\end{definition}
Intuitively, all the input-output pairs accepted by
a specialization $q'$ in $M'$ are also accepted by $q$ in $M$.
Therefore, if all the outputs produced by state $q$ in $M$
comply with the original specification, then so do the outputs produced
by state $q'$ in $M'$.
In order for two states to be variations of one another,
for all possible inputs they must be able to agree on a common output behaviour.
We write $q'\mathbin{\approx}{} q$ (resp. $q' \mathbin{\sqsubseteq}{} q$) if $q'$ is a
variation (resp. specialization) of $q$. Note that $\mathbin{\approx}{}$ is a
symmetric but non-transitive relation, while $\mathbin{\sqsubseteq}$ is transitive
($\mathbin{\sqsubseteq}$ is a preorder).
\medskip
Our goal in this article is to solve the following problems:
\begin{description}[noitemsep,topsep=2pt]
\item[Reducing an IGMM $M$:] finding a specialization of $M$ having at most
the same number of states, preferably fewer.
\item[Minimizing an IGMM $M$:] finding a specialization of $M$ having the
least number of states.
\end{description}
Consider again the IGMM shown in Figure~\ref{autEx1}.
The IGMM shown in Figure~\ref{autEx1_Min} is a specialization of this machine
and has a minimal number of states.
\subsubsection*{Generalizing inputs and outputs.}
\label{secCompToReg}
Note that the output function of an IGMM returns a set of valuations,
but it can be rewritten equivalently to output a set of cubes
as $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^I\right) \rightarrow 2^{\ensuremath{\mathbb{K}}^{O}}$.
As an example, consider $I = \{a\}$ and $O = \{x, y, z\}$; the set of valuations
$v = \{\bar{x}yz, \bar{x}y\bar{z}, x\bar{y}z, x\bar{y}\bar{z}\}\in 2^{\ensuremath{\mathbb{B}}^O}$ is equivalent to the
set of cubes $v_c = \{\bar{x}y, x\bar{y}\}\in 2^{\ensuremath{\mathbb{K}}^O}$.
In the literature, a Mealy machine commonly maps a single input
valuation to a single output valuation: its output function is therefore of the
form $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^I\right) \rightarrow \ensuremath{\mathbb{B}}^{O}$. The
tool \textsc{MeMin}~\citet{abel.15.iccad} uses a slight generalization by allowing a
single output cube, hence $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow \ensuremath{\mathbb{K}}^{O}$. Thus,
unlike our model, neither the common definition nor the tool \textsc{MeMin} can
feature an edge outputting the aforementioned set $v$ (or equivalently $v_c$),
as it cannot be represented by a single cube or valuation.
Our model is therefore \emph{strictly more expressive}, although
it comes at a price for minimization.
Note that, in practice, edges with identical source state,
output valuations, and destination state can be merged into a single transition
labeled by the set of allowed inputs. Both our tool and \textsc{MeMin} feature
this optimization. While it does not change the
expressiveness of the underlying model, this more succinct representation
of the machines does improve the efficiency of the algorithms
detailed in the next section, as they depend on the total number of transitions.
\section{SAT-Based Minimization of IGMM}
\label{secMin}
This section builds upon the approach presented
by~\citet{abel.15.iccad} for machines with outputs constrained to
cubes, and generalizes it to the IGMM model (with more
expressive outputs).
\subsection{General approach}
\begin{definition}
Given an IGMM $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$,
\emph{a variation class} $C \subseteq Q$ is a set of states such that all
elements are pairwise variations, i.e. $\forall q,q' \in C$, $q'\mathbin{\approx}{} q$.
For any input $i \in \ensuremath{\mathbb{B}}^I$, we define:
\begin{itemize}[noitemsep,topsep=2pt]
\item the \emph{successor function}
$\ensuremath{\mathrm{Succ}}(C, i) =
\bigcup_{q\in C} \set{\delta(q,i) \mid
\delta(q,i) \text{~is defined}} $;
\item the \emph{output function}
$\ensuremath{\mathrm{Out}}(C, i) = \bigcap_{q\in C} \lambda(q,i)$.
\end{itemize}
\end{definition}
Intuitively, the successor function returns the set of all states
reachable from a given class under a given input symbol.
The output function returns the set of all shared output valuations
between the various states in the class.
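To make these two functions concrete, here is a small Python sketch (ours, not the implementation benchmarked later); it assumes the machine is given by two dictionaries \texttt{delta} and \texttt{lam} keyed by (state, input valuation) pairs, with $\lambda(q,i)$ stored as a set of output valuations:
\begin{verbatim}
def succ(C, i, delta):
    # Succ(C, i): successors of the class C under input i, where delta is defined
    return {delta[(q, i)] for q in C if (q, i) in delta}

def out(C, i, lam, all_outputs):
    # Out(C, i): output valuations shared by every state of C under input i;
    # an undefined transition contributes the full set of outputs (lambda = T)
    shared = set(all_outputs)
    for q in C:
        shared &= set(lam.get((q, i), all_outputs))
    return shared
\end{verbatim}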
In the remainder of this section we will call a variation class simply a class,
as there is no ambiguity. We consider three important notions concerning
classes, or rather sets thereof, of the form $S = \set{C_0,\ldots,C_{n-1}}$.
\begin{definition}[Cover condition]\label{cond_cover}
We say that a set of classes $S$ \emph{covers} the machine $M$ if every
state of $M$ appears in at least one of the classes.
\end{definition}
\begin{definition}[Closure condition]\label{cond_closure}
We say that a set of classes $S$ is \emph{closed} if for all
$C_j\in S$ and for all inputs $i \in \ensuremath{\mathbb{B}}^I$ there exists a $C_k\in S$
such that $\ensuremath{\mathrm{Succ}}(C_j, i) \subseteq C_k$.
\end{definition}
\begin{definition}[Nonemptiness condition]\label{cond_nonempt}
We say that a class $C$ has a \break\emph{nonempty output} if $\ensuremath{\mathrm{Out}}(C,i)\ne\emptyset$ for
all inputs $i\in \ensuremath{\mathbb{B}}^I$.
\end{definition}
The astute reader might have observed that the nonempty output condition
is strictly stronger than the condition that all elements in a class
have to be pairwise variations of one another.
We will see that this distinction is however important, as it gives rise to a
different set of clauses in the SAT problem, reducing the total runtime.
Combining these conditions yields the main theorem for this approach.
This extends a similar theorem by~\citet[][Thm~1]{abel.15.iccad} by
adding the nonemptiness condition to support the more expressive IGMM
model.
\begin{theorem}
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be an IGMM and
$S = \set{C_0,\ldots,C_{n-1}}$ be a \emph{minimal} (in terms of size)
set of classes such that \emph{(1)} $S$ is \emph{closed},
\emph{(2)} $S$ \emph{covers} every state of the machine $M$ and
\emph{(3)} each of the classes $C_j$ has a \emph{nonempty output}.
Then the IGMM $M'=\tup{I, O, S, q_{\mathit{init}}', \delta', \lambda'}$ where:
\begin{itemize}[noitemsep,topsep=2pt]
\item $q_{\mathit{init}}' = C$ for some $C \in S$ such that
$q_{\mathit{init}} \in C$;
\item $\delta'(C_j, i) = \begin{cases}C_k \text{ for some k s.t. } \ensuremath{\mathrm{Succ}}(C_j, i) \subseteq C_k &\text{if~} \ensuremath{\mathrm{Succ}}(C_j, i) \neq \emptyset \\
\text{undefined} &\text{else;}
\end{cases}$
\item $\lambda'(C_j, i) = \begin{cases} \ensuremath{\mathrm{Out}}(C_j, i) &\text{if~} \ensuremath{\mathrm{Succ}}(C_j, i) \neq \emptyset\\
\top &\text{else;}
\end{cases}$
\end{itemize}
is a \emph{specialization} of minimal size (in terms of states) of $M$.
\label{theoremSATMIN}
\end{theorem}
Figure~\ref{autExSatBase} illustrates this construction on an example
with a single input proposition $I=\set{a}$ (hence two input
valuations $\ensuremath{\mathbb{B}}^I = \set{a, \bar{a}}$), and three output propositions
$O=\set{x, y, z}$. To simplify notations, elements of $2^{\ensuremath{\mathbb{B}}^O}$ are
represented as Boolean functions (happening to be cubes in this
example) rather than sets.
States have been colored to
indicate their possible membership in one of the three variation classes.
The SAT solver needs to associate each state to at least one
of them in order to satisfy the cover condition~\eqref{cond_cover},
while simultaneously respecting conditions~\eqref{cond_closure}--\eqref{cond_nonempt}.
A possible choice would be:
\textcolor{violet}{$C_0 = \{0\}$},
\textcolor{orange}{$C_1 = \{1, 3, 6\}$}, and
\textcolor{green}{$C_2 = \{2, 4, 5\}$}.
For this choice, the \textit{\textcolor{violet}{violet}} class \textcolor{violet}{$C_0$}
has only a single state, so the closure condition~\eqref{cond_closure} is trivially satisfied.
All transitions of the states in the \textit{\textcolor{orange}{orange}}
class \textcolor{orange}{$C_1$} go to states in
\textcolor{orange}{$C_1$}, also satisfying the condition. The same
can be said of the \textit{\textcolor{green}{green}} class
\textcolor{green}{$C_2$}.
Finally, we need to check the nonempty output condition~\eqref{cond_nonempt}.
Once again, it is trivially satisfied for the
\textit{\textcolor{violet}{violet}} class \textcolor{violet}{$C_0$}.
For the \textit{\textcolor{orange}{orange}} and \textit{\textcolor{green}{green}} classes,
we need to compute their respective output.
We get $\ensuremath{\mathrm{Out}}(\textcolor{orange}{C_1}, a) = \bar{z}$,
$\ensuremath{\mathrm{Out}}(\textcolor{orange}{C_1}, \bar{a}) = z$,
$\ensuremath{\mathrm{Out}}(\textcolor{green}{C_2}, a) = \bar{z}$ and
$\ensuremath{\mathrm{Out}}(\textcolor{green}{C_2}, \bar{a}) = z$.
None of the output sets is empty, thus condition~\eqref{cond_nonempt}
is satisfied as well.
Note that, since the outgoing transitions of states 4 and 6
are self-loops compatible with all possible output valuations,
another valid choice is:
\textcolor{violet}{$C_0 = \{0, 4, 6\}$},
\textcolor{orange}{$C_1 = \{1, 3, 4, 6\}$}, and
\textcolor{green}{$C_2 = \{2, 4, 5, 6\}$}.
The corresponding specialization, constructed as described in
Theorem~\ref{theoremSATMIN}, is shown in Figure~\ref{autExSatBaseMin}.
Note that this machine is input-complete, so the incompleteness of the
specification only stems from the possible choices in the outputs.
\begin{figure}[t]
\begin{subfigure}[t]{0.59\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=1cm and 1.186cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill = violet!50] (v0) {$0$};
\node[lstate,above=of v0, fill = orange!50] (v1) {$1$};
\node[lstate,right=of v0, fill = green!50] (v2) {$2$};
\node[lstate,right=of v1, fill = orange!50] (v3) {$3$};
\node[lstate,right=of v2, fill = green!50] (v5) {$5$};
\node[lstate,right=of v5, fill = orange!50] (v4) {$4$};
\fill[fill=green!50] (v4.center) -- (v4.east) arc (0:120:2.99mm) -- cycle;
\fill[fill=violet!50] (v4.center) -- (v4.east) arc (0:-120:2.99mm) -- cycle;
\node[lstate,fill=none] at (v4) {$4$};
\node[lstate,right=of v3, fill = orange!50] (v6) {};
\fill[fill=green!50] (v6.center) -- (v6.east) arc (0:120:2.99mm) -- cycle;
\fill[fill=violet!50] (v6.center) -- (v6.east) arc (0:-120:2.99mm) -- cycle;
\node[lstate,fill=none] at (v6) {$6$};
\end{scope}
\path[->]
(v0) edge node[left] {$a/{\bar{z}}$} (v1)
(v0) edge node[above] {$\bar{a}/\bar{x}\bar{y}\bar{z}$} (v2)
(v1) edge[loop left] node {$a/{\bar{z}}$} (v1)
(v1) edge[bend left=10, above] node {$\bar{a}/{z}$} (v3)
(v2) edge[bend right=20] node[below] {$a/\top$} (v4)
(v2) edge[above] node {$\bar{a}/{z}$} (v5)
(v3) edge[bend left=10, below] node {$a/{\bar{z}}$} (v1)
(v3) edge[above] node {$\bar{a}/\top$} (v6)
(v5) edge[above] node {$\bar{a}/\top$} (v4)
(v4) edge[loop above] node[align=center] {$a/\top$\\$\bar{a}/\top$} (v4)
(v5) edge[loop above] node {$a/{z}$} (v5)
(v6) edge[loop right] node[align=center] {$a/\top$\\$\bar{a}/\top$} (v6)
;
\end{tikzpicture}
\subcaption{Original IGMM $M$}
\label{autExSatBase}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.cm and 1.4cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill = violet!50] (v0) {$0$};
\node[lstate,above= of v0, fill = orange!50] (v1) {$1$};
\node[lstate,right= of v0, fill = green!50] (v2) {$2$};
\end{scope}
\path[->]
(v0) edge[above] node {$\bar{a}/{\bar{x}\bar{y}\bar{z}}$} (v2)
(v2) edge[loop above] node[align=center] {$a/{z}$\\$\bar{a}/{z}$} (v2)
(v0) edge[left] node {$a/{\bar{z}}$} (v1)
(v1) edge[loop right] node[align=center] {$a/{\bar{z}}$\\$\bar{a}/{z}$} (v1)
(v2) edge[bend left=35,transparent] node[below] {$a/\top$} (v0)
;
\end{tikzpicture}
\subcaption{Minimal specialization of $M$}
\label{autExSatBaseMin}
\end{subfigure}
\caption{Minimization example}
\label{autExSatBaseGen}
\end{figure}
\subsection{Proposed SAT Encoding}
We want to design an algorithm that finds a minimal specialization of a given
IGMM $M$. To do so, we will use the following approach, starting from $n = 1$:
\begin{itemize}[noitemsep,topsep=2pt]
\item Posit that there are $n$ classes, hence,
$n$ states in the minimal machine.
\item Design SAT clauses ensuring cover, closure and nonempty outputs.
\item Check if the resulting SAT problem is satisfiable.
\item If so, construct the minimal machine described
in Theorem~\ref{theoremSATMIN}.
\item If not, increment $n$ by one and apply the whole process again,
unless $n = \left|Q\right| - 1$, which serves as a proof that the original
machine is already minimal.
\end{itemize}
\subsubsection*{Encoding the cover and closure conditions.}
In order to guarantee that the set of classes
$S = \set{C_0, \ldots, C_{n-1}}$ satisfies both the cover and closure conditions
and that each class $C_j$ is a variation class,
we need two types of literals:
\begin{itemize}[noitemsep,topsep=2pt]
\item $s_{q,j}$ should be true if and only if
state $q$ belongs to the class $C_j$;
\item $z_{i,k,j}$ should be true if
$\ensuremath{\mathrm{Succ}}(C_k, i) \subseteq C_j$ for $i \in \ensuremath{\mathbb{B}}^I$.
\end{itemize}
The cover condition, encoded by Equation~\eqref{eq_SatPart}, guarantees that each state belongs to at least one class.\\
\begin{minipage}[t]{0.49\linewidth}
\begin{equation}
\bigwedge_{q\in Q}\;\bigvee_{0\le j< n} s_{q,j}
\label{eq_SatPart}
\end{equation}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\begin{equation}
\bigwedge_{0\le j< n}\;\bigwedge_{\substack{q,q'\in Q\\q\mathbin{\napprox} q'}} \overline{s_{q,j}} \lor \overline{s_{q',j}}
\label{eq_SatVar}
\end{equation}
\end{minipage}
Equation~\eqref{eq_SatVar} ensures that each class is a variational class:
two states $q$ and $q'$ that are not variations of each other cannot
belong to the same class.
The closure condition must ensure that for every class $C_k$ and every
input symbol $i \in \ensuremath{\mathbb{B}}^I$, there exists at least one class that contains all the
successor states: $\forall k, \forall i, \exists j,\ \ensuremath{\mathrm{Succ}}(C_k, i) \subseteq C_j$.
This is expressed by the constraints~\eqref{eq_SatClos1} and~\eqref{eq_SatClos2}.
\begin{minipage}[t]{0.39\linewidth}
\begin{equation}
\bigwedge_{0\le k< n}\,\bigwedge_{\substack{i \in \ensuremath{\mathbb{B}}^I \\ \phantom{q' = \delta(q,i)}}}\,\bigvee_{0\le j < n} z_{i,k,j}
\label{eq_SatClos1}
\end{equation}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\begin{equation}
\bigwedge_{0\le j, k< n}\;\bigwedge_{\substack{q, q' \in Q, i \in \ensuremath{\mathbb{B}}^I \\ q' = \delta(q,i)}} (z_{i,k,j} \land s_{q,k}) \rightarrow s_{q',j}
\label{eq_SatClos2}
\end{equation}
\end{minipage}
The constraint~\eqref{eq_SatClos1} ensures that at least one $C_j$ contains
$\ensuremath{\mathrm{Succ}}(C_k, i)$, while~\eqref{eq_SatClos2} ensures this mapping of classes
matches the transitions of $M$.
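For concreteness, the following Python sketch (with a variable numbering of our own choosing) produces the clauses \eqref{eq_SatPart}--\eqref{eq_SatClos2} as lists of signed integers in the usual DIMACS convention, ready to be handed to any off-the-shelf SAT solver:
\begin{verbatim}
def encode_cover_closure(N, inputs, delta, not_variation, n):
    # N states (numbered 0..N-1), `inputs` a list of input valuations,
    # `delta` a dict (state, input) -> state, `not_variation[q][q']` true
    # iff q and q' are not variations of one another, n candidate classes.
    s = lambda q, j: 1 + q * n + j                        # variable for s_{q,j}
    z = lambda i, k, j: 1 + N * n + (i * n + k) * n + j   # variable for z_{i,k,j}
    clauses = []
    for q in range(N):                                    # cover (1)
        clauses.append([s(q, j) for j in range(n)])
    for j in range(n):                                    # variation classes (2)
        for q in range(N):
            for qp in range(q + 1, N):
                if not_variation[q][qp]:
                    clauses.append([-s(q, j), -s(qp, j)])
    for i in range(len(inputs)):                          # closure (3)
        for k in range(n):
            clauses.append([z(i, k, j) for j in range(n)])
    for i, a in enumerate(inputs):                        # closure (4)
        for q in range(N):
            if (q, a) in delta:
                qp = delta[(q, a)]
                for k in range(n):
                    for j in range(n):
                        clauses.append([-z(i, k, j), -s(q, k), s(qp, j)])
    return clauses
\end{verbatim}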
\subsubsection*{Encoding the nonempty output condition.}
\label{encNonEmpty}
Each class in $S$ being a variation class is necessary
but not sufficient to satisfy the nonempty output condition.
We indeed want to guarantee that for any input $i$,
all states in a given class can agree on at least one common output valuation.
However, it is possible to have three or more states (like
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $0$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{xy,x\bar y\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture},
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $1$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{\bar xy,x\bar y\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture}, and
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $2$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{xy,\bar xy\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture})
that are all variations of one another, but still cannot
agree on a common output.
This situation cannot occur in \textsc{MeMin} since their model uses
\emph{cubes} as outputs rather than arbitrary sets of valuations
as in our model.
A useful property of cubes is that if the
pairwise intersections of all cubes in a set are nonempty, then
the intersection of all cubes in the set is necessarily nonempty as well.
Since \emph{cubes} are not expressive enough for our model,
we will therefore generalize the output
as discussed earlier in Section \ref{secCompToReg}:
we represent the arbitrary set of valuations produced by the output
function $\lambda$ as a set of cubes whose disjunction yields the original set.
For $q \in Q$ and $i \in \ensuremath{\mathbb{B}}^I$, we partition the set of valuations
$\lambda(q, i)$ into cubes, relying on the~\citet{minato.92.sasimi} algorithm,
and denote the obtained set of cubes as $\mathrm{CS}(\lambda(q, i))$.
Our approach for ensuring that there exists a common output is to
search for disjoint cubes and exclude them from the possible outputs
by selectively deactivating them if necessary; an active cube is a set
in which we will be looking for an output valuation that the whole
class can agree on. To express this, we need two new types of
literals:
\begin{itemize}[noitemsep,topsep=2pt]
\item $a_{c,q,i}$ should be true iff
the particular instance of the cube $c\in \mathrm{CS}(\lambda(q,i))$ used
in the output of state $q$ when reading $i$ is \emph{active};
\item $\mathit{sc}_{q,q'}$ should be true iff
$\exists C_j \in S$ such that $q\in C_j$ and $q'\in C_j$.
\end{itemize}
The selective deactivation of a cube can then be expressed by the following:
\begin{minipage}[t]{.46\textwidth}
\begin{equation}
\bigwedge_{\substack{q, q' \in Q \\0 \le j < n}} (s_{q, j} \land s_{q', j})
\rightarrow \mathit{sc}_{q,q'}
\label{eq_SatSameClass}
\end{equation}
\end{minipage}
\hfill
\begin{minipage}[t]{.46\textwidth}
\begin{equation}
\bigwedge_{\substack{q \in Q,\, i \in \ensuremath{\mathbb{B}}^I \\\delta(q, i) \text{~is defined} }}\!\!\bigvee_{{c}\in \mathrm{CS}(\lambda(q, i))} a_{c,q,i}\\
\label{eq_SatNEPart}
\end{equation}
\end{minipage}
\begin{equation}
\bigwedge_{\substack{q, q' \in Q,\, i \in \ensuremath{\mathbb{B}}^I \\
\delta(q, i) \text{~is defined}\\\delta(q', i) \text{~is defined} }}
\;
\bigwedge_{\substack{c\in \mathrm{CS}(\lambda(q, i)) \\
c'\in \mathrm{CS}(\lambda(q', i)) \\
c \cap c' = \emptyset}}
(a_{c,q,i} \land a_{c',q',i}) \rightarrow \overline{\mathit{sc}_{q,q'}}.
\label{eq_SatDeact}
\end{equation}
Constraint~\eqref{eq_SatSameClass} ensures that $\mathit{sc}_{q,q'}$ is true if there exists
a class containing both $q$ and $q'$, in accordance with the expected definition.
Constraint~\eqref{eq_SatNEPart} guarantees that at least one of the cubes
in the output $\lambda(q, i)$ is active,
causing the restricted output to be nonempty.
Constraint~\eqref{eq_SatDeact} expresses selective deactivation and only needs to be
added for a given $q, q' \in Q$ and $i \in \ensuremath{\mathbb{B}}^I$ if
$\delta(q, i)$ and $\delta(q', i)$ are properly defined.
This formula guarantees that if there exists a class to which $q$ and $q'$
belong (i.e., $\mathit{sc}_{q,q'}$ is true) but there also exist disjoint cubes
in the partition of their respective outputs, then
we deactivate at least one of these:
only cubes that intersect can be both activated.
Thus, this constraint guarantees the nonempty output condition.
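A matching sketch for the clauses \eqref{eq_SatSameClass}--\eqref{eq_SatDeact} (again Python, with interfaces of our own choosing: states are integers, \texttt{cs[(q, i)]} holds the cube partition $\mathrm{CS}(\lambda(q,i))$ as sets of output valuations, \texttt{s\_var(q, j)} returns the variable used for $s_{q,j}$, and \texttt{fresh} yields unused variable indices):
\begin{verbatim}
def encode_nonempty(states, inputs, delta, cs, s_var, n, fresh):
    sc, act, clauses = {}, {}, []
    for q in states:
        for qp in states:
            if q < qp:
                sc[(q, qp)] = next(fresh)
                for j in range(n):           # (5): sharing a class forces sc_{q,q'}
                    clauses.append([-s_var(q, j), -s_var(qp, j), sc[(q, qp)]])
    for q in states:
        for i in inputs:
            if (q, i) in delta:
                for c in range(len(cs[(q, i)])):
                    act[(c, q, i)] = next(fresh)
                # (6): at least one cube of lambda(q, i) remains active
                clauses.append([act[(c, q, i)] for c in range(len(cs[(q, i)]))])
    for (q, qp), v in sc.items():
        for i in inputs:
            if (q, i) in delta and (qp, i) in delta:
                for c, cu in enumerate(cs[(q, i)]):
                    for cp, cup in enumerate(cs[(qp, i)]):
                        if not (cu & cup):   # (7): disjoint active cubes
                            clauses.append([-act[(c, q, i)],
                                            -act[(cp, qp, i)], -v])
    return clauses
\end{verbatim}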
Since encoding an output set requires a number of cubes exponential
in $|O|$, the above encoding uses
$\mathrm{O}(|Q|(2^{|I|+|O|}+|Q|)+n^2 \cdot 2^{|I|})$ variables as well
as
$\mathrm{O}(|Q|^2(n+2^{2|O|})+n^2 \cdot 2^{|I|}+|\delta|(2^{|O|}+n^2))$
clauses. We use additional optimizations to limit the number of
clauses, and make the algorithm more practical despite its frightening
theoretical worst case. In particular the CEGAR approach of
Section~\ref{sec:cegar} strives to avoid introducing constraints
\eqref{eq_SatSameClass}--\eqref{eq_SatDeact}.
\subsection{Adjustment of Prior Optimizations}
Constructing the SAT problem iteratively starting from $n = 1$
would be grossly inefficient.
We can instead notice that two states that are not variations of each
other can never be in the same class.
Thus, assuming we can find $k$ states that are not pairwise variations
of one another, we can infer that we need at least as many classes
as there are states in this set, providing a lower bound for $n$.
This idea was first introduced in~\citet{abel.15.iccad};
however, performing a more careful inspection of the constraints
with respect to this ``partial solution'' allows us
to reduce the number of constraints and literals needed.
The nonemptiness condition involves the creation of many literals and clauses
and necessitates an expensive preprocessing step to decompose the
arbitrary output sets returned by the output function
($\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{B}}^{O}}\setminus \{\emptyset\}$)
into disjunctions of cubes ($\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{K}}^{O}}\setminus \{\emptyset\}$).
We avoid adding
unnecessary nonempty output clauses in a counter-example guided fashion.
Violations of this condition can easily be detected
before constructing the minimized machine.
If detected, a small set of these constraints is added to the SAT problem,
excluding this particular violation.
In many cases, this optimization greatly reduces the number of
literals and constraints needed, to the extent we can often
avoid their use altogether.
From now on, we consider an IGMM with $N$ states
$Q=\set{q_0, q_1, \ldots, q_{N-1}}$.
\subsubsection*{Variation matrix.}
We first need to determine which states are not pairwise
variations of one another in order to extract a partial solution
and perform simplifications on the constraints.
We will compute a square matrix of size $N\times N$ called $\mathrm{mat}$
such that $\mathrm{mat}[k][\ell] = 1$ if and only if $q_k\mathbin{\napprox} q_\ell$
in the following fashion:
\begin{enumerate}
\item Initialize all entries of $\mathrm{mat}$ to $0$.
\item Iterate over all pairs $\tup{k, \ell}$ with $0 \le k < \ell < N$.
If the entry $\mathrm{mat}[k][\ell]$ is $0$, check if $\exists i \in \ensuremath{\mathbb{B}}^I$
such that $\lambda(q_k, i) \cap \lambda(q_\ell, i) = \emptyset$. If such an $i$ exists,
$\mathrm{mat}[k][\ell] \gets 1$.
\item For all pairs $\tup{k, \ell}$ whose associated value $\mathrm{mat}[k][\ell]$
changed from $0$ to $1$, also set to $1$ all existing predecessor pairs
$\tup{m,n}$ with $m< n$ under the same input, that is, pairs for which
$\exists i \in \ensuremath{\mathbb{B}}^I$ such that $\delta(q_m, i) = q_k$ and
$\delta(q_n, i) = q_\ell$. Note that we may need to propagate these changes
to the predecessors of $\tup{m, n}$.
\end{enumerate}
As ``being a variation of'' is a symmetric, reflexive relation, we only
compute the elements above the main diagonal of the matrix.
The intuition behind this algorithm is that two states $q$ and $q'$ are
not variations of one another if either:
\begin{itemize}
\item There exists an input symbol for which the output sets are disjoint.
\item There exists a pair of states which are not variations of one another
and that can be reached from $q$ and $q'$ under the same input
sequence.
\end{itemize}
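This propagation can be organised around a simple work-list, as in the following Python sketch (our own rendering of the steps above; it assumes states are numbered $0$ to $N-1$ and that $\lambda(q,i)$ is stored as a set of output valuations):
\begin{verbatim}
from collections import defaultdict

def variation_matrix(N, inputs, delta, lam):
    # mat[k][l] (k < l) is True iff q_k and q_l are *not* variations of each other
    mat = [[False] * N for _ in range(N)]
    preds = defaultdict(set)  # (k, l) -> pairs (m, n) with delta(m,i)=k, delta(n,i)=l
    for i in inputs:
        for m in range(N):
            for nn in range(N):
                if (m, i) in delta and (nn, i) in delta:
                    preds[(delta[(m, i)], delta[(nn, i)])].add((m, nn))
    work = []
    for k in range(N):        # step 2: outputs that cannot agree on some input
        for l in range(k + 1, N):
            if any((k, i) in delta and (l, i) in delta
                   and not (lam[(k, i)] & lam[(l, i)]) for i in inputs):
                mat[k][l] = True
                work.append((k, l))
    while work:               # step 3: propagate to predecessor pairs
        k, l = work.pop()
        for m, nn in preds[(k, l)] | preds[(l, k)]:
            m, nn = min(m, nn), max(m, nn)
            if m != nn and not mat[m][nn]:
                mat[m][nn] = True
                work.append((m, nn))
    return mat
\end{verbatim}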
The complexity of this algorithm is
$\mathrm{O}(|Q|^2 \cdot 2^{|I|})$
if we assume that the disjointness of the output sets can be checked in
constant time; see~\citet{abel.15.iccad}.
This assumption is not correct in general: testing disjointness
for cubes has a complexity linear in the number of output propositions.
On the other hand, testing disjointness for generalized Mealy machines
that use arbitrary sets of valuations has a complexity exponential in the
number of output propositions. This increased complexity is however
counterbalanced by the succinctness the use of arbitrary sets allows.
As an example, given $2m$ output propositions $o_0, \ldots, o_{2m-1}$,
consider the set of output valuations expressed as a disjunction of cubes
$\bigvee_{0 \le k < m} o_{2k}\,\overline{o_{2k+1}} \lor
\overline{o_{2k}}\, o_{2k+1}$. Exponentially many \emph{disjoint} cubes are
needed to represent this set. Thus, a non-deterministic Mealy machine
labeled by output cubes will incur an exponential number of computations
performed in linear time, whereas a generalized Mealy machine
will only perform a single test with
exponential runtime.
\subsubsection*{Computing a partial solution.}
The partial solution corresponds to a set of states such that none of them is
a variation of any other state in the set.
Thus, none of these states can belong to the same (variation) class.
The size of this set is therefore a lower bound for the number of states in
the minimal machine.
Finding the largest partial solution is an NP-hard problem; we therefore
use the greedy heuristic described in~\citet{abel.15.iccad}.
For each state $q$ of $M$, we count the number of states $q'$ such that
$q$ is not a variation of $q'$; call this number $\mathit{nvc}_q$.
We then successively add to the partial solution the states
that have the highest $\mathit{nvc}_q$ but are not variations
of any state already inserted.
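A Python sketch of this heuristic (assuming the variation matrix computed above, with only the entries above the diagonal filled in):
\begin{verbatim}
def partial_solution(mat):
    # greedily pick pairwise non-variation states, preferring states that are
    # in conflict (i.e. not variations) with many others
    N = len(mat)
    conflict = lambda q, r: mat[min(q, r)][max(q, r)]
    nvc = [sum(conflict(q, r) for r in range(N) if r != q) for q in range(N)]
    chosen = []
    for q in sorted(range(N), key=lambda q: -nvc[q]):
        if all(conflict(q, r) for r in chosen):
            chosen.append(q)
    return chosen
\end{verbatim}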
\subsubsection*{CEGAR approach to ensure the nonempty output condition.}\label{sec:cegar}
Assuming a solution satisfying the cover and closure constraints has already
been found, we then need to check if
said solution satisfies the nonempty output condition.
If this is indeed the case, we can then construct and return a minimal machine.
If the condition is not satisfied, we look for one or
more combinations of classes and input symbols such that
$\ensuremath{\mathrm{Out}}(C_k, i) = \emptyset$.
We add for the states in $C_k$ and the input symbol $i$
the constraints described in Section~\ref{encNonEmpty}, and for these states
and input symbols only. Then we check if the problem is still satisfiable.
If it is not, then we need to increase the number of classes to find
a valid solution. If it is, the solution either respects
condition~\eqref{cond_nonempt} and we can return a minimal machine, or
it does not and the process of selectively adding constraints is
repeated. Either way, this \emph{counter-example guided abstraction
refinement} (CEGAR) scheme ensures termination, as the problem is
either shown to be unsatisfiable or solved through iterative exclusion
of all violations of condition~\eqref{cond_nonempt}.
\subsection{Algorithm}
The optimizations described previously yield Algorithm~\ref{algoSAT1}.
\begin{algorithm}[t]
\KwData{a machine $M=\tup{I,O,Q,q_{\mathit{init}},\delta,\lambda}$}
\KwResult{a minimal specialization $M'$}
\tcc{Computing the variation matrix}
bool[][] $\mathrm{mat}$ $\gets$ isNotVariationOf($M$)\;
\tcc{Looking for a partial solution P}
set $P \gets$ extractPartialSol($\mathrm{mat}$)\;
clauses $\gets$ empty list\;
\tcc{Using the lower bound inferred from P}
\For{$n\gets \left|P\right| \KwTo \left|Q\right|-1$}{
addCoverCondition(clauses, $M$, $P$, $\mathrm{mat}$, $n$)\;
addClosureCondition(clauses, $M$, $P$, $\mathrm{mat}$, $n$)\;
\tcc{Solving the cover and closure conditions}
(sat, solution) $\gets$ satSolver(clauses)\;
\While{sat}{
\If{verifyNonEmpty($M$, solution)}{
\KwRet buildMachine($M$, solution)\;
}
\tcc{Adding the relevant nonemptiness clauses}
addNonemptinessCondition(clauses, $M$, solution)\;
(sat, solution) $\gets$ satSolver(clauses)\;
}
}
\tcc{If no solution has been found, return M}
\KwRet copyMachine($M$)\;
\caption{SAT-based minimization}
\label{algoSAT1}
\end{algorithm}
\subsubsection*{Further optimizations and comparison to \textsc{MeMin}.}
The proposed algorithm relies on the general approach outlined
in~\citet{abel.15.iccad}, as well as the SAT encoding for the cover and closure
conditions.
We find a partial solution by using a similar heuristic and adapt some
optimizations found in their source code, which are neither detailed
in their paper nor here due to a lack of space.
The main difference lies in the increased expressiveness of the input and output
symbols, which entails some significant changes.
In particular, we added the nonemptiness condition to guarantee
correctness, as well as a CEGAR-based implementation to maintain performance.
Other improvements mainly stem from a better usage of the partial solution.
For instance, each state $q$ of the partial solution is associated to
``its own'' class $C_j$. Since the matching literal $s_{q,j}$ is trivially true,
it can be omitted by replacing all its occurrences by true.
States belonging to the partial solution have other peculiarities that
can be leveraged to reduce the number of possible successor classes,
further reducing the amount of literals and clauses needed.
We therefore require fewer literals and clauses, trading
a more complex construction of the SAT problem
for a reduced memory footprint.
The impact of these improvements is detailed in Section~\ref{secBench}.
The Mealy machines described by~\citet{abel.15.iccad} come in two flavors:
one with an explicit initial state and a second one where all states are
considered to be possible initial states.
While our approach does require an explicit initial state, this choice does not
influence the resulting minimal machine
when all original states are reachable.
\section{Bisimulation with Output Assignment}
\label{secBisim}
We introduce in this section another approach tailored to our
primary use case, that is, efficient reduction of control strategies in the
context of reactive synthesis. This technique, based on the $\mathbin{\sqsubseteq}$
specialization relation, yields non-minimal but ``relatively small'' machines at
significantly reduced runtimes.
Given two states $q$ and $q'$ such that $q'\mathbin{\sqsubseteq} q$, one idea is to
restrict the possible outputs of $q$ to match those of $q'$.
Concretely, for all inputs $i\in \ensuremath{\mathbb{B}}^I$, we restrict $\lambda(q,i)$ to
its subset $\lambda(q',i)$; $q$ and $q'$ thus become
bisimilar, allowing us to merge them. In practice, rather than restricting
the output first then reducing bisimilar states to their quotient,
we instead directly build a machine that is minimal
with respect to $\mathbin{\sqsubseteq}$ where all transitions going to $q$
are redirected to $q'$.
Note that if two states $q$ and $q'$ are bisimilar,
then necessarily $q'\mathbin{\sqsubseteq} q$ and $q\mathbin{\sqsubseteq} q'$: therefore, both states will be
merged by our approach. As a consequence, the resulting machine is always
smaller than the bisimulation quotient of the original machine
(as shown in Section~\ref{secBench}).
\subsection{Reducing Machines with $\mathbin{\sqsubseteq}$}
Our algorithm builds upon the following theorem:
\begin{theorem}
\label{theoremSpecReduc}
Let $M = \tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be
an IGMM, and $r\colon Q\to Q$ be a mapping satisfying
$r(q)\mathbin{\sqsubseteq} q$. Define
$M' = \tup{I, O, Q', q_{\mathit{init}}', \delta', \lambda}$ as
an IGMM where $Q' = \mathit{r(Q)}$,
$q'_{\mathit{init}}=r(q_{\mathit{init}})$ and
$\delta' (q, i) = r(\delta (q, i))$ for all states $q$ and input $i$.
Then $M'$ is a specialization of $M$.
\end{theorem}
Intuitively, if a state $q$ is remapped to a
state $q'\mathbin{\sqsubseteq} q$, then the set of words $w$ that can be output for an
input $i$ is simply reduced to a subset of the original output.
The smaller the image $r(Q)$, the more significant the reduction performed on
the machine. Thus, to find a suitable function $r$,
we map each state $q$ to one of the
\emph{minimal elements} of the $\mathbin{\sqsubseteq}$ preorder, also called
the \emph{representative states}.
\begin{figure}[bt]
\begin{minipage}{.35\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton, yscale=1.164
]
\node at (0,2) (n46) {$\{4, 6\}$};
\node at (0,1) (n3) {$\{3\}$};
\node at (-1.5,0) (n2) {$\{2\}$};
\node at (-.5,0) (n0) {$\{0\}$};
\node at (.5,0) (n1) {$\{1\}$};
\node at (1.5,0) (n5) {$\{5\}$};
\draw [->] (n46) edge[bend right] (n2);
\draw [->] (n46) edge[bend right=15] (n0);
\draw [->] (n46) -- (n3);
\draw [->] (n46) edge[bend left=15] (n1);
\draw [->] (n46) edge[bend left] (n5);
\draw [->] (n3) edge[bend right=5] (n0);
\draw [->] (n3) edge[bend left=5] (n1);
\draw[dashed,thin,rounded corners=1mm] ($(n2.north west)+(-2mm,2mm)$) rectangle ($(n5.south east)+ (2mm,-2mm)$);
\node[below=-1.5mm] at (n2.south -| n3) {leaves};
\end{tikzpicture}
\caption{Specialization graph of the IGMM of Fig.~\ref{autExSatBase}}
\label{fig:graph_ex}
\end{minipage}
\hfill
\begin{minipage}{.3\textwidth}
\centering
\begin{tabular}{lcl}
$q$ && $\mathllap{r(}q\mathrlap{)}$ \\
\midrule
0 &$\to$ & 0 \\
1 &$\to$ & 1 \\
2 &$\to$ & 2 \\
3 &$\to$ & 1 \\
4 &$\to$ & 1 \\
5 &$\to$ & 5 \\
6 &$\to$ & 1 \\
\end{tabular}
\vspace*{-1mm}
\caption{Chosen representative mapping.\label{fig:autMapBisim}}
\end{minipage}
\hfill
\begin{minipage}{.33\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=.8cm and 1cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill=violet!50] (v0) {$0$};
\node[lstate,above=of v0, fill=orange!50] (v1) {$1$};
\node[lstate,right=of v0, fill=green!50] (v2) {$2$};
\node[lstate,above=of v2, fill=green!50] (v5) {$5$};
\end{scope}
\path[->]
(v0) edge[left] node {${a}/{\bar{z}}$} (v1)
(v0) edge[below] node {${\bar{a}}/{\bar{x}\bar{y}\bar{z}}$} (v2)
(v1) edge[loop left] node {${a}/{\bar{z}}$} (v1)
(v1) edge[loop above] node {${\bar{a}}/{z}$} (v1)
(v2) edge node[above right=-3pt] {${a}/\top$} (v1)
(v2) edge[right] node {${\bar{a}}/{z}$} (v5)
(v5) edge[above] node {${\bar{a}}/\top$} (v1)
(v5) edge[loop above] node {${a}/{z}$} (v5)
;
\end{tikzpicture}
\caption{IGMM obtained by reducing that of Fig.~\ref{autExSatBase}\label{autExBisim}}
\end{minipage}
\end{figure}
\begin{definition}[Specialization graph]
A \emph{specialization graph} of an IGMM
$M = \tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ is the
\emph{condensation graph} of the directed graph representing the
relation $\mathbin{\sqsubseteq}$: the vertices of the specialization graph
are sets that form a partition of $Q$ such that two states $q$ and
$q'$ belong to the same vertex if $q \mathbin{\sqsubseteq} q'$ and $q' \mathbin{\sqsubseteq} q$;
there is an edge
$\{q_1,q_2,...\} \longrightarrow \{q'_1,q'_2,...\}$ if and only if
$q'_i \mathbin{\sqsubseteq} q_j$ for some (or equivalently all) $i,j$. Note that
this graph is necessarily acyclic.
\end{definition}
Fig.~\ref{fig:graph_ex} shows the specialization graph associated to
the machine of Fig.~\ref{autExSatBase}.
\begin{definition}[Representative of a state]
Given two states $q$ and $q'$ of an IGMM, $q'$ is
a \emph{representative} of $q$ if, in the specialization graph of $M$, $q'$
belongs to a leaf that can be reached from the vertex containing $q$.
In other words, $q'$ is a representative of $q$ if $q' \mathbin{\sqsubseteq} q$ and $q'$ is
a minimal element of the $\mathbin{\sqsubseteq}$ preorder.
\end{definition}
Note that any state has at least one representative. In
Fig.~\ref{fig:graph_ex} we see that $0$ represents $0$,
$3$, $4$, and $6$. States $3$, $4$, and $6$ can be represented by
$0$ or $1$.
By picking one state in each leaf, we obtain a set of
representative states that cover all states of the IGMM. We then
apply Theorem~\ref{theoremSpecReduc} to a function $r$ that maps
each state to its representative in this cover. In Fig.~\ref{fig:graph_ex},
all leaves are singletons, so the set
$\{0,1,2,5\}$ contains representatives for all states. Applying
Th.~\ref{theoremSpecReduc} using $r$ from Fig.~\ref{fig:autMapBisim}
yields the machine shown in Fig.~\ref{autExBisim}. Note that while this
machine is smaller than the original, it is still bigger than the
minimal machine of Fig.~\ref{autExSatBaseMin}, as this approach does not
exploit the variation relation $\mathbin{\approx}$.
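The construction of Theorem~\ref{theoremSpecReduc}, combined with the choice of representatives just described, can be summarized by the following sketch, which assumes the (reflexive) specialization preorder is available as an explicit set of pairs; this is an illustrative pseudo-implementation only, not the code used in Spot:
\begin{verbatim}
def reduce_with_specialization(states, init, delta, spec):
    """spec contains the pairs (p, q) such that p specializes q (p below q in
    the preorder), including the reflexive pairs (q, q).  Every state is mapped
    to a representative among the minimal elements, and every transition is
    redirected through that mapping; outputs are kept unchanged."""
    below = {q: {p for (p, q2) in spec if q2 == q} for q in states}
    minimal = {q for q in states if all((q, p) in spec for p in below[q])}
    rep = {q: next(m for m in minimal if (m, q) in spec) for q in states}
    new_states = set(rep.values())
    # keep only the transitions of the representatives, retargeted through rep
    new_delta = {(q, i): rep[q2] for (q, i), q2 in delta.items() if q in new_states}
    return new_states, rep[init], new_delta
\end{verbatim}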
\subsection{Implementing $\mathbin{\sqsubseteq}$}
We now introduce an effective decision procedure for $q \mathbin{\sqsubseteq} q'$.
Note that $\mathbin{\sqsubseteq}$ can be defined recursively like a simulation
relation. Assuming, without loss of generality, that the IGMM is
input-complete, $\mathbin{\sqsubseteq}$ is the coarsest relation satisfying:
\[
q'\mathbin{\sqsubseteq} q \Longrightarrow \forall i\in \ensuremath{\mathbb{B}}^I, \begin{cases}
\lambda(q',i) \subseteq \lambda(q,i) \\
\delta(q',i) \mathbin{\sqsubseteq} \delta(q,i) \\
\end{cases}
\]
As a consequence, $\mathbin{\sqsubseteq}$ can be decided using any technique that is suitable
for computing simulation
relations~\citet{henzinger.95.focs,etessami.00.concur}.
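For intuition only, the coarsest relation satisfying this implication can be computed by a naive greatest-fixpoint iteration over explicit sets of output valuations (the names below are illustrative; the BDD-based signatures described next are what our implementation actually uses):
\begin{verbatim}
def specialization(states, inputs, delta, lam):
    """Greatest fixpoint computation of the specialization preorder.
    lam[q, i] is the *set* of output valuations allowed from q under input i,
    delta[q, i] its successor.  Start from all pairs and remove (p, q) as long
    as some input violates lam[p, i] <= lam[q, i] or the successor condition."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for i in inputs:
                if not (lam[p, i] <= lam[q, i]) or (delta[p, i], delta[q, i]) not in rel:
                    rel.discard((p, q))
                    changed = True
                    break
    return rel
\end{verbatim}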
Our implementation relies on a straightforward adaptation of the technique
of signatures described by~\citet[][Sec.~4.2]{babiak.13.spin}: for
each state $q$, we compute its \emph{signature} $\mathrm{sig}(q)$, that
is, a Boolean formula (represented as a BDD) encoding the outgoing
transitions of that state such that
$\mathrm{sig}(q) \Rightarrow \mathrm{sig}(q')$ if and only if $q\mathbin{\sqsubseteq} q'$.
Using these signatures, it becomes easy to build the
\emph{specialization graph} and derive a remapping function $r$.
Note that, even if $\mathbin{\sqsubseteq}$ can be computed like a simulation, we do
not use it to build a bisimulation quotient. The remapping applied in
Th.~\ref{theoremSpecReduc} does not correspond to the quotient of $M$ by
the equivalence relation induced by $\mathbin{\sqsubseteq}$.
\section{Benchmarks}\label{secBench}
The two approaches described in Sections \ref{secMin} and \ref{secBisim}
have been implemented within Spot
2.10~\citet{duret.16.atva2}, a toolbox for $\omega$-automata
manipulation, and used in our SyntComp'21
submission~\citet{renkin.21.synt}. The following benchmarks
are based on a development version of Spot\footnote{For instructions
to reproduce, see \url{https://www.lrde.epita.fr/~philipp/forte22/}}
that features efficient variation checks (verifying whether $q\mathbin{\approx} q'$)
thanks to an improved representation of cubes.
We benchmark the two proposed approaches against \textsc{MeMin},
against a simple bisimulation-based approach, and against one another.
The \textsc{MeMin} tool has already been shown~\citet{abel.15.iccad} to
be superior to existing tools like
\textsc{Bica}~\citet{pena.99.cadics},
\textsc{Stamina}~\citet{rho.94.cadics}, and
\textsc{Cosme}~\citet{alberto.13.ocs}; we are not aware of more recent
contenders. For this reason, we only compare our approaches to
\textsc{MeMin}.
In a similar manner to~\citet{abel.15.iccad}, we use the ISM
benchmarks~\citet{kam1994fully} as well as the MCNC benchmark
suite~\citet{yang1991logic}. These benchmarks share a severe drawback: they
only feature very small instances. \textsc{MeMin} is able to solve any of
these instances in less than a second. We therefore extend the set of
benchmarks with our main use-cases: Mealy machines corresponding to control
strategies obtained from SYNTCOMP LTL specifications~\citet{jacobs20205th}.
As mentioned in Section \ref{secCompToReg}, \textsc{MeMin} processes Mealy
machines, encoded using the KISS2 input format~\citet{yang1991logic},
whose output can be chosen from a cube. However, the IGMM formalism we promote
allows an arbitrary set of output valuations instead.
This is particularly relevant for the SYNTCOMP benchmark, as the LTL
specifications from which the sample's Mealy machines are derived often fail to fully
specify the output. In order to (1) show the benefits of the generalized
formalism while (2) still allowing comparisons with \textsc{MeMin}, we
prepared two versions of each SYNTCOMP input: the ``full'' version features
arbitrary sets of output valuations that cannot be processed by \textsc{MeMin},
while in the ``cube'' version said sets have been
replaced by the first cube produced by the Minato algorithm~\citet{minato.92.sasimi}
on the original output set. The ISM and MCNC benchmarks, on the other hand,
already use a single output cube in the first place.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\resizebox {1.\textwidth} {!}
{
\input{tot_time.tex}
}
\caption{Log-log plot of runtimes. The legend $a/b$ stands for
$a$ cases above diagonal, and $b$ below.}
\label{fig_tottime}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\centering
\resizebox {1.\textwidth} {!}
{
\input{n_lit_clause.tex}
}
\caption{Comparison of the number of literals and clauses in the encodings.}
\label{fig_nclauses}
\end{minipage}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{lccccc|ccc@{~}ccc|ccc@{~}cc}
& & & & & & & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{orig}}$} & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{min}}$} & & & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{orig}}$} & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{min}}$} \\
& >(1) & >(2) & >(3) & >(4) & & & avg. & md. & avg. & md. & & & avg. & md. & avg. & md. \\
\cmidrule{1-5} \cmidrule{8-11} \cmidrule{14-17}
original & 114 & 304 & 271 & 314 & & & 1.00 & 1.0 & 6.56 & 1.0 & & & 1.00 & 1.00 & 12.23 & 1.77 \\
(1) bisim (full) & \phantom{000} & 249 & 214 & 275 & & & 0.94 & 1.0 & 1.85 & 1.0 & & & 0.88 & 1.00 & \phantom{0}2.72 & 1.50 \\
(2) bisi\rlap{m w/ o.a. (full)} & \phantom{000} & \phantom{00}0 & \phantom{0}68 & \phantom{0}84 & & & 0.83 & 1.0 & 1.55 & 1.0 & & & 0.66 & 0.67 & \phantom{0}2.10 & 1.00 \\
(3) \textsc{MeMin} (minima\rlap{l cube)} & \phantom{000} & \phantom{0}74 & \phantom{00}0 & \phantom{0}77 & & & 0.81 & 1.0 & 1.13 & 1.0 & & & 0.63 & 0.69 & \phantom{0}1.27 & 1.00 \\
(4) SAT (full) & \tikzmark{b1}\phantom{000} & \phantom{00}0 & \phantom{00}0 & \phantom{00}0 & & & 0.77 & 1.0 & 1.00 & 1.0\tikzmark{b2} & & & \tikzmark{b3}0.54 & 0.56 & \phantom{0}1.00 & 1.00\tikzmark{b4} \\
\\[1em]
\end{tabular}
\end{center}
\begin{tikzpicture}[overlay,remember picture]
\draw [decoration={brace},decorate,thick]
($(b2)+(0,-2mm)$) -- node [below=2pt,align=center] {all 634 instances\\without timeout} ($(b1)+(0,-2mm)$) ;
\draw [decoration={brace},decorate,thick]
($(b4)+(0,-2mm)$) -- node [below=2pt,align=center] {314 non-minimal\\\llap{instan}ces without timeout} ($(b3)+(0,-2mm)$) ;
\end{tikzpicture}%
\caption{Statistics about our three reduction algorithms. The leftmost pane
counts the number of instances where algorithm (y) yields a smaller result than
algorithm (x); as an example,
bisimulation with output assignment (2) outperforms
standard bisimulation (1) in 249 cases. The middle pane presents mean
(avg.) and median (md.) size ratios relative to the
original size and the minimal size of the sample machines.
The rightmost pane presents similar statistics while ignoring
all instances that were already minimal in the first place.\label{tab:stats}}
\end{table}
Figure~\ref{fig_tottime} displays a log-log plot comparing our different methods to
\textsc{MeMin}, using only the ``cube'' instances.\footnote{A 30 minute
timeout was enforced for all instances. The benchmarks were run on an
Asus G14 with a Ryzen 4800HS CPU with 16GB of RAM and no swap.} The
label ``\emph{bisim. w/ o.a.}'' refers to the approach outlined in
Section~\ref{secBisim}, ``\emph{bisim.}'', to a simple bisimulation
quotient, and ``\emph{SAT}'', to the approach of Section~\ref{secMin}.
Points on the black diagonal stand for cases where \textsc{MeMin} and the
method being tested had equal runtime; cases above this line favor
\textsc{MeMin}, while cases below favor the aforementioned methods.
Points on the dotted line at the edges of the figure represent timeouts.
Only \textsc{MeMin} fails this way, on 10 instances.
Figure~\ref{fig_nclauses} compares the maximal number of literals and clauses
used to perform the SAT-based minimization by \textsc{MeMin} or by our
implementation. These two figures only describe ``cube'' instances, as
\textsc{MeMin} needs to be able to process the sample machines.
To study the benefits of our IGMM model's generic outputs, Table~\ref{tab:stats}
compares the relative reduction ratios achieved by the various methods
w.r.t. other methods as well as the original and minimal size of the sample
machines. We use the ``full'' inputs everywhere with the exception of
\textsc{MeMin}.
\subsubsection{Interpretation.}
Reduction via bisimulation solves all instances and is by far the
fastest method (Fig.~\ref{fig_tottime}), but it is also the coarsest, with
a mere $0.94$ reduction ratio (Table~\ref{tab:stats}).
Bisimulation with output assignment achieves a better reduction ratio of $0.83$, very
close to \textsc{MeMin}'s $0.81$.
In most cases, the proposed SAT-based approach remains significantly slower than
the approaches based on bisimulation (Fig.~\ref{fig_tottime}). Our SAT-based
algorithm is sometimes slower than \textsc{MeMin}'s, as the model's increased
expressiveness requires a more complex method. However,
improving the use of partial solutions and increasing the expressiveness
of the input symbols significantly reduce the size of the encoding of
the intermediate SAT problems featured in our method (Fig.~\ref{fig_nclauses})
and hence achieve a lower memory footprint.
Points on the horizontal line at the bottom of Figure~\ref{fig_nclauses}
correspond to instances that have already been proven minimal,
since the partial solution is equal to the entire set of states:
in these cases, no further reduction is required.
Finally, the increased expressiveness of our model results in
significantly smaller minimal machines, as shown by the $1.27$
reduction ratio of \textsc{MeMin}'s cube-based machines compared to
the minimization of generic IGMMs derived from the same specification.
There are also 74 cases where this superior expressiveness allows the
bisimulation with output assignment to beat \textsc{MeMin}.
\section{Conclusion}\label{secConcl}
We introduced a generalized model for incompletely specified Mealy
machines, whose output is an arbitrary choice between multiple
possible valuations.
We have presented two reduction techniques on this model,
and compared them against the state-of-the-art minimization tool
\textsc{MeMin} (where the output choices are restricted to a cube).
The first technique is a SAT-based approach inspired by {\sc
MeMin}~\citet{abel.15.iccad} that yields a minimal machine. Thanks to
this generalized model and an improved use of the partial solution, we
use substantially fewer clauses and literals.
The second technique yields a reduced yet not necessarily minimal
machine by relying on the notion of state specialization. Compared
to the SAT-based approach, this technique offers a good compromise
between the time spent performing the reduction, and the actual
state-space reduction, especially for the cases derived from SYNTCOMP
from which our initial motivation originated.
Both techniques are implemented in Spot 2.10. They have been used in
our entry to the 2021 Synthesis Competition~\citet{renkin.21.synt}.
Spot comes with Python bindings that make it possible to experiment
with these techniques and compare their respective effects\footnote{See: \url{https://spot.lrde.epita.fr/ipynb/synthesis.html}.}.
\bibliographystyle{abbrvnat}
|
\section{Introduction}
The supersymmetric version of the seesaw mechanism is an attractive candidate for physics beyond the Standard Model. On the one hand, it includes the seesaw mechanism, which postulates the existence of right-handed neutrino fields and has become the most popular framework to account for neutrino masses. The seesaw is able to accommodate the experimental data on neutrino masses and mixing \cite{Yao:2006px}, explaining naturally the small neutrino mass scale. On the other hand, it embraces low energy supersymmetry, with its rich phenomenology and its well known virtues. In fact, the minimal supersymmetric Standard Model solves the hierarchy problem, achieves the unification of the gauge couplings, and contains a dark matter candidate: the lightest supersymmetric particle.
The lightest sneutrino is a new dark matter candidate present in the supersymmetric seesaw. Being a mixture of left-handed and right-handed sneutrino fields, the lightest sneutrino will have different properties depending on its composition in terms of interaction eigenstates. In general, three different kinds of sneutrinos can be envisioned: a dominantly left-handed one, a mixed sneutrino, or a dominantly right-handed one. A dominantly left-handed sneutrino is not a good dark matter candidate: such sneutrinos are ruled out by experimental searches \cite{Ahlen:1987mn} and tend to have too small a relic density \cite{Falk:1994es}. A mixed sneutrino can be compatible with the observed dark matter density as well as with present bounds from direct searches \cite{ArkaniHamed:2000bq,Arina:2007tm}. The required mixing is obtained at the expense of a large neutrino trilinear coupling, which is not allowed in typical models of supersymmetry breaking. A dominantly right-handed sneutrino is the final possibility, the one we will be concerned with throughout this paper. A right-handed sneutrino, being essentially sterile, interacts with other particles mainly through the neutrino Yukawa coupling. Could such a sterile sneutrino account for the observed dark matter density?
Gopalakrishna, de Gouvea, and Porod, in \cite{Gopalakrishna:2006kr}, studied that possibility within the same scenario we are considering here. They showed that self-annihilations of right-handed sneutrinos as well as co-annihilations with other particles are too weak to keep the sneutrinos in equilibrium with the thermal plasma in the early Universe. They also found that the production of sneutrinos in the decay of other supersymmetric particles gives too large a contribution to the relic density. They concluded, therefore, that in the standard cosmological model right-handed sneutrinos cannot explain the dark matter of the Universe.
Even though generally valid, that conclusion is not guaranteed if the mass difference between the Higgsino and the sneutrino is small. In that case, inverse decays, such as $\tilde N+L\to \tilde H$, contribute to the annihilation of sneutrinos and therefore to the reduction of the sneutrino relic density. Such a possibility was not taken into account in \cite{Gopalakrishna:2006kr}. In this paper, we will focus on models with a Higgsino NLSP and show that inverse processes cannot be neglected, for they suppress the sneutrino relic density by several orders of magnitude. Then, we will reexamine whether the sterile sneutrino can explain the dark matter of the Universe in the standard cosmological model.
In the next section we briefly review the supersymmetric seesaw model and show that sterile sneutrinos arise naturally in common scenarios of supersymmetry breaking. Then, in section \ref{sec:3}, we will include inverse decays into the Boltzmann equation that determines the sneutrino abundance. It is then shown that inverse decays are indeed relevant; they cause a significant reduction of the relic density. In section \ref{sec:4}, we study the relic density as a function of the neutrino Yukawa coupling, the sneutrino mass, and the Higgsino-sneutrino mass difference. There, we will obtain our main result: the suppression effect of inverse decays, though important, is not enough to bring the sneutrino relic density down within the observed range. In the final section we will review our study and present our conclusions.
\section{The model}\label{sec:2}
We work within the supersymmetric version of the seesaw mechanism, where the field content of the MSSM is supplemented with a right-handed neutrino superfield $N$ per generation. The superpotential then reads
\begin{equation}
W=W_{MSSM}+\frac12M_N^{IJ}N^IN^J+Y_\nu^{IJ} H_uL^IN^J
\label{superp}
\end{equation}
where, as usual, we have assumed R-parity conservation and renormalizability. $M_N$ is the Majorana mass matrix of right-handed neutrinos and $Y_\nu$ is the matrix of neutrino Yukawa couplings. Without loss of generality $M_N$ can be chosen to be real and diagonal. $Y_\nu$ is in general complex but we will assume, for simplicity, that it is real. $M_N$ and $Y_\nu$ are new free parameters of the model; they are to be determined or constrained from experimental data.
After electroweak symmetry breaking, the above superpotential generates the following neutrino mass terms
\begin{equation}
\mathcal{L}_{\nu\,mass}=-v_uY_\nu \nu N-\frac12M_NNN+h.c.
\end{equation}
If $M_N\gg v_u Y_\nu$, the light neutrino mass matrix, $m_\nu$, is then given by the seesaw formula
\begin{equation}
m_\nu=-m_DM_N^{-1}m_D^T,
\label{seesaw}
\end{equation}
with $m_D=v_uY_\nu$ being the Dirac mass. Since $m_\nu$ is partially known from neutrino oscillation data, equation (\ref{seesaw}) is actually a constraint on the possible values of $Y_\nu$ and $M_N$. It is a weak constraint though; and it allows $M_N$ to vary over many different scales. In this paper we consider what is usually known as a seesaw mechanism at the electroweak scale. That is, we assume that $M_N\sim 100$ GeV. Thus, since the neutrino mass scale is around $m_\nu\sim 0.1$ eV, the typical neutrino Yukawa coupling is
\begin{equation}
Y_\nu\sim 10^{-6}\,,
\end{equation}
or around the same order of magnitude as the electron Yukawa coupling. Notice that this value of $Y_\nu$ is a consequence of the seesaw mechanism at the electroweak scale. In other frameworks, such as Dirac neutrinos or seesaw at much higher energies, $Y_\nu$ takes different values. We will not consider such possibilities here.
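For concreteness, the estimate $Y_\nu\sim10^{-6}$ quoted above follows directly from the seesaw relation (\ref{seesaw}), taking $v_u\simeq 174$~GeV as an illustrative value:
\[
Y_\nu\sim\frac{\sqrt{m_\nu M_N}}{v_u}\simeq\frac{\sqrt{0.1\,\mathrm{eV}\times 100\,\mathrm{GeV}}}{174\,\mathrm{GeV}}=\frac{10^{5}\,\mathrm{eV}}{1.74\times 10^{11}\,\mathrm{eV}}\approx 6\times 10^{-7}.
\]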
The new soft-breaking terms of the supersymmetric seesaw model are given by
\begin{equation}
\mathcal{L}_{soft}=-(m_N^2)^{IJ}\tilde N_R^{*I}\tilde N_R^J+\left[(m_B^2)^{IJ}\tilde N_R^I\tilde N_R^J-A_\nu^{IJ}h_u\tilde L^I\tilde N_R^J+h.c.\right]\,.
\label{lsoft}
\end{equation}
They include sneutrino mass terms as well as a trilinear interaction term. For simplicity, we will assume that $m_N^2$, $m_B^2$, and $A_\nu$ are real.
To study the sneutrino mass terms resulting from (\ref{superp}) and (\ref{lsoft}) it is convenient to suppress the generation structure; that is, to work with one fermion generation only. It is also useful to introduce the real fields $\tilde\nu_1$, $\tilde\nu_2$, $\tilde N_1$ and $\tilde N_2$ according to the relations
\begin{eqnarray}
\tilde\nu_L=\frac{1}{\sqrt2}\left(\tilde \nu_1+i\tilde \nu_2\right)\,,\\
\tilde N_R=\frac{1}{\sqrt2}\left(\tilde N_1+ i \tilde N_2\right).
\end{eqnarray}
Indeed, in the basis $(\tilde \nu_1,\tilde N_1,\tilde \nu_2,\tilde N_2)$ the sneutrino mass matrix takes a block diagonal form
\begin{equation}
\mathcal{M}_{\tilde\nu}=\left(\begin{array}{cccc} m_{LL}^2 & m_{RL}^{2}+m_DM_N & 0 &0\\
m_{RL}^2+m_D M_N & m_{RR}^2-m_B^2 & 0 &0 \\
0 & 0& m_{LL}^2 & m_{RL}^{2}-m_DM_N\\
0 & 0& m_{RL}^2-m_DM_N & m_{RR}^2+m_B^2\end{array}\right)
\label{eq:mv}
\end{equation}
where $m_{LL}^2=m_{\tilde L}^2+m_D^2+0.5m_Z^2\cos2\beta$, $m_{RR}^2=M_N^2+m_N^2+m_D^2$, and $m_{RL}^2=-\mu v_dY_\nu+v_uA_\nu$.
This matrix can be diagonalized by a unitary rotation with a mixing angle given by
\begin{equation}
\tan 2\theta_{1,2}^{\tilde \nu}=\frac{2(m_{RL}^2\pm m_DM_N)}{m_{LL}^2-(m_{RR}^2\mp m_B^2)},
\label{eq:mix}
\end{equation}
where the top sign corresponds to $\theta_1$ --to the mixing between $\tilde \nu_1$ and $\tilde N_1$-- whereas the bottom sign corresponds to $\theta_2$.
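As a quick numerical illustration of Eqs.~(\ref{eq:mv}) and~(\ref{eq:mix}), one can diagonalize the first $2\times2$ block for representative electroweak-scale inputs (the numbers below are illustrative only and are not a fit to any model point):
\begin{verbatim}
import numpy as np

# Illustrative electroweak-scale inputs, in GeV (not a fit to any model point)
m_LL2 = 300.0**2                 # left-handed soft mass squared
m_RR2 = 100.0**2                 # dominated by M_N^2
m_B2  = 10.0**2                  # lepton-number violating B-term
M_N   = 100.0
Y_nu, A_nu, v_u = 1e-6, 1e-4, 174.0
m_D, m_RL2 = v_u * Y_nu, v_u * A_nu

# CP-even (upper-left) 2x2 block of the sneutrino mass matrix
block = np.array([[m_LL2, m_RL2 + m_D * M_N],
                  [m_RL2 + m_D * M_N, m_RR2 - m_B2]])
eigval, eigvec = np.linalg.eigh(block)       # eigenvalues in ascending order

light_mix = abs(eigvec[0, 0])                # left-handed admixture of the light state
theta = 0.5 * np.arctan2(2 * (m_RL2 + m_D * M_N), m_LL2 - (m_RR2 - m_B2))
print(np.sqrt(eigval))                       # mass eigenvalues in GeV
print(light_mix, abs(np.sin(theta)))         # both agree and are tiny: a sterile lightest state
\end{verbatim}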
Since $\mathcal{M}_{\tilde\nu}$ is independent of gaugino masses, there is a region in the supersymmetric parameter space where the lightest sneutrino, obtained from (\ref{eq:mv}), is the lightest supersymmetric particle (LSP) and consequently the dark matter candidate. That is the only region we will consider in this paper.
The lightest sneutrino is a mixture of left-handed and right-handed sneutrino fields. Depending on its gauge composition, three kinds of sneutrinos can be distinguished: a dominantly left-handed sneutrino, a mixed sneutrino, and a dominantly right-handed sneutrino. A dominantly left-handed sneutrino is not a good dark matter candidate for it is already ruled out by direct dark matter searches. These sneutrinos also have large interaction cross sections and tend to annihilate efficiently in the early universe, typically yielding too small a relic density. A mixed sneutrino may be a good dark matter candidate. By adjusting the sneutrino mixing angle, one can simultaneously suppress its annihilation cross section, so as to obtain the right relic density, and the sneutrino-nucleon cross section, so as to evade present constraints from direct searches. A detailed study of models with mixed sneutrino dark matter was presented recently in \cite{Arina:2007tm}. A major drawback of these models is that the required mixing may be incompatible with certain scenarios of supersymmetry breaking, such as gravity mediation. The third possibility, the one we consider, is a lightest sneutrino which is predominantly right-handed. That is, a \emph{sterile sneutrino}.
A sterile sneutrino is actually unavoidable in supersymmetry breaking scenarios where the trilinear couplings are proportional to the corresponding Yukawa matrices, such as the constrained Minimal Supersymmetric Standard Model (CMSSM)\cite{Yao:2006px}. In these models
\begin{equation}
A_\nu=a_\nu Y_\nu m_{soft}
\end{equation}
where $m_{soft}\sim 100$ GeV is a typical supersymmetry breaking mass and $a_\nu$ is an order one parameter. Because $Y_\nu$ is small, $A_\nu$ is much smaller than the electroweak scale,
\begin{equation}
A_\nu\sim 100 \mathrm{keV}\,.
\end{equation}
Hence, from equation (\ref{eq:mix}), the mixing angle between $\tilde \nu_i$ and $\tilde N_i$ is also very small
\begin{equation}
\sin\theta_i\sim 10^{-6}\,.
\end{equation}
Thus, we see how in these models the small $Y_\nu$ translates into a small trilinear coupling $A_\nu$ that in turn leads to a small mixing angle --to a sterile sneutrino. Sterile sneutrinos are also expected in other supersymmetry breaking mechanisms that yield a small $A_\nu$ at the electroweak scale.
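Parametrically, with soft masses of order $100$~GeV and $v_u\simeq174$~GeV (illustrative values), Eq.~(\ref{eq:mix}) gives
\[
\sin\theta_i\approx\frac{m_{RL}^2+m_DM_N}{m_{LL}^2-m_{RR}^2}\sim\frac{v_uA_\nu}{m_{soft}^2}\simeq\frac{174\,\mathrm{GeV}\times 100\,\mathrm{keV}}{(100\,\mathrm{GeV})^{2}}\approx 2\times10^{-6}.
\]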
Since the mixing angle is small, we can extract the sterile sneutrino mass directly from (\ref{eq:mv}). It is given by
\begin{equation}
m_{\tilde N}^2=m_{RR}^2-m_{B}^2\approx M_N^2+m_N^2-m_B^2
\end{equation}
where we have neglected the Dirac mass term in the last expression. $m_{\tilde N}$ is thus expected to be at the electroweak scale. In the following, we will consider $m_{\tilde N}=m_{LSP}$ as a free parameter of the model.
To summarize, the models we study consist of the MSSM plus an electroweak scale seesaw mechanism that accounts for neutrino masses. Such models include a new dark matter candidate: the lightest sneutrino. In common scenarios of supersymmetry breaking, the lightest sneutrino, which we assume to be the dark matter candidate, turns out to be a dominantly right handed sneutrino, or a sterile sneutrino. In the following, we will examine whether such a \emph{sterile} sneutrino may account for the dark matter of the Universe.
\section{The $\tilde N$ relic density}\label{sec:3}
To determine whether the sterile sneutrino can explain the dark matter of the universe we must compute its relic density $\Omega_{\tilde N}h^2$ and compare it with the observed value $\Omega_{DM}h^2=0.11$~\cite{Dunkley:2008ie}. This question was already addressed in \cite{Gopalakrishna:2006kr}. They showed that, due to their weak interactions, sneutrinos are unable to reach thermal equilibrium in the early Universe. In fact, both the self-annihilation and the co-annihilation cross section are very suppressed. They also noticed that sneutrinos could be produced in the decays of other supersymmetric particles and found that such decay contributions lead to a relic density several orders of magnitude larger than observed. Thus, they concluded, sterile sneutrinos can only be non-thermal dark matter candidates.
That conclusion was drawn, however, without taking into account inverse decay processes. We now show that if the Higgsino-sneutrino mass difference is small\footnote{If it is large the results in \cite{Gopalakrishna:2006kr} would follow.}, inverse decays may suppress the sneutrino relic density by several orders of magnitude. To isolate this effect, only models with a Higgsino NLSP are considered in the following. We then reexamine the possibility of having a sterile sneutrino as a thermal dark matter candidate within the standard cosmological model.
In the early Universe, sterile sneutrinos are mainly created through the decay $\tilde H\to \tilde N+L$, where $\tilde H$ is the Higgsino and $L$ is the lepton doublet. Alternatively, using the mass-eigenstate language, one may say that sneutrinos are created in the decay of neutralinos ($\chi^0\to \tilde N +\nu$) and charginos ($\chi^\pm\to \ell^\pm +\tilde N$). These decays are all controlled by the neutrino Yukawa coupling $Y_\nu$. Other decays, such as $\tilde\ell\to \tilde N f f'$ via $W^\pm$, also occur but the Higgsino channel dominates. Regarding annihilation processes, the most important one is the inverse decay $\tilde N+L\to\tilde H$. In fact, the sneutrino-sneutrino annihilation cross section is so small that such process never reaches equilibrium. And a similar result holds for the sneutrino coannihilation cross section. We can therefore safely neglect annihilations and coannihilations in the following. Only decays and inverse decays contribute to the sneutrino relic density.
The Boltzmann equation for the sneutrino distribution function $f_{\tilde N}$ then reads:
\begin{align}
\label{boltzmann}
\frac{\partial f_{\tilde N}}{\partial t}-H\frac{|\mathbf{p}|^2}{E}\frac{\partial f_{\tilde N}}{\partial E}=\frac{1}{2 E_{\tilde N}}\int & \frac{d^3p_L}{(2\pi)^3 2E_L}\frac{d^3p_{\tilde H}}{(2\pi)^3 2E_{\tilde H}}|\mathcal{M}_{\tilde H\to L\tilde N}|^2\\ \nonumber
& (2\pi)^4 \delta^4(p_{\tilde H}-p_L-p_{\tilde N})\left[f_{\tilde H}-f_L f_{\tilde N}\right]
\end{align}
where $H$ is the Hubble parameter and $f_{\tilde H}$, $f_{L}$ respectively denote the $\tilde H$ and $L$ distribution functions. Other dark matter candidates, including the neutralino, have large elastic scattering cross sections with the thermal plasma that keep them in \emph{kinetic} equilibrium during the freeze-out process. Their distribution functions are then proportional to those in \emph{chemical} equilibrium and the Boltzmann equation can be written as an equation for the number density instead of the distribution function \cite{Gondolo:1990dk}. For sterile sneutrinos, on the contrary, the elastic scattering is a slow process --being suppressed by the Yukawa coupling-- and kinetic equilibrium is not guaranteed. Hence, we cannot write (\ref{boltzmann}) as an equation for the sneutrino number density $n_{\tilde N}$ and must instead solve it for $f_{\tilde N}$.
If the condition $f_{\tilde N}\ll 1$ were satisfied, inverse processes could be neglected and a simple equation relating the sneutrino number density to the Higgsino number density could be obtained. That is the case, for instance, in supersymmetric scenarios with Dirac mass terms only \cite{Asaka:2005cn}. In such models, the neutrino Yukawa coupling is very small, $Y_\nu\sim 10^{-13}$, and sneutrinos never reach chemical equilibrium. But for the range of parameters we consider, $Y_\nu\sim 10^{-6}$, the condition $f_{\tilde N}\ll 1$ is not satisfied.
Since equation (\ref{boltzmann}) depends also on the Higgsino distribution function, one may think that it is necessary to write the Boltzmann equation for $f_{\tilde H}$ and then solve the resulting system for $f_{\tilde N}$ and $f_{\tilde H}$. Not so. Higgsinos, due to their gauge interactions, are kept in thermal equilibrium --by self-annihilation processes-- until low temperatures, when they decay into $\tilde N+L$ through the $Y_\nu$ suppressed interaction. It is thus useful to define a \emph{freeze-out} temperature, $T_{f.o.}$, as the temperature at which these two reaction rates become equal. That is,
\begin{equation}
n_{\tilde H}\langle\sigma_{\tilde H\tilde H}v\rangle|_{T_{f.o.}}=\Gamma(\tilde H\to \tilde N +L)|_{T_{f.o.}}\,,
\label{eq:fo}
\end{equation}
where $n_{\tilde H}$ is the Higgsino number density and $\langle\sigma_{\tilde H\tilde H}v\rangle$ is the thermal average of the Higgsino-Higgsino annihilation rate into light particles. $T_{f.o.}$ marks the boundary between two different regimes. For $T>T_{f.o.}$ Higgsinos are in equilibrium and annihilate efficiently. The Higgsinos produced in the inverse decays, in particular, easily annihilate with thermal Higgsinos into light particles. The inverse process is thus effective. In contrast, for $T<T_{f.o.}$ Higgsinos mostly decay into the LSP and inverse decays cannot deplete the sneutrino abundance. The Higgsinos produced in inverse decays simply decay back into sneutrinos: $\tilde N+L\to\tilde H\to \tilde N+L$.
Below $T_{f.o.}$, therefore, the total number of sneutrinos plus Higgsinos remains constant. Thus, we only need to integrate equation (\ref{boltzmann2}) until $T_{f.o.}$, a region in which Higgsinos are in equilibrium.
Assuming a Maxwell-Boltzmann distribution, $f(E)\propto \exp(-E/T)$, for Higgsinos and leptons and neglecting lepton masses, the integrals in (\ref{boltzmann}) can be evaluated analytically to find
\begin{equation}
\frac{\partial f_{\tilde N}}{\partial t}-H\frac{|\mathbf{p}|^2}{E}\frac{\partial f_{\tilde N}}{\partial E}=\frac{|\mathcal{M}_{\tilde H\to L\tilde N}|^2 T}{16\pi E_{\tilde N}|\mathbf{p}_{\tilde N}|}\left(e^{-E_{\tilde N}/T} -f_{\tilde N}\right) \left[e^{-E_{-}/T}-e^{-E_{+}/T}\right]
\label{boltzmann2}
\end{equation}
where
\begin{align}
E_\pm&=\frac{m_{\tilde H}^2-m_{\tilde N}^2}{2m_{\tilde N}^2}(E_{\tilde N}\pm|\mathbf{p}_{\tilde N}|).
\end{align}
In the following we will solve equation (\ref{boltzmann2}) to obtain the sneutrino abundance, $Y_{\tilde N}=n_{\tilde N}/s$, and the sneutrino relic density, $\Omega_{\tilde N}h^2$. The sneutrino abundance today will be given by
\begin{equation}
Y_{\tilde N}|_{T_0}=Y_{\tilde N}|_{T_{f.o.}}+Y_{\tilde H}|_{T_{f.o.}},
\end{equation}
where the second term takes into account that the Higgsinos present at freeze-out will decay into sneutrinos. The sneutrino relic density today is then obtained as
\begin{equation}
\Omega_{\tilde N}h^2=2.8\times10^{10} Y_{\tilde N} \frac{m_{\tilde N}}{100\mathrm{GeV}}.
\end{equation}
The only parameters that enter directly in the computation of the sneutrino relic density are the Yukawa coupling, the sneutrino mass, and the Higgsino mass, which we take to be given by the $\mu$ parameter --$m_{\tilde H}=\mu$. All other supersymmetric particles besides $\tilde N$ and $\tilde H$ are assumed to be heavier, with $m_{susy}\sim 1$ TeV. To determine the freeze-out temperature, equation (\ref{eq:fo}), we also need to know the Higgsino annihilation rate into Standard Model particles. We use the DarkSUSY package \cite{Gondolo:2004sc} to extract that value. Regarding the initial conditions, we assume that at high temperatures ($T\gg m_{\tilde H}$) the sneutrino distribution function is negligible $f_{\tilde N}\sim 0$. Finally, we assume that the early Universe is described by the standard cosmological model.
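The structure of this computation can be illustrated by the following self-contained sketch. It makes several simplifications that are assumptions of the sketch only: radiation domination with a constant $g_*$, a hand-picked freeze-out temperature instead of solving Eq.~(\ref{eq:fo}) with DarkSUSY, an assumed constant value for the squared matrix element, a single sneutrino degree of freedom, and the omission of the Higgsino abundance that must still be added at freeze-out. It is meant to show the shape of the calculation, not to reproduce the figures below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative inputs (GeV).  MSQ stands in for the squared decay matrix
# element |M|^2, whose precise value is not specified here; the form below
# is an assumption of this sketch only.
m_N, m_H, Y_nu = 100.0, 120.0, 1e-6
MSQ = 2 * Y_nu**2 * (m_H**2 - m_N**2)
g_star, M_Pl = 90.0, 1.22e19
T_start, T_fo = 1000.0, 10.0   # T_fo should really come from the freeze-out condition

def hubble(T):
    return 1.66 * np.sqrt(g_star) * T**2 / M_Pl

q = np.linspace(0.05, 20.0, 200)  # comoving momenta q = p/T (T ~ 1/a assumed)

def rhs(T, f):
    p = q * T
    E = np.sqrt(p**2 + m_N**2)
    pref = (m_H**2 - m_N**2) / (2 * m_N**2)
    coll = MSQ * T / (16 * np.pi * E * p) * (np.exp(-E / T) - f) \
           * (np.exp(-pref * (E - p) / T) - np.exp(-pref * (E + p) / T))
    return -coll / (hubble(T) * T)          # df/dT at fixed q, since dt = -dT/(H T)

sol = solve_ivp(rhs, (T_start, T_fo), np.zeros_like(q), method="LSODA",
                rtol=1e-6, atol=1e-12)
f_fo, p_fo = sol.y[:, -1], q * T_fo

n_N = np.trapz(p_fo**2 * f_fo, p_fo) / (2 * np.pi**2)   # one bosonic d.o.f.
s = 2 * np.pi**2 / 45 * g_star * T_fo**3
Y_N = n_N / s                  # the Higgsino abundance at T_fo must still be added
print("Omega h^2 ~", 2.8e10 * Y_N * m_N / 100.0)
\end{verbatim}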
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{evolyuk.ps}
\caption{\small The role of the neutrino Yukawa coupling on the sterile sneutrino abundance. The figure shows $Y$ as a function of the temperature for different values of $Y_\nu$. The sneutrino mass is $100$ GeV while $\mu=120$ GeV.}
\label{figure1}
\end{center}
\end{figure}
Once decays and inverse decays are included in the $\tilde N$ Boltzmann equation, two questions naturally come to mind. First, for what values of $Y_\nu$ are inverse decays relevant? Second, can decays and inverse decays bring the sneutrinos into equilibrium? To answer these questions we show in figure \ref{figure1} the sneutrino abundance as a function of the temperature for $m_{\tilde N}=100$ GeV, $m_{\tilde H}=120$ GeV, and different values of $Y_\nu$. Notice that for $Y_\nu=10^{-8}$ inverse processes are negligible and the sneutrino abundance simply grows with temperature. In that region, for $Y_\nu\lesssim 10^{-8}$, the sneutrino relic density is proportional to $Y_\nu^2$. From the figure we see that for $Y_\nu=10^{-7}$ the inverse process leads to a reduction of the sneutrino abundance around $T=20$ GeV. The Yukawa interaction is not yet strong enough to bring the sneutrinos into equilibrium. For $Y_\nu=10^{-6}$ sneutrinos do reach equilibrium and then decouple at lower temperatures. For even larger Yukawa couplings, $Y_\nu=10^{-5},10^{-4}$, equilibrium is also reached but the decoupling occurs at higher temperatures. In that region, the relic density also increases with the Yukawas. Thus, for $Y_\nu\sim 10^{-6}$ inverse decays not only are relevant, they are strong enough to thermalize the sneutrinos.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{compall.ps}
\end{center}
\caption{\small The effect of the inverse process on the sneutrino relic density. The panels show the resulting sneutrino abundance $Y=n/s$ as a function of the temperature for $m_{\tilde N}=100$ GeV and different values of $\mu$. The full line is the result obtained including the inverse process whereas the dashed line is the result without including it. The dash-dotted line shows the sneutrino equilibrium abundance.}
\label{figure2}
\end{figure}
Figure \ref{figure2} directly compares the resulting sneutrino abundance with and without including the inverse process. The full line corresponds to the correct result, taking into account the direct and the inverse process. The dashed line, instead, shows the result for the direct process only, that is, the sneutrino abundance according to \cite{Gopalakrishna:2006kr}. The sneutrino mass was taken to be $100$ GeV and $Y_\nu$ was set to $10^{-6}$. The Higgsino mass is different in each panel and includes values leading to strong and mild degeneracy as well as no degeneracy at all between the sneutrino and the Higgsino. Notice that the correct final abundance, and consequently the resulting relic density, is always several orders of magnitude below the value predicted in \cite{Gopalakrishna:2006kr}. Even for the case of a large mass difference, we find a suppression of 3 orders of magnitude in the relic density. And as the mass difference shrinks the suppression becomes larger, reaching about $6$ orders of magnitude for $\mu=150$ GeV and about $7$ orders of magnitude for $\mu=120$ GeV. We thus see that over the whole parameter space the inverse process has a large suppression effect on the sneutrino relic density.
\section{Results}
\label{sec:4}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{newsneuyuk.ps}
\caption{\small The sneutrino relic density as a function of the neutrino Yukawa coupling for different values of $m_{\tilde N}$ and $\Delta m=20$GeV.}
\label{figure3}
\end{center}
\end{figure}
So far we have found that the inverse decay process $\tilde N+L\to\tilde H$ leads to a suppression of the sneutrino relic density. It remains to be seen whether such suppression is strong enough to bring the relic density down to the observed value. That is, we will now study the dependence of the relic density with the sneutrino mass, the Higgsino-sneutrino mass difference, and the neutrino Yukawa coupling to find the region of the parameter space that satisfies the condition $\Omega_{\tilde N}h^2=\Omega_{DM}h^2$.
Figure \ref{figure3} shows the sneutrino relic density as a function of the neutrino Yukawa coupling and different values of the sneutrino mass. The Higgsino-sneutrino mass difference ($\Delta m=m_{\tilde H}-m_{\tilde N}$) was set to $20$ GeV. Larger values would only increase the relic density --see figure \ref{figure2}. Notice that, for a given sneutrino mass, the relic density initially decreases rather steeply reaching a minimum value at $Y_\nu\lesssim 10^{-6}$ and then increases again. From the figure we also observe that the smallest value of the relic density is obtained for $m_{\tilde H}=400$ GeV, that is, when the percentage mass difference is smaller. In any case, the relic density is always larger than $1$, too large to be compatible with the observations.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{newsneumsn.ps}
\caption{\small The sneutrino relic density as a function of the sneutrino mass for $Y_\nu=10^{-6}$ and different values of $\Delta m/m_{\tilde N}$. As expected the smaller the mass difference the smaller the relic density is.}
\end{center}
\label{fig:msn}
\end{figure}
This result is confirmed in figure \ref{fig:msn}, where we display the relic density as a function of the sneutrino mass for $Y_\nu=10^{-6}$ and different values of $\Delta m/m_{\tilde N}$. In agreement with the previous figure, we see that the smaller the percentage mass difference, the smaller the relic density is. Yet, $\Omega_{\tilde N}h^2$ is always larger than $1$. We have verified that this conclusion is robust. Neither larger sneutrino masses nor different Yukawa couplings lead to the correct value of the relic density.
\section{Conclusions}
We studied the possibility of explaining the dark matter with a sterile sneutrino in a supersymmetric model consisting of the MSSM supplemented with a seesaw mechanism at the weak scale. We showed that if the Higgsino is the NLSP, inverse decays play a crucial role in the computation of the sneutrino relic density, suppressing it by several orders of magnitude. We wrote down and numerically solved the correct Boltzmann equation that determines the sneutrino abundance and studied the resulting relic density as a function of the sneutrino mass, the neutrino Yukawa coupling, and the Higgsino-sneutrino mass difference. We found that the sterile sneutrino relic density, even though much smaller than previously believed, is still larger than the observed dark matter density. In this scenario, therefore, the sterile sneutrino is not a thermal dark matter candidate.
\section*{Acknowledgments}
I am supported by the \emph{Juan de la Cierva} program of the Ministerio de Educacion y Ciencia of Spain, by Proyecto Nacional FPA2006-01105, and by the Comunidad de Madrid under Proyecto HEPHACOS S-0505/ESP-0346. I would like to thank W. Porod and Ki-Young Choi for comments and suggestions.
\thebibliography{99}
\bibitem{Yao:2006px}
C.~Amsler {\it et al.} [Particle Data Group],
Phys.\ Lett.\ B {\bf 667}, 1 (2008). W.~M.~Yao {\it et al.} [Particle Data Group],
J.\ Phys.\ G {\bf 33} (2006) 1.
\bibitem{Ahlen:1987mn}
S.~P.~Ahlen, F.~T.~Avignone, R.~L.~Brodzinski, A.~K.~Drukier, G.~Gelmini and D.~N.~Spergel,
Phys.\ Lett.\ B {\bf 195} (1987) 603.
D.~O.~Caldwell, R.~M.~Eisberg, D.~M.~Grumm, M.~S.~Witherell, B.~Sadoulet, F.~S.~Goulding and A.~R.~Smith,
Phys.\ Rev.\ Lett.\ {\bf 61} (1988) 510.
M.~Beck {\it et al.},
Phys.\ Lett.\ B {\bf 336} (1994) 141.
\bibitem{Falk:1994es}
T.~Falk, K.~A.~Olive and M.~Srednicki,
Phys.\ Lett.\ B {\bf 339} (1994) 248
[arXiv:hep-ph/9409270].
\bibitem{ArkaniHamed:2000bq}
N.~Arkani-Hamed, L.~J.~Hall, H.~Murayama, D.~Tucker-Smith and N.~Weiner,
Phys.\ Rev.\ D {\bf 64} (2001) 115011
[arXiv:hep-ph/0006312].
F.~Borzumati and Y.~Nomura,
Phys.\ Rev.\ D {\bf 64} (2001) 053005
[arXiv:hep-ph/0007018].
\bibitem{Arina:2007tm}
C.~Arina and N.~Fornengo,
JHEP {\bf 0711} (2007) 029
[arXiv:0709.4477 [hep-ph]].
\bibitem{Gopalakrishna:2006kr}
S.~Gopalakrishna, A.~de Gouvea and W.~Porod,
JCAP {\bf 0605} (2006) 005
[arXiv:hep-ph/0602027].
\bibitem{Asaka:2005cn}
T.~Asaka, K.~Ishiwata and T.~Moroi,
Phys.\ Rev.\ D {\bf 73} (2006) 051301
[arXiv:hep-ph/0512118].
T.~Asaka, K.~Ishiwata and T.~Moroi,
Phys.\ Rev.\ D {\bf 75} (2007) 065001
[arXiv:hep-ph/0612211].
\bibitem{Dunkley:2008ie}
J.~Dunkley {\it et al.} [WMAP Collaboration],
arXiv:0803.0586 [astro-ph].
\bibitem{Gondolo:1990dk}
P.~Gondolo and G.~Gelmini,
Nucl.\ Phys.\ B {\bf 360} (1991) 145.
\bibitem{Gondolo:2004sc}
P.~Gondolo, J.~Edsjo, P.~Ullio, L.~Bergstrom, M.~Schelke and E.~A.~Baltz,
JCAP {\bf 0407} (2004) 008
[arXiv:astro-ph/0406204].
\end{document}
|
\section{Introduction}\label{sec1}
The anomalous Hall effect (AHE)~\cite{Nagaosa2010} and the magneto-optical effect (MOE)~\cite{Ebert1996,Oppeneer2001,Antonov2004} are fundamental phenomena in condensed matter physics and they have become appealing techniques to detect and measure magnetism by electric and optical means, respectively. Usually occurring in ferromagnetic metals, the AHE is characterized by a transverse voltage drop resulting from a longitudinal charge current in the absence of applied magnetic fields. There are two distinct contributions to the AHE, that is, the extrinsic one~\cite{Smit1955,Smit1958,Berger1970} depending on scattering of electrons off impurities or due to disorder, and the intrinsic one~\cite{Sundaram1999,YG-Yao2004} solely determined by the Berry phase effect~\cite{D-Xiao2010} in a pristine crystal. Both of these mechanisms originate from time-reversal ($T$) symmetry breaking in combination with spin-orbit coupling (SOC)~\cite{Nagaosa2010}. The intrinsic AHE can be accurately calculated from electronic-structure theory on the \textit{ab initio} level, and examples include studies of Fe~\cite{YG-Yao2004,XJ-Wang2006}, Co~\cite{XJ-Wang2007,Roman2009}, SrRuO$_{3}$~\cite{Fang2003,Mathieu2004}, Mn$_{5}$Ge$_{3}$~\cite{CG-Zeng2006}, and CuCr$_{2}$Se$_{4-x}$Br$_{x}$~\cite{YG-Yao2007}. Referring to the Kubo formula~\cite{Kubo1957,CS-Wang1974}, the intrinsic anomalous Hall conductivity (IAHC) can be straightforwardly extended to the ac case (as given by the optical Hall conductivity), which is intimately related to the magneto-optical Kerr and Faraday effects (MOKE and MOFE) [see Eqs.~\eqref{eq:kerr} and~\eqref{eq:faraday} below]. Phenomenologically, the MOKE and MOFE refer to the rotation of the polarization plane when linearly polarized light is reflected from, or transmitted through a magnetic material, respectively. Owing to their similar physical nature, the intrinsic AHE is often studied together with MOKE and MOFE.
\begin{figure*}
\includegraphics[width=2\columnwidth]{Fig1}
\caption{(Color online) (a) Right-handed ($\kappa=+1$) and (b) left-handed ($\kappa=-1$) vector spin chiralities in coplanar noncollinear spin systems. The open arrows indicate the clockwise rotation of spin with a uniform angle, which results in a different spin configuration with the same spin chirality. (c) The crystal and magnetic structures of Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The purple, green, and blue balls represent Mn, $X$, and N atoms, respectively. The spin magnetic moments originate mainly from Mn atoms, while the spin polarization of $X$ and N atoms is negligible. The spins on three Mn-sublattices (Mn$_{1}$, Mn$_{2}$, and Mn$_{3}$) are indicated by red arrows that are aligned within the (111) plane (here, the right-handed spin chirality is shown as an example). The angles between neighboring spins are always 120$^{\circ}$, while the spins can simultaneously rotate within the (111) plane that is characterized by an azimuthal angle $\theta$ away from the diagonals of the face. (d) The (111) plane of Mn$_{3}X$N, which can be regarded as a kagome lattice of Mn atoms. The dotted lines mark the two-dimensional unit cell. (e)-(g) The R1, R2, and R3 phases with the right-handed spin chirality. There are one three-fold rotation axis ($C_{3}$, which is along the $[111]$ direction ($z$ axis)), three two-fold rotation axes ($C_{2}^{(1)}$, $C_{2}^{(2)}$, and $C_{2}^{(3)}$), and three mirror planes ($M^{(1)}$, $M^{(2)}$, and $M^{(3)}$) in the R1 phase; only $C_{3}$ axis is preserved in the R2 phase; the time-reversal symmetry $T$ has to be combined with a two-fold rotation and mirror symmetries in the R3 phase. (h)-(j) The L1, L2, and L3 phases with the left-handed spin chirality. There are one two-fold rotation axis ($C_{2}$) and one mirror plane ($M$) in the L1 phase; the time-reversal symmetry $T$ is combined with two-fold rotation and mirror symmetries in both the L2 and the L3 phases.}
\label{fig1}
\end{figure*}
As the AHE and the MOE are commonly considered to be proportional to the magnetization, most of the materials studied to date with respect to these phenomena are ferromagnets (FMs) and ferrimagnets (FiMs), while antiferromagnets (AFMs) are naively expected to have neither AHE nor MOE due to their vanishing net magnetization. Although $T$ symmetry is broken in AFMs, its combination $TS$ with other spatial symmetries $S$ (e.g., fractional translations or inversion) can reinstate Kramers theorem such that AHE and MOE vanish. A simple example is the one-dimensional collinear bipartite antiferromagnet~\cite{Herring1966}, where $S$ is the fractional translation by half of the vector connecting the two sublattices. Another example is the two-dimensional honeycomb lattice with collinear N{\'e}el order (as realized, e.g., in the bilayer MnPSe$_{3}$)~\cite{Sivadas2016}, which has natively the combined symmetry $TI$ although time-reversal symmetry $T$ and spatial inversion symmetry $I$ are both broken individually. The application of an electric field perpendicular to the film plane will manifest in broken $TI$ symmetry and band exchange splitting that generates the MOKE~\cite{Sivadas2016}. Such electrically driven MOKE has been realized, e.g., in multiferroic Cr-based metallorganic perovskites~\cite{FR-Fan2017}. Therefore, the AHE and the MOE, as the most fundamental fingerprints of $T$ symmetry breaking in matter, can in principle exist in AFMs if certain crystal symmetries are absent, even though the net magnetization vanishes. Notably, the cluster multipole theory proposed by Suzuki \textit{et al.}~\cite{Suzuki2017,Suzuki2018} has been recently applied to interpret the origin of AHE in AFMs.
Leaving aside collinear AFMs, recent works~\cite{Ohgushi2000,Shindou2001,Hanke2017,Shiomi2018,J-Zhou2016,WX-Feng2018,H-Chen2014,Kubler2014,GY-Guo2017,Y-Zhang2017,Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017,WX-Feng2015,Higo2018} revealed that noncollinear AFMs can also host nonvanishing AHE and MOE. Two types of noncollinear AFMs can be considered: noncoplanar and coplanar, which are characterized by scalar and vector spin chiralities, respectively~\cite{Kawamura2001}. On the one hand, the nonzero scalar spin chirality $\chi=\boldsymbol{S}_{i}\cdot(\boldsymbol{S}_{j}\times\boldsymbol{S}_{k})$ (where $\boldsymbol{S}_{i}$, $\boldsymbol{S}_{j}$, and $\boldsymbol{S}_{k}$ denote three neighboring noncoplanar spins) will generate a fictitious magnetic field that makes the electrons feel a real-space Berry phase while hopping in the spin lattice~\cite{Ohgushi2000,Shindou2001}. Consequently, the AHE can emerge in noncoplanar AFMs without SOC, which is referred to as the topological Hall effect that has been theoretically predicted~\cite{Shindou2001,Hanke2017} and experimentally observed~\cite{Shiomi2018}, for instance, in disordered $\gamma$-Fe$_{x}$Mn$_{1-x}$ alloys. Moreover, the quantized version of the topological Hall effect was reported in the layered noncoplanar noncollinear K$_{0.5}$RhO$_{2}$ AFM insulator~\cite{J-Zhou2016}. Extending these findings, Feng \textit{et al.}~\cite{WX-Feng2018} proposed that topological MOE and quantum topological MOE exist in $\gamma$-Fe$_{x}$Mn$_{1-x}$ and K$_{0.5}$RhO$_{2}$, respectively.
Instead of the scalar spin chirality (which vanishes for coplanar spin configurations), the finite vector spin chirality~\cite{Kawamura2001},
\begin{equation}\label{eq:kappa}
\kappa=\frac{2}{3\sqrt{3}}\sum_{\langle ij\rangle}\left[\boldsymbol{S}_{i}\times\boldsymbol{S}_{j}\right]_{z},
\end{equation}
where $\langle ij\rangle$ runs over the nearest neighboring spins, is an important quantity in coplanar noncollinear AFMs such as cubic Mn$_{3}X$ ($X$ = Rh, Ir, Pt) and hexagonal Mn$_{3}Y$ ($Y$ = Ge, Sn, Ga). The Mn atoms in the (111) plane of Mn$_{3}X$ and in the (0001) plane of Mn$_{3}Y$ are arranged into a kagome lattice, while Mn$_{3}X$ and Mn$_{3}Y$ have opposite vector spin chiralities~\cite{Y-Zhang2017} with $\kappa=+1$ (right-handed state) and $\kappa=-1$ (left-handed state) [see Figs.~\ref{fig1}(a) and~\ref{fig1}(b)], respectively. The concept of right- and left-handed states adopted here follows the convention of Ref.~\onlinecite{Kawamura2001}. For both right- and left-handed spin chiralities, the spins can be simultaneously rotated within the plane, further resulting in different spin configurations [see Figs.~\ref{fig1}(a) and~\ref{fig1}(b)], e.g., the T1 and the T2 phases in Mn$_{3}X$~\cite{WX-Feng2015} as well as the type-A and the type-B phases in Mn$_{3}Y$~\cite{GY-Guo2017}. The vector spin chirality and the spin rotation discussed here allow us to characterize coplanar AFMs that have a 120$^\circ$ noncollinear magnetic ordering. For the AHE, Chen \textit{et al.}~\cite{H-Chen2014} discovered theoretically that Mn$_{3}$Ir has unexpectedly large IAHC and several other groups predicted the IAHC in Mn$_{3}Y$ with comparable magnitudes~\cite{Kubler2014,GY-Guo2017,Y-Zhang2017}. At the same time, the AHE in Mn$_{3}Y$ has been experimentally confirmed~\cite{Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017}. Because of the close relationship to AHE, Feng \textit{et al.}~\cite{WX-Feng2015} first predicted that large MOKE can emerge in Mn$_{3}X$ even though the net magnetization is zero. Eventually, Higo \textit{et al.}~\cite{Higo2018} successfully measured large zero-field Kerr rotation angles in Mn$_{3}$Sn at room temperature.
In addition to Mn$_3X$ and Mn$_3Y$, the antiperovskite Mn$_3X$N ($X$ = Ga, Zn, Ag, Ni, etc.) is another important class of coplanar noncollinear AFMs~\cite{Singh2018}, which has been known since the 1970s~\cite{Bertaut1968,Fruchart1978}. As in Mn$_3X$, the $X$ atoms in Mn$_3X$N occupy the corners of the cube [see Fig.~\ref{fig1}(c)] and the face-centered Mn atoms are arranged into a kagome lattice in the (111) plane [see Fig.~\ref{fig1}(d)], while an additional N atom is located at the center of the cube [see Fig.~\ref{fig1}(c)]. Despite the structural similarity, some unique physical properties have been found in Mn$_3X$N, such as magnetovolume effects~\cite{Gomonaj1989,Gomonaj1992,WS-Kim2003,Lukashev2008,Lukashev2010,Takenaka2014,SH-Deng2015,Zemen2017a} and magnetocaloric effects~\cite{Y-Sun2012,Matsunami2014,KW-Shi2016,Zemen2017} that stem from a strong coupling between spin, lattice, and heat. The most interesting discovery in Mn$_3X$N may be the giant negative thermal expansion that was observed at the first-order phase transition from a paramagnetic state to a noncollinear antiferromagnetic state with decreasing temperature $\mathtt{T}$. Below the N{\'e}el temperature ($\mathtt{T_{N}}$), a second-order phase transition between two different noncollinear antiferromagnetic states, which are characterized by a nearly constant volume but a change of spin configuration, can also occur.
Taking Mn$_3$NiN as an example~\cite{Fruchart1978}, all the spins point along the diagonals of the face if $\mathtt{T}<$163 K (the so-called $\Gamma^{5g}$ configuration), while in the temperature range of 163 K $<\mathtt{T}<$ 266 K the spins point toward the center of the triangle formed by three nearest-neighboring Mn atoms (the so-called $\Gamma^{4g}$ configuration). The $\Gamma^{5g}$ and the $\Gamma^{4g}$ spin configurations are referred to as the R1 ($\theta=0^{\circ}$) and R3 ($\theta=90^{\circ}$) phases in this work [see Figs.~\ref{fig1}(e) and~\ref{fig1}(g), where the azimuthal angle $\theta$ measures the rotation of the spins starting from the diagonals of the face], respectively. An intermediate state ($0^{\circ}<\theta<90^{\circ}$) between the R1 and R3 phases, referred to as the R2 phase [see Fig.~\ref{fig1}(f) with $\theta=30^{\circ}$ as an example], was proposed to exist~\cite{Gomonaj1989,Gomonaj1992}. Such nontrivial magnetic orders are also believed to occur in other Mn$_3X$N compounds~\cite{Bertaut1968,Fruchart1978,Gomonaj1989,Gomonaj1992}, as recently clarified by Mochizuki \textit{et al.}~\cite{Mochizuki2018} using a classical spin model together with replica-exchange Monte Carlo simulations. However, the details of how the spin configuration changes from the R1 phase, through the R2 phase, to the R3 phase, and how these changes affect the relevant physical properties (e.g., AHE and MOE), are still unclear. Moreover, although only the right-handed spin chirality has been reported in the previous literature, its left-handed counterpart [Fig.~\ref{fig1}(h-j)] could also exist, e.g., in Mn$_{3}$NiN, because of its favorable total energy for particular values of $\theta$ [see Fig.~\ref{fig4}(a)].
In this work, using first-principles density functional theory together with group-theory analysis and tight-binding modelling, we systematically investigate the effect of \textit{spin order} on the intrinsic AHE as well as the MOKE and the MOFE in coplanar noncollinear AFMs Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The \textit{spin order} considered here has dual implications, i.e., spin chiralities (right- and left-handed states) and spin configurations [referring to the different spin orientations obtained by simultaneously rotating the spins within the (111) plane]. In Sec.~\ref{sec2}, we first identify the antisymmetric shape of the IAHC tensor (i.e., zero and nonzero elements) for different spin orders by a group theoretical analysis. For the right-handed spin chirality, only $\sigma_{xy}$ is nonzero (except for two particular spin configurations: $\theta=0^{\circ}$ and $180^{\circ}$); for the left-handed spin chirality, all three off-diagonal elements ($\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$) can be nonzero (except for some particular spin configurations, e.g., $\theta=0^{\circ}$ and $60^{\circ}$ for $\sigma_{xy}$, $\theta=30^{\circ}$ and $210^{\circ}$ for $\sigma_{yz}$, $\theta=120^{\circ}$ and $300^{\circ}$ for $\sigma_{zx}$). The results of the group-theory analysis are further confirmed by both tight-binding modelling (Sec.~\ref{sec3}) and first-principles calculations (Sec.~\ref{sec4-1}). In addition to the IAHC, the magnetic anisotropy energy (MAE) has also been assessed and the in-plane easy spin orientation is determined (Sec.~\ref{sec4-1}).
Considering Mn$_{3}$NiN as a prototype, we extend the study of the IAHC to the optical Hall conductivity [$\sigma_{xy}(\omega)$, $\sigma_{yz}(\omega)$, and $\sigma_{zx}(\omega)$] as well as the corresponding diagonal elements [$\sigma_{xx}(\omega)$, $\sigma_{yy}(\omega)$, and $\sigma_{zz}(\omega)$] (Sec.~\ref{sec4-2}). The spin order hardly affects the diagonal elements, whereas a significant dependence on the spin order is observed in the off-diagonal elements, akin to the IAHC. Subsequently, in Sec.~\ref{sec4-3}, the MOKE and the MOFE are computed from the optical conductivity for all Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The Kerr and Faraday spectra exhibit a distinct dependence on the spin order, which they inherit from the optical Hall conductivity. The computed Kerr and Faraday rotation angles in Mn$_{3}X$N are comparable to the ones in Mn$_{3}X$ studied in our previous work~\cite{WX-Feng2015}. The magneto-optical anisotropy, originating from the nonequivalent off-diagonal elements of the optical conductivity, is explored for both right- and left-handed spin chiralities. Finally, a summary is given in Sec.~\ref{sec5}. Our work reveals that the AHE and the MOE depend strongly on the spin order in the noncollinear AFMs Mn$_{3}X$N, which suggests that complex noncollinear spin structures can be uniquely identified in experiments by measuring the AHE and MOE.
\begin{table*}[htpb]
\caption{The magnetic space and point groups as well as the nonzero elements of IAHC for Mn$_{3}X$N for different spin orders characterized by the azimuthal angle $\theta$ and the vector spin chirality $\kappa$. The magnetic space and point groups exhibit a period of $\pi$ ($\pi/3$) in $\theta$ for right-handed (left-handed) spin chirality. The IAHC is considered as a pseudovector, i.e., $\boldsymbol{\sigma}=[\sigma^{x},\sigma^{y},\sigma^{z}]=[\sigma_{yz},\sigma_{zx},\sigma_{xy}]$, which is expressed in the Cartesian coordinate system defined in Fig.~\ref{fig1}. The nonzero elements of IAHC are in complete accord with the tight-binding and first-principles calculations, shown in Figs.~\ref{fig2}(c), ~\ref{fig4}(b), and~\ref{fig4}(c), respectively.}
\label{tab1}
\begin{ruledtabular}
\begingroup
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lccccccccccccccc}
\multicolumn{2}{c}{} &
\multicolumn{13}{c}{azimuthal angle $\theta$} & \\
\cline{3-15}
&$\kappa$&$0^{\circ}$&$15^{\circ}$&$30^{\circ}$&$45^{\circ}$&$60^{\circ}$&$75^{\circ}$&$90^{\circ}$&$105^{\circ}$&$120^{\circ}$&$135^{\circ}$&$150^{\circ}$&$165^{\circ}$&$180^{\circ}$&\\
\hline
magnetic space group & $+1$ & $R\bar{3}m$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}m^{\prime}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}m$ \\
& $-1$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ \\
\hline
magnetic point group & $+1$ & $\bar{3}1m$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}1m^{\prime}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}1m$ \\
& $-1$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$& $2/m$ \\
\hline
nonzero elements & $+1$ & -- & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & -- \\
of IAHC & $-1$ & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\;\:$--}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}}
\end{tabular}
\endgroup
\end{ruledtabular}
\end{table*}
\section{Group theory analysis}\label{sec2}
In this section, we determine the magnetic space and point groups of Mn$_{3}X$N for given spin orders, and then identify the nonzero elements of IAHC from group theory. The magnetic groups computed with the \textsc{isotropy} code~\cite{Stokes,Stokes2005} are listed in Table~\ref{tab1}, from which one can observe that the magnetic groups vary in the azimuthal angle $\theta$ with a period of $\pi$ for right-handed spin chirality, but with a period of $\pi/3$ for left-handed spin chirality. This indicates that the magnetic groups that need to be analyzed are limited to a finite number. Furthermore, it is sufficient to restrict the analysis to magnetic point groups since the IAHC~\cite{Kubo1957,CS-Wang1974,YG-Yao2004},
\begin{equation}\label{eq:IAHC}
\sigma_{\alpha\beta} = -\dfrac{e^{2}}{\hbar}\int_{BZ}\frac{d^{3}k}{(2\pi)^{3}}\Omega_{\alpha\beta}(\bm{k}),
\end{equation}
is translationally invariant. In the above expression $\Omega_{\alpha\beta}(\bm{k})=\sum_{n}f_{n}(\bm{k})\Omega_{n,\alpha\beta}(\bm{k})$ is the momentum-space Berry curvature, with the Fermi-Dirac distribution function $f_{n}(\bm{k})$ and the band-resolved Berry curvature
\begin{equation}\label{eq:BerryCur}
\Omega_{n,\alpha\beta}\left(\bm{k}\right) = -2 \mathrm{Im}\sum_{n^{\prime} \neq n}\frac{\left\langle \psi_{n\bm{k}}\right|\hat{v}_{\alpha}\left| \psi_{n^{\prime}\bm{k}} \right\rangle \left\langle \psi_{n^{\prime}\bm{k}}\right|\hat{v}_{\beta}\left|\psi_{n\bm{k}} \right\rangle}{\left(\omega_{n^{\prime}\bm{k}}-\omega_{n\bm{k}}\right)^{2}}.
\end{equation}
Here $\hat{v}_{\alpha}$ is the velocity operator along the $\alpha$th Cartesian direction, and $\psi_{n\bm{k}}$ ($\hbar\omega_{n\bm{k}}=\epsilon_{n\bm{k}}$) is the eigenvector (eigenvalue) corresponding to the band index $n$ and the momentum $\bm{k}$. Since the IAHC and the Berry curvature can be regarded as pseudovectors, just like spin, their vector-form notations $\boldsymbol{\sigma}=[\sigma^{x},\sigma^{y},\sigma^{z}]=[\sigma_{yz},\sigma_{zx},\sigma_{xy}]$ and $\boldsymbol{\Omega}_{n}=[\Omega_{n}^{x},\Omega_{n}^{y},\Omega_{n}^{z}]=[\Omega_{n,yz},\Omega_{n,zx},\Omega_{n,xy}]$ are used here for convenience.
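For concreteness, the band-resolved Berry curvature of Eq.~\eqref{eq:BerryCur} can be evaluated numerically once the eigenstates and velocity operators are available at a given $\bm{k}$; a minimal sketch (with $\hbar=e=1$ and illustrative variable names, not the production code used in this work) is:
\begin{verbatim}
import numpy as np

def berry_curvature_xy(evals, evecs, vx, vy, n):
    """Band-resolved Berry curvature Omega_{n,xy}(k) of Eq. (3), hbar = 1.

    evals: (N,) band energies at k; evecs: (N, N) eigenvectors as columns;
    vx, vy: (N, N) velocity operators in the same basis; n: band index."""
    vx_nm = evecs.conj().T @ vx @ evecs   # <psi_n | v_x | psi_m>
    vy_nm = evecs.conj().T @ vy @ evecs
    omega = 0.0
    for m in range(len(evals)):
        if m != n:
            omega += np.imag(vx_nm[n, m] * vy_nm[m, n]) / (evals[m] - evals[n])**2
    return -2.0 * omega

# The IAHC of Eq. (2) then follows by summing f_n(k) * Omega_{n,xy}(k) over
# the bands and a Brillouin-zone mesh, with prefactor -e^2/hbar and
# measure d^3k/(2*pi)^3.
\end{verbatim}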
Let us start with the right-handed spin chirality by considering the three non-repetitive magnetic point groups: $\bar{3}1m$ [$\theta=n\pi$], $\bar{3}1m^{\prime}$ [$\theta=(n +\frac{1}{2})\pi$], and $\bar{3}$ [$\theta \neq n\pi \text{ and }\theta \neq (n +\frac{1}{2})\pi$] with $n\in\mathbb{N}$ (see Tab.~\ref{tab1}). First, $\bar{3}1m$ belongs to the type-I magnetic point group, i.e., it is identical to the crystallographic point group $D_{3d}$. As seen from Fig.~\ref{fig1}(e), it has one three-fold rotation axis ($C_{3}$), three two-fold rotation axes ($C_{2}^{(1)}$, $C_{2}^{(2)}$, and $C_{2}^{(3)}$) and three mirror planes ($M^{(1)}$, $M^{(2)}$, and $M^{(3)}$). As mentioned before, $\boldsymbol{\Omega}_{n}$ is a pseudovector, and the mirror operation $M^{(1)}$ (parallel to the $yz$ plane) changes the sign of $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$, but preserves $\Omega_{n}^{x}$. This indicates that $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are odd functions along the $k_{x}$ direction in momentum space, while $\Omega_{n}^{x}$ is an even function. Correspondingly, integrating the Berry curvature over the entire Brillouin zone should give $\boldsymbol{\sigma}=[\sigma^{x},0,0]$. The role of $C_{2}^{(1)}$ is the same as that of $M^{(1)}$. The other two mirror (two-fold rotation) symmetries are related to $M^{(1)}$ ($C_{2}^{(1)}$) by the $C_{3}$ rotation, which transforms $[\sigma^{x},0,0]$ into $[-\frac{1}{2}\sigma^{x},-\frac{\sqrt{3}}{2}\sigma^{x},0]$ and $[-\frac{1}{2}\sigma^{x},\frac{\sqrt{3}}{2}\sigma^{x},0]$. Therefore, all components of IAHC are zero, i.e., $\boldsymbol{\sigma}=[0,0,0]$, owing to the symmetries of the group $\bar{3}1m$. Second, $\bar{3}$ is also a type-I magnetic point group, which is identical to the crystallographic point group $C_{3i}$. Compared to $D_{3d}$, all $C_{2}$ and $M$ operations are absent whereas only the $C_{3}$ operation is left [see Fig.~\ref{fig1}(f)]. In this situation, the components of $\boldsymbol{\sigma}$ that are normal to the $C_{3}$ axis disappear due to the cancellations of $\Omega_{n}^{x}$ and $\Omega_{n}^{y}$ in the $k_{x}$--$k_{y}$ plane. This gives rise to $\boldsymbol{\sigma}=[0,0,\sigma^{z}]$. Finally, $\bar{3}1m^{\prime}=C_{3i} \oplus T(D_{3d}-C_{3i})$ is a type-III magnetic point group as it contains operations combining time and space symmetries. Here, $T(D_{3d}-C_{3i})$ is the set of three $TM$ and three $TC_{2}$ operations depicted in Fig.~\ref{fig1}(g). With respect to the mirror symmetry $M^{(1)}$, $\Omega_{n}^{x}$ is even but $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are odd; with respect to the time-reversal symmetry $T$, all of $\Omega_{n}^{x}$, $\Omega_{n}^{y}$, and $\Omega_{n}^{z}$ are odd; hence, with respect to the $TM^{(1)}$ symmetry, $\Omega_{n}^{x}$ is odd but $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are even, resulting in $\boldsymbol{\sigma}=[0,\sigma^{y},\sigma^{z}]$. $TC_{2}^{(1)}$ plays the same role, just like $TM^{(1)}$ does. The other two $TM$ ($TC_{2}$) symmetries are related to $TM^{(1)}$ ($TC_{2}^{(1)}$) by the $C_{3}$ rotation in the subgroup $C_{3i}$, which forces $\sigma^{y}$ to be zero but allows finite $\sigma^{z}$. Thus, the IAHC tensor shape is $\boldsymbol{\sigma}=[0,0,\sigma^{z}]$ in the magnetic point group $\bar{3}1m^{\prime}$. To summarize, for the right-handed spin chirality only $\sigma^{z}$ can be nonzero, except for $\theta=n\pi$ where all components of the IAHC vanish.
Next, we turn to the left-handed spin chirality, which also has three non-repetitive magnetic point groups: $2/m$ [$\theta=n\frac{\pi}{3}$], $2^{\prime}/m^{\prime}$ [$\theta=(n +\frac{1}{2})\frac{\pi}{3}$], and $\bar{1}$ [$\theta \neq n\frac{\pi}{3} \text{ and }\theta \neq (n +\frac{1}{2})\frac{\pi}{3}$] with $n\in\mathbb{N}$ (see Tab.~\ref{tab1}). First, $2/m$ belongs to the type-I magnetic point group, which is identical to the crystallographic point group $C_{2h}$ that contains one two-fold rotation axis ($C_{2}$) and one mirror plane ($M$) [see Fig.~\ref{fig1}(h)]. As mentioned before, the $M$ symmetry allows only for those components of the IAHC that are perpendicular to the mirror plane (i.e., along the corresponding $C_{2}$ axis); therefore, $\sigma^{z}$ should always be zero while $\sigma^{x}$ and $\sigma^{y}$ are generally finite for $\theta= 0^{\circ}$ (in the present Cartesian coordinates). If $\theta= \frac{2\pi}{3}$ or $\frac{5\pi}{3}$, the mirror plane is parallel to the $yz$ plane and renders only $\sigma^{x}$ potentially nonzero. Similarly, $\bar{1}$ is also a type-I magnetic point group that is identical to the crystallographic group $C_{i}$. Since all components $\Omega_{n}^{x}$, $\Omega_{n}^{y}$, and $\Omega_{n}^{z}$ are even with respect to the spatial inversion symmetry $I$, the group $C_{i}$ imposes no restrictions on the shape of $\boldsymbol{\sigma}$, allowing all components to be finite. Finally, $2^{\prime}/m^{\prime}=C_{i} \oplus T(C_{2h}-C_{i})$ is a type-III magnetic point group containing one $TM$ and one $TC_{2}$ operation [see Figs.~\ref{fig1}(i) and~\ref{fig1}(j)]. There are two scenarios: if $\theta= \frac{\pi}{6}$ [Fig.~\ref{fig1}(i)], $TM$ (or $TC_{2}$) symmetry forces $\sigma^{x}$ to vanish but facilitates nonzero $\sigma^{y}$ and $\sigma^{z}$; if $\theta= \frac{\pi}{2}$ [Fig.~\ref{fig1}(j)], the principal axis of both symmetry operations changes ($M$ is parallel to neither the $yz$ nor the $zx$ plane) such that all entries $\sigma^{x}$, $\sigma^{y}$ and $\sigma^{z}$ are finite. The other cases of $\theta= \frac{7\pi}{6}$ and $\frac{5\pi}{6}$ are identical to $\theta= \frac{\pi}{6}$ and $\frac{\pi}{2}$, respectively. In summary, all tensor components of $\boldsymbol{\sigma}$ are allowed (except for some particular $\theta$) for the left-handed spin chirality owing to the reduced symmetry as compared to the systems with right-handed spin chirality.
In the above discussion, all zero and potentially nonzero elements of the IAHC tensor are identified based on the underlying magnetic point groups. Alternatively, these results can also be obtained by following the Neumann principle, i.e., by applying all symmetry operations of the corresponding point group to the conductivity tensor~\cite{Seemann2015}. This method has been implemented in a computer program~\cite{Zelezny2017a,Zelezny2018a}, which generates the shape of linear response tensors (IAHC or intrinsic spin Hall conductivity) in a given coordinate system. Another useful analysis tool is the so-called cluster multipole theory~\cite{Suzuki2017,Suzuki2018}, which is capable of uncovering the hidden AHE in AFMs by evaluating the cluster multipole moment that behaves as a macroscopic magnetic order. For instance, although the cluster dipole moments (i.e., the net magnetization from the conventional understanding) vanish in noncollinear AFMs (e.g., Mn$_{3}X$ and Mn$_{3}Y$), the emerging cluster octupole moments lead to a finite AHE.
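The same tensor shapes can be reproduced numerically in the spirit of the Neumann principle; the following minimal sketch (our own illustration, not the code of Refs.~\onlinecite{Zelezny2017a,Zelezny2018a}) solves the linear constraints imposed on the time-reversal-odd pseudovector $\boldsymbol{\sigma}$ by a given set of magnetic point-group operations:
\begin{verbatim}
import numpy as np

def allowed_pseudovector(ops):
    """Allowed components of a T-odd pseudovector (sigma_yz, sigma_zx, sigma_xy).

    Each op is (R, has_T): R a 3x3 orthogonal matrix, has_T True if combined
    with time reversal. A T-odd pseudovector transforms as
    sigma -> det(R) R sigma, with an extra factor -1 under time reversal."""
    constraints = []
    for R, has_T in ops:
        sign = np.linalg.det(R) * (-1.0 if has_T else 1.0)
        constraints.append(sign * R - np.eye(3))   # (sign*R - 1) sigma = 0
    A = np.vstack(constraints)
    _, s, vh = np.linalg.svd(A)
    return vh[np.sum(s > 1e-8):]   # rows span the allowed sigma directions

# Example: a mirror parallel to the yz plane combined with time reversal (TM)
M_yz = np.diag([-1.0, 1.0, 1.0])
print(allowed_pseudovector([(M_yz, True)]))   # sigma_x forced to zero
\end{verbatim}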
\begin{figure*}
\includegraphics[width=2\columnwidth]{Fig2}
\caption{(Color online) (a)~Band structures of the kagome lattice with the spin orders $\kappa=+1$ and $\theta=0^{\circ}$, $30^{\circ}$, $90^{\circ}$. (b)~Band structures of the kagome lattice with the spin orders $\kappa=-1$ and $\theta=0^{\circ}$, $15^{\circ}$, $30^{\circ}$. (c)~IAHC of the kagome lattice as a function of $\theta$ for $\kappa=\pm1$ states for the three positions of the Fermi energy $E_{F}$ at 1.8~eV (top panel), 0~eV (middle panel), and $-$1.8~eV (bottom panel). The curves of the $\kappa=-1$ state (green lines) are scaled by a factor of 10. (d)-(g)~Berry curvature $\Omega_{xy}(\bm{k})$ with $\kappa=+1$ and $\theta=0^{\circ}$, $30^{\circ}$, $90^{\circ}$, $270^{\circ}$ at $E_{F}=-1.8$~eV. (h)-(k)~Berry curvature $\Omega_{xy}(\bm{k})$ with $\kappa=-1$ and $\theta=0^{\circ}$, $15^{\circ}$, $30^{\circ}$, $90^{\circ}$ at $E_{F}=-1.8$~eV. Dotted lines in panels (d)-(k) indicate the first Brillouin zone.}
\label{fig2}
\end{figure*}
\section{Tight-binding model}\label{sec3}
Group theory is particularly powerful for identifying the tensor shape of the IAHC, but it provides no insight into the magnitude of the allowed elements, which depends strongly on details of the electronic structure. In this light, tight-binding models and first-principles calculations are valuable tools to arrive at quantitative predictions. In this section, we consider a double-exchange $s$-$d$ model that describes itinerant $s$ electrons interacting with local $d$ magnetic moments on the kagome lattice, which corresponds to the (111) plane of cubic Mn$_{3}X$N. Following Ref.~\onlinecite{H-Chen2014}, the Hamiltonian is written as
\begin{eqnarray}\label{eq:Hamiltonian}
H & = & t\sum_{\left<ij\right>\alpha}c_{i\alpha}^{\dagger}c_{j\alpha}-J\sum_{i\alpha\beta}\left(\boldsymbol{\tau}_{\alpha\beta}\cdot\boldsymbol{S}_{i}\right)c_{i\alpha}^{\dagger}c_{i\beta} \nonumber \\
& & + it_{\text{SO}}\sum_{\left<ij\right>\alpha\beta}\nu_{ij}\left(\boldsymbol{\tau}_{\alpha\beta}\cdot\boldsymbol{n}_{ij}\right)c_{i\alpha}^{\dagger}c_{i\beta},
\end{eqnarray}
where $c_{i\alpha}^{\dagger}$ ($c_{i\alpha}$) is the electron creation (annihilation) operator on site $i$ with spin $\alpha$, $\boldsymbol{\tau}$ is the vector of Pauli matrices, and $\left\langle ij\right\rangle$ restricts the summation to nearest-neighbor sites. The first term is the nearest-neighbor hopping with the transfer integral $t$. The second term is the on-site exchange coupling between the conduction electron and the localized spin moment $\boldsymbol{S}_{i}$, and $J$ is the Hund's coupling strength. The third term is the SOC effect with coupling strength $t_{\text{SO}}$, $\nu_{ij}$ is the antisymmetric 2D Levi-Civita symbol (with $\nu_{12}=\nu_{23}=\nu_{31}=1$), and $\boldsymbol{n}_{ij}$ is an in-plane vector perpendicular to the line from site $j$ to site $i$~\cite{H-Chen2014}. In the following calculations, we set $J=1.7t$ and $t_{\text{SO}}=0.2t$.
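A minimal momentum-space realization of Eq.~\eqref{eq:Hamiltonian} may be sketched as follows; the sublattice positions, the angle convention for the local spins, and the sign conventions for $\nu_{ij}$ and $\boldsymbol{n}_{ij}$ are illustrative choices and may have to be adapted to reproduce Ref.~\onlinecite{H-Chen2014} exactly:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kagome_sd_hamiltonian(k, theta, kappa, t=1.0, J=1.7, t_so=0.2):
    """Bloch Hamiltonian of the s-d model of Eq. (4) on the kagome lattice.

    Basis (A,up, A,dn, B,up, B,dn, C,up, C,dn). The local spins lie in the
    plane at angles theta + kappa*2*pi*i/3 (i = 0, 1, 2), so kappa = +1/-1
    selects the right-/left-handed chirality (illustrative convention)."""
    pos = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
           np.array([0.5, np.sqrt(3) / 2])]           # NN distance = 1
    bonds = [(0, 1), (1, 2), (2, 0)]                   # nu_ij = +1 on these bonds
    H = np.zeros((6, 6), dtype=complex)
    # on-site exchange  -J tau . S_i
    for i in range(3):
        phi = theta + kappa * 2 * np.pi * i / 3
        S = np.array([np.cos(phi), np.sin(phi), 0.0])
        H[2*i:2*i+2, 2*i:2*i+2] += -J * (S[0]*sx + S[1]*sy + S[2]*sz)
    # NN hopping and SOC; each sublattice pair is bridged by bonds at +/- d
    for i, j in bonds:
        d = pos[j] - pos[i]
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)    # in-plane, perp. to bond
        phase = np.exp(1j * k @ d) + np.exp(-1j * k @ d)
        blk = t * phase * np.eye(2) + 1j * t_so * phase * (n[0]*sx + n[1]*sy)
        H[2*i:2*i+2, 2*j:2*j+2] += blk
        H[2*j:2*j+2, 2*i:2*i+2] += blk.conj().T
    return H

# e.g. energies at one k point for the R3 phase of the right-handed state
evals = np.linalg.eigvalsh(kagome_sd_hamiltonian(np.array([0.2, 0.0]),
                                                 np.deg2rad(90.0), +1))
\end{verbatim}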
We first discuss the band structure, the IAHC, and the Berry curvature of the system with right-handed spin chirality ($\kappa=+1$), plotted in Figs.~\ref{fig2}(a),~\ref{fig2}(c), and~\ref{fig2}(d-g), respectively. The band structure significantly changes from $\theta=0^{\circ}$ (R1 phase, $\bar{3}1m$), to $30^{\circ}$ (R2 phase, $\bar{3}$), and to $90^{\circ}$ (R3 phase, $\bar{3}1m^{\prime}$). If $\theta=0^{\circ}$, two band crossings around 1.8 eV appear at the $K$ point and along the $M$-$K$ path, respectively. This band structure is identical to the one without SOC~\cite{H-Chen2014}, because the SOC term in Eq.~\eqref{eq:Hamiltonian} has no effect for the spin configuration with $\theta=0^{\circ}$, in the sense that the left-handed and right-handed environments of an electron hopping between nearest neighbors are uniform. Accordingly, the Berry curvature $\Omega_{xy}(\bm{k})$ vanishes everywhere in the Brillouin zone [Fig.~\ref{fig2}(d)]. The band degeneracy is lifted when $\theta\neq0^{\circ}$; with increasing $\theta$, the band gap at the $K$ point enlarges significantly, while the one at the $M$ point shrinks slightly. In order to disentangle the dependence of the IAHC on the band structure, the IAHC is calculated at different Fermi energies ($E_{F}$) including 1.8~eV, 0~eV, and $-$1.8~eV, shown in Fig.~\ref{fig2}(c). In all three cases, the IAHC exhibits a period of $2\pi$ in $\theta$, and the values for $E_{F}=\pm1.8$~eV are two orders of magnitude larger than the ones at $E_{F}=0$~eV. The large IAHC originates from the small band gap at the $M$ point since the Berry curvature shows sharp peaks there [see Figs.~\ref{fig2}(e-g)]. For $E_{F}=-1.8$~eV and $0$~eV, the largest IAHC occurs for $\theta=90^{\circ}$ and $270^{\circ}$. The case of $E_{F}=1.8$ eV is special since the IAHC is quantized to $\pm2e^{2}/\hbar$ in a broad range of $\theta$, revealing the presence of a quantum anomalous Hall state in coplanar noncollinear AFMs.
For the left-handed spin chirality ($\kappa=-1$), the band structure, the IAHC, and the Berry curvature are plotted in Figs.~\ref{fig2}(b),~\ref{fig2}(c), and~\ref{fig2}(h-k), respectively. The band structure hardly changes from $\theta=0^{\circ}$ ($2/m$), to $15^{\circ}$ ($\bar{1}$), and to $30^{\circ}$ ($2^{\prime}/m^{\prime}$). If $\theta=0^{\circ}$, the Berry curvature $\Omega_{xy}(\bm{k})$ is odd for the group $2/m$ [Fig.~\ref{fig2}(h)] such that the IAHC $\sigma_{xy}$ is zero when integrating $\Omega_{xy}(\bm{k})$ over the entire Brillouin zone. With increasing $\theta$, the IAHC reaches its maximum at $\theta=30^{\circ}$ and exhibits a period of $\frac{2\pi}{3}$ [Fig.~\ref{fig2}(c)]. Similarly to the $\kappa=+1$ state, the IAHC at $E_{F}=\pm1.8$~eV is two orders of magnitude larger than at $E_{F}=0$~eV. However, the IAHC of the $\kappa=-1$ state is much smaller than that of the $\kappa=+1$ state [Fig.~\ref{fig2}(c)]. This is understood based on the Berry curvature shown in Figs.~\ref{fig2}(i-k), which reveals that $\Omega_{xy}(\bm{k})$ at the three $M$ points has different signs (two negative and one positive, or two positive and one negative) due to the reduced symmetry in the $\kappa=-1$ state, in contrast to the same sign in the $\kappa=+1$ state [Figs.~\ref{fig2}(e-g)].
The tight-binding model used here is constructed on a two-dimensional kagome lattice, for which the $\sigma_{yz}$ and $\sigma_{zx}$ components vanish. Although the model is rather simple, the following qualitative results are useful: (1) the IAHC turns out to be large if the Fermi energy lies in a small band gap, as outlined in previous theoretical work~\cite{YG-Yao2004}; (2) $\sigma_{xy}$ has a period of $2\pi$ ($\frac{2\pi}{3}$) in $\theta$ for right-handed (left-handed) spin chirality; (3) for structures with right-handed spin chirality, $\sigma_{xy}$ is much larger than for the left-handed case.
\section{First-principles calculations}\label{sec4}
In this section, by computing explicitly the electronic structure of the Mn$_3X$N compounds with different spin orders, we first demonstrate that key properties of these systems follow the qualitative conclusions drawn from the discussed tight-binding model. Then, we present the values of the computed magnetic anisotropy energy (MAE) and the IAHC of the Mn$_{3}X$N compounds. The obtained in-plane easy spin orientations are consistent with previous reports~\cite{Mochizuki2018}, while the IAHC is found to depend strongly on the spin order, in agreement with the above tight-binding results. Taking Mn$_{3}$NiN as an example, we further discuss the longitudinal and transverse optical conductivity, which are key to evaluating the MOE. Finally, the spin-order dependent MOKE and MOFE as well as their anisotropy are explored. Computational details of the first-principles calculations are given in Appendix~\ref{appendix}.
\begin{table}[b!]
\caption{Magnetic anisotropy constant ($K_\text{eff}$) and the maximum of IAHC for Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The IAHC is listed in the order of $\sigma_{yz}$, $\sigma_{zx}$, and $\sigma_{xy}$. For the $\kappa=+1$ state, $\sigma_{xy}$ reaches its maximum at $\theta=90^{\circ}$. For the $\kappa=-1$ state, $\sigma_{yz}$, $\sigma_{zx}$, and $\sigma_{xy}$ reach their maxima at $\theta=120^{\circ}$, $30^{\circ}$, and $30^{\circ}$, respectively.}
\label{tab2}
\begin{ruledtabular}
\begingroup
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ccccc}
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{$K_\text{eff}$ (meV/cell)} &
\multicolumn{2}{c}{IAHC (S/cm)} \\
\cline{2-3}
\cline{4-5}
System & $\kappa=+1$ & $\kappa=-1$ & $\kappa=+1$ & $\kappa=-1$ \\
\hline
Mn$_{3}$GaN & 0.52 & 0.26 & 0, 0, $-$99 & 59, $-$67, $-$5 \\
Mn$_{3}$ZnN & 0.43 & 0.21 & 0, 0, $-$232 & 156, $-$174, 23 \\
Mn$_{3}$AgN & 0.15 & 0.08 & 0, 0, $-$359 & 344, $-$314, 72 \\
Mn$_{3}$NiN & $-$0.18 & $-$0.09 & 0, 0, $-$301 & 149, $-$134, 5 \\
\end{tabular}
\endgroup
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3}
\caption{(Color online) The first-principles band structures of (a,b)~Mn$_{3}$ZnN and (c,d)~Mn$_{3}$NiN for different spin orders ($\theta=0^{\circ}$, $30^{\circ}$, and $90^{\circ}$ for the right-handed state with $\kappa=+1$, and $\theta=0^{\circ}$, $15^{\circ}$, and $30^{\circ}$ for the left-handed state of opposite spin chirality). The $k$-path lies within the (111) plane.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig4}
\caption{(Color online) (a) Magnetic anisotropy energy of Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni) as a function of the azimuthal angle $\theta$. The results for left-handed spin chirality ($\kappa=-1$) are only shown in Mn$_{3}$ZnN and Mn$_{3}$NiN as the representatives. The solid and dotted lines are expressed by $\text{MAE}(\theta)=K_{\text{eff}}\sin^{2}(\theta)$ and $\text{MAE}=K_{\text{eff}}/2$, respectively. (b), (c) The IAHC of Mn$_{3}$ZnN and Mn$_{3}$NiN as a function of the azimuthal angle $\theta$ for both right-handed ($\kappa=+1$) and left-handed ($\kappa=-1$) spin chiralities. The solid and dotted lines are the polynomial fits to the data.}
\label{fig4}
\end{figure}
\subsection{Electronic structure}\label{sec4-0}
Figure~\ref{fig3} illustrates the first-principles band structures of the Mn$_{3}X$N systems, taking Mn$_{3}$ZnN and Mn$_{3}$NiN as two prototypical examples. While the electronic structure of the left-handed state with $\kappa=-1$ hardly changes as the spin-rotation angle $\theta$ is tuned, the right-handed state of opposite vector spin chirality is rather sensitive to details of the noncollinear spin configuration. Specifically, the calculated electronic structure for the $\kappa=+1$ state reveals that the band degeneracy (e.g., at the $\Gamma$ point) is lifted for $\theta\neq0^{\circ}$, and the magnitude of the band splitting increases with the spin-rotation angle. These features are in very good qualitative agreement with the tight-binding results [see Figs.~\ref{fig2}(a) and~\ref{fig2}(b)], which is rooted in the fact that the (111) planes of the Mn$_{3}X$N compounds and the 2D kagome lattice considered in the previous sections share common symmetries.
\subsection{Intrinsic anomalous Hall conductivity and magnetic anisotropy energy}\label{sec4-1}
The MAE is one of the most important parameters that characterize a magnetic material. In FMs, the MAE refers to the total energy difference between easy- and hard-axis magnetization directions. In the noncollinear AFMs that we consider here, we define the MAE as the total energy difference between different spin orders, given by
\begin{equation}\label{eq:MAE}
\text{MAE}(\theta)=E_{\kappa=\pm1,\theta\neq0^{\circ}}-E_{\kappa=+1,\theta=0^{\circ}},
\end{equation}
where the spin order with $\kappa=+1$ and $\theta=0^{\circ}$ is set as the reference state. The calculated MAE of Mn$_{3}X$N is plotted in Fig.~\ref{fig4}(a). For the $\kappa=+1$ state, the MAE can be fitted well to the uniaxial anisotropy $K_{\text{eff}}\sin^{2}(\theta)$, where $K_{\text{eff}}$ is the magnetic anisotropy constant listed in Tab.~\ref{tab2}. Compared to traditional Mn-based alloys, the value of $K_{\text{eff}}$ in Mn$_{3}X$N is comparable in magnitude to those of MnPt (0.51 meV/cell)~\cite{Umetsu2006}, MnPd ($-$0.57 meV/cell)~\cite{Umetsu2006}, MnNi ($-$0.29 meV/cell)~\cite{Umetsu2006}, and MnRh ($-$0.63 meV/cell)~\cite{Umetsu2006}, but is one order of magnitude smaller than in MnIr ($-$7.05 meV/cell)~\cite{Umetsu2006}, Mn$_3$Pt (2.8 meV/cell)~\cite{Kota2008}, and Mn$_3$Ir (10.42 meV/cell)~\cite{Szunyogh2009}. For the $\kappa=-1$ state, the MAE is approximately constant with a value of $K_{\text{eff}}/2$, indicating a vanishing in-plane anisotropy energy that allows a relatively easy rotation of the spins within the (111) plane. This feature has also been found in other noncollinear AFMs such as Mn$_{3}$Ir~\cite{Szunyogh2009}, Mn$_{3}$Ge~\cite{Nagamiya1982}, and Mn$_{3}$Sn~\cite{Nagamiya1982,Tomiyoshi1982,Nakatsuji2015}.
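As an illustration of how $K_{\text{eff}}$ is extracted, a least-squares fit of the uniaxial form to MAE($\theta$) data can be sketched as follows; the data below are synthetic, generated from the fitted form itself, and serve only to demonstrate the procedure:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def uniaxial(theta_deg, K_eff):
    """MAE(theta) = K_eff * sin^2(theta), theta in degrees."""
    return K_eff * np.sin(np.deg2rad(theta_deg))**2

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 180.0, 13)
mae = uniaxial(theta, 0.43) + rng.normal(0.0, 0.01, theta.size)  # synthetic data

(K_fit,), _ = curve_fit(uniaxial, theta, mae, p0=[0.5])
print(K_fit)   # recovers K_eff ~ 0.43 meV/cell for this synthetic example
\end{verbatim}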
\begin{figure*}
\includegraphics[width=0.95\textwidth]{Fig5}
\caption{(Color online) Energy dependence of the optical conductivity in Mn$_{3}$NiN. (a-b)~Real and imaginary parts of $\sigma_{xx}$ for the $\kappa=+1$ state. Characteristic peaks and valleys are marked by black arrows. (c-d)~Real and imaginary parts of $\sigma_{xy}$ for the $\kappa=+1$ state. (e-f)~The real and imaginary parts of $\sigma_{xx}$ for the $\kappa=-1$ state. (g-h), (i-j), (k-l)~The real and imaginary parts of $\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$ for the $\kappa=-1$ state.}
\label{fig5}
\end{figure*}
Fig.~\ref{fig4}(a) reveals that $\text{MAE}(\theta)=\text{MAE}(\theta+\pi)$, implying that the ground state of $120^{\circ}$ triangular spin order has a discrete two-fold degeneracy~\cite{H-Chen2014}. For the $\kappa=+1$ state, Mn$_{3}$GaN and Mn$_{3}$ZnN clearly prefer the R1 phase ($\theta=0^{\circ}$ or $180^{\circ}$), which is in full accordance with the $\Gamma^{5g}$ spin configuration identified in Ref.~\onlinecite{Mochizuki2018} using a classical spin model with frustrated exchange interactions and magnetic anisotropy. As the spin configuration is closely related to the number of valence electrons $n_{\nu}$ in the $X$ ion, Mochizuki \textit{et al.}~\cite{Mochizuki2018} proposed a mixture of the $\Gamma^{5g}$ and the $\Gamma^{4g}$ spin patterns in Mn$_{3}$AgN and Mn$_{3}$NiN due to the smaller $n_{\nu}$ (weaker $X$-ion crystal field) as compared to that of Mn$_{3}$GaN and Mn$_{3}$ZnN. In the present calculations, Mn$_{3}$AgN still hosts the $\Gamma^{5g}$ spin configuration but has a much smaller MAE compared to Mn$_{3}$GaN and Mn$_{3}$ZnN, while Mn$_{3}$NiN favors the $\Gamma^{4g}$ spin configuration (R3 phase, $\theta=90^{\circ}$ or $270^{\circ}$). Our calculated MAE is a monotonic function of $n_{\nu}$, i.e., Ni $(n_{\nu}=0)<$ Ag $(n_{\nu}=1)<$ Zn $(n_{\nu}=2)<$ Ga $(n_{\nu}=3)$, which provides a clear interpretation for the systematic evolution of the magnetic orders in Mn$_{3}X$N. On the other hand, the $\kappa=-1$ state of Mn$_{3}X$N has not been considered in previous works, while we find that it could exist for particular values of $\theta$. For example, the $\kappa=-1$ state in Mn$_{3}$NiN is energetically favorable in three ranges of $\theta$: $[0^{\circ},45^{\circ})$, $(135^{\circ},225^{\circ})$, and $(315^{\circ},360^{\circ}]$. In light of recent experiments on Mn$_{3}$Sn~\cite{Nakatsuji2015,Ikhlas2017} and Mn$_{3}$Ge~\cite{Nayak2016,Kiyohara2016}, an external magnetic field may be used to tune the spin orientation by coupling to the weak in-plane magnetic moment. This finding enriches the spectrum of possible magnetic orders in Mn$_{3}X$N compounds.
The IAHC of Mn$_{3}$ZnN and Mn$_{3}$NiN with different spin orders is illustrated in Figs.~\ref{fig4}(b) and~\ref{fig4}(c), respectively. The component $\sigma_{xy}$ displays a period of $2\pi$ ($\frac{2\pi}{3}$) in $\theta$ for the $\kappa=+1$ ($\kappa=-1$) state, and its magnitude in the $\kappa=+1$ state is much larger than that of the $\kappa=-1$ state, in excellent agreement with the tight-binding results. From the group theoretical analysis we showed that $\sigma_{yz}$ and $\sigma_{zx}$ are allowed in the $\kappa=-1$ state, which is confirmed by our first-principles results. Moreover, we observe that both $\sigma_{yz}$ and $\sigma_{zx}$ display a period of $2\pi$ in $\theta$ and their magnitudes are much larger than that of $\sigma_{xy}$. The maxima of the IAHC for the $\kappa=\pm1$ states are summarized in Tab.~\ref{tab2}. Overall, the obtained magnitude of the IAHC in the studied family of compounds is comparable to or even larger than that in other noncollinear AFMs like Mn$_3X$~\cite{H-Chen2014,Y-Zhang2017} and Mn$_{3}Y$~\cite{Kubler2014,GY-Guo2017,Y-Zhang2017,Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017}. In contrast to the MAE, the IAHC follows the relation $\boldsymbol{\sigma}(\theta)=-\boldsymbol{\sigma}(\theta+\pi)$, which reflects the fact that the spin state at $\theta+\pi$ is the time-reversed counterpart of that at $\theta$ and the IAHC is odd under time-reversal symmetry.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig6}
\caption{(Color online) Magneto-optical spectra of Mn$_{3}X$N for $X=$ Ga~(a), Zn~(b), Ag~(c), and Ni~(d) in the $\kappa=+1$ spin configuration. The panels from left to right show Kerr rotation angle $\theta^{z}_{K}$, Kerr ellipticity $\varepsilon^{z}_{K}$, Faraday rotation angle $\theta^{z}_{F}$, and Faraday ellipticity $\varepsilon^{z}_{F}$, respectively.}
\label{fig6}
\end{figure*}
\subsection{Optical conductivity}\label{sec4-2}
Before proceeding to the MOE, we evaluate the optical conductivity, the key quantity that underlies the MOE. Expanding on the expressions for the IAHC [Eqs.~\eqref{eq:IAHC} and~\eqref{eq:BerryCur}], the optical conductivity can be written as
\begin{eqnarray}\label{eq:optical}
\sigma_{\alpha\beta}(\omega) & = & \sigma^{\prime}_{\alpha\beta}(\omega) + i \sigma^{\prime\prime}_{\alpha\beta}(\omega) \nonumber \\
& = & \hbar e^{2}\int\frac{d^{3}k}{(2\pi)^{3}}\sum_{n\neq n^{\prime}}\left[f_{n}(\bm{k})-f_{n^{\prime}}(\bm{k})\right] \nonumber \\
& & \times\frac{\textrm{Im}\left[\left\langle \psi_{n\bm{k}}|v_{\alpha}|\psi_{n^{\prime}\bm{k}}\right\rangle \left\langle \psi_{n^{\prime}\bm{k}}|v_{\beta}|\psi_{n\bm{k}}\right\rangle\right] }{(\hbar\omega_{n\bm{k}}-\hbar\omega_{n^{\prime}\bm{k}})^{2}-(\hbar\omega+i\eta)^{2}},
\end{eqnarray}
where the superscript $^{\prime}$ ($^{\prime\prime}$) of $\sigma_{\alpha\beta}$ denotes its real (imaginary) part, $\eta$ is an adjustable smearing parameter in units of energy, and $\hbar\omega$ is the photon energy. Due to the similarity of the results among all studied systems, we take Mn$_{3}$NiN as a representative example for discussing the optical conductivity (Fig.~\ref{fig5}). The real part of the diagonal element, $\sigma^{\prime}_{xx}$ [see Figs.~\ref{fig5}(a) and~\ref{fig5}(e)], measures the average absorption of left- and right-circularly polarized light. The spectrum exhibits one absorptive peak at 1.8~eV with a shoulder at 1.1~eV and another absorptive peak at 3.9~eV. The imaginary part of the diagonal element, $\sigma^{\prime\prime}_{xx}$ [see Figs.~\ref{fig5}(b) and~\ref{fig5}(f)], is the dispersive part of the optical conductivity, revealing two distinct valleys at 0.6 eV and 3.4 eV. Obviously, $\sigma_{xx}$ is not affected by the spin order (neither spin chirality $\kappa$ nor azimuthal angle $\theta$). A similar behavior has been found in Mn$_{3}X$~\cite{WX-Feng2015}, where $\sigma_{xx}$ is identical for T1 and T2 spin structures. From the symmetry analysis~\cite{Seemann2015}, it should hold that $\sigma_{xx}=\sigma_{yy}\neq\sigma_{zz}$ for the magnetic point groups $\bar{3}1m$, $\bar{3}$, and $\bar{3}1m^{\prime}$ in the $\kappa=+1$ state, whereas $\sigma_{xx}\neq\sigma_{yy}\neq\sigma_{zz}$ for the magnetic point groups of $2/m$, $\bar{1}$, and $2^{\prime}/m^{\prime}$ in the $\kappa=-1$ state. However, all diagonal elements are approximately equal in our calculations, i.e., we observe that $\sigma_{xx}\approx\sigma_{yy}\approx\sigma_{zz}$. This indicates optical isotropy in the Mn$_{3}X$N family.
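The interband expression of Eq.~\eqref{eq:optical} can be evaluated per $\bm{k}$ point in close analogy to the Berry curvature above; a minimal sketch (with $\hbar=e=1$, illustrative names, and the Brillouin-zone integration weight left to the caller) is:
\begin{verbatim}
import numpy as np

def optical_sigma_k(evals, evecs, va, vb, occ, omega, eta):
    """Contribution of one k point to sigma_{ab}(omega) of Eq. (5), hbar = e = 1.

    evals, evecs: band energies and eigenvectors (columns) at k;
    va, vb: velocity operators; occ: occupations f_n(k);
    omega: photon energy; eta: smearing (same energy units)."""
    va_nm = evecs.conj().T @ va @ evecs
    vb_nm = evecs.conj().T @ vb @ evecs
    sig = 0.0 + 0.0j
    for n in range(len(evals)):
        for m in range(len(evals)):
            if m == n:
                continue
            num = np.imag(va_nm[n, m] * vb_nm[m, n])
            den = (evals[n] - evals[m])**2 - (omega + 1j * eta)**2
            sig += (occ[n] - occ[m]) * num / den
    return sig   # sum over a k mesh with weight d^3k/(2*pi)^3 to obtain sigma_{ab}
\end{verbatim}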
In contrast to the diagonal entries, the off-diagonal elements displayed in Figs.~\ref{fig5}(c,d) and~\ref{fig5}(g--l) depend significantly on the spin order. For the $\kappa=+1$ state [Figs.~\ref{fig5}(c,d)], $\sigma_{xy}(\omega)$ vanishes if $\theta=0^{\circ}$, but it increases with increasing $\theta$ and reaches its maximum at $\theta=90^{\circ}$. For the $\kappa=-1$ state [Figs.~\ref{fig5}(g--l)], all three off-diagonal elements, $\sigma_{xy}(\omega)$, $\sigma_{yz}(\omega)$, and $\sigma_{zx}(\omega)$, can be nonzero and they peak at $\theta=30^{\circ}$, $120^{\circ}$, and $30^{\circ}$, respectively. Furthermore, $\sigma_{xy}(\omega)$ is at least two orders of magnitude smaller than $\sigma_{yz}(\omega)$ and $\sigma_{zx}(\omega)$. The overall dependence of $\sigma_{xy}(\omega)$ on the spin order is very similar to that of the IAHC in Fig.~\ref{fig4}(c).
\begin{figure*}
\includegraphics[width=\textwidth]{Fig7}
\caption{(Color online) The magneto-optical spectra of Mn$_{3}$NiN for $\kappa=-1$ state: $\phi^{x}_{K,F}$ (a), $\phi^{y}_{K,F}$ (b), and $\phi^{z}_{K,F}$ (c). The panels from left to right are Kerr rotation angle, Kerr ellipticity, Faraday rotation angle, and Faraday ellipticity, respectively.}
\label{fig7}
\end{figure*}
\subsection{Magneto-optical Kerr and Faraday effects}\label{sec4-3}
We now turn to the magneto-optical Kerr and Faraday effects (MOKE and MOFE). The characteristic Kerr rotation angle $\theta_{K}$ and the ellipticity $\varepsilon_{K}$ are typically combined into the complex Kerr angle given by~\cite{Kahn1969,GY-Guo1994,GY-Guo1995}
\begin{eqnarray}\label{eq:kerr}
\phi^{\gamma}_{K} & = & \theta^{\gamma}_{K}+i\varepsilon^{\gamma}_{K} \nonumber \\
& = & \frac{-\nu_{\alpha\beta\gamma}\sigma_{\alpha\beta}}{\sigma_{0}\sqrt{1+i\left(4\pi/\omega\right)\sigma_{0}}},
\end{eqnarray}
where $\nu_{\alpha\beta\gamma}$ is the 3D Levi-Civita symbol with the Cartesian coordinates $\alpha,\beta,\gamma\in\{x,y,z\}$ and $\sigma_{0}=\frac{1}{2}(\sigma_{\alpha\alpha}+\sigma_{\beta\beta})\approx\sigma_{\alpha\alpha}$. The complex Kerr angle expressed here, similarly to the IAHC, takes a pseudovector form, i.e., $\boldsymbol{\phi}_{K}=[\phi^{x}_{K},\phi^{y}_{K},\phi^{z}_{K}]$, which differentiates the Kerr angles when the incident light propagates along different crystallographic axes. One can read from Eq.~\eqref{eq:kerr} that the longitudinal optical conductivity $\sigma_{\alpha\alpha}$ modulates the magnitude of the Kerr spectrum, while the transverse optical conductivity $\sigma_{\alpha\beta}$ determines key features of the Kerr spectrum. For example, only the component $\phi^{z}_{K}$ is finite for the $\kappa=+1$ state, whereas all components $\phi^{x}_{K}$, $\phi^{y}_{K}$, and $\phi^{z}_{K}$ are nonzero in the $\kappa=-1$ configuration. More importantly, $\phi^{x}_{K}\neq\phi^{y}_{K}\neq\phi^{z}_{K}$ implies the presence of magneto-optical anisotropy if the incident light propagates along the $x$ ($01\bar{1}$), $y$ ($\bar{2}11$), and $z$ ($111$) axes (see Fig.~\ref{fig1}), respectively. Similarly, the complex Faraday angle can be expressed as~\cite{Reim1990}
\begin{eqnarray}\label{eq:faraday}
\phi^{\gamma}_{F} & = & \theta^{\gamma}_{F}+i\varepsilon^{\gamma}_{F} \nonumber \\
& = & \nu_{\alpha\beta\gamma}(n_{+}-n_{-})\frac{\omega l}{2c},
\end{eqnarray}
where $n_{\pm}=[1+\frac{4\pi i}{\omega}(\sigma_{\alpha\alpha}\pm i\sigma_{\alpha\beta})]^{1/2}$ are the complex refractive indices and $l$ is the thickness of the thin film. Since $\sigma_{\alpha\alpha}$ is generally much larger than $\sigma_{\alpha\beta}$ (see Fig.~\ref{fig5}), $n_{\pm}\approx[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{1/2}\mp\frac{2\pi}{\omega}\sigma_{\alpha\beta}[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{-1/2}$ (Ref.~\onlinecite{YM-Fang2018}) and consequently, the complex Faraday angle can be approximated as $\theta^{\gamma}_{F}+i\varepsilon^{\gamma}_{F} = -\nu_{\alpha\beta\gamma}\frac{2\pi l}{c}\sigma_{\alpha\beta}[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{-1/2}$. Therefore, the Faraday spectrum is also determined by $\sigma_{\alpha\beta}$.
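Once the optical conductivities are available, Eqs.~\eqref{eq:kerr} and~\eqref{eq:faraday} reduce to simple algebra; a minimal sketch for light propagating along the $z$ axis, assuming Gaussian units with all conductivities and $\omega$ in s$^{-1}$ and the film thickness in cm, is:
\begin{verbatim}
import numpy as np

def kerr_faraday_z(sigma_xx, sigma_xy, omega, thickness, c=2.998e10):
    """Complex Kerr and Faraday angles for light along z, Eqs. (6) and (7).

    Gaussian units: sigma_xx, sigma_xy, omega in 1/s; thickness in cm."""
    s0 = sigma_xx                                   # sigma_0 ~ sigma_xx
    phi_K = -sigma_xy / (s0 * np.sqrt(1.0 + 1j * 4.0 * np.pi * s0 / omega))
    n_p = np.sqrt(1.0 + 4j * np.pi / omega * (s0 + 1j * sigma_xy))
    n_m = np.sqrt(1.0 + 4j * np.pi / omega * (s0 - 1j * sigma_xy))
    phi_F = (n_p - n_m) * omega * thickness / (2.0 * c)
    # rotation angle = real part, ellipticity = imaginary part (in degrees)
    return (np.degrees(phi_K.real), np.degrees(phi_K.imag),
            np.degrees(phi_F.real), np.degrees(phi_F.imag))
\end{verbatim}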
The magneto-optical Kerr and Faraday spectra for the spin order with $\kappa=+1$ in Mn$_{3}X$N are plotted in Fig.~\ref{fig6}, where only $\phi^{z}_{K,F}$ are shown since all other components vanish. Taking Mn$_{3}$NiN as an example [Fig.~\ref{fig6}(d)], one can observe that the Kerr and Faraday spectra indeed inherit the behavior of the optical conductivity $\sigma_{xy}(\omega)$ [Figs.~\ref{fig5}(c) and~\ref{fig5}(d)]. For example, the Kerr and Faraday angles are zero when $\theta=0^{\circ}$, increase with increasing $\theta$, and reach their maximum at $\theta=90^{\circ}$. This indicates that the symmetry requirements for MOKE and MOFE are the same as those for the optical Hall conductivity. In addition, all Mn$_{3}X$N compounds considered here have similar Kerr and Faraday spectra, primarily due to their isostructural nature. The Kerr rotation angles in Mn$_{3}X$N are comparable to the theoretical values in Mn$_{3}X$ ($0.2\sim0.6$ deg)~\cite{WX-Feng2015} and are larger than the experimental value in Mn$_{3}$Sn (0.02 deg)~\cite{Higo2018}. The largest Kerr and Faraday rotation angles, 0.42 deg and $4\times10^{5}$ deg/cm, respectively, emerge in Mn$_{3}$AgN. This potentially originates from the stronger SOC of the Ag atom as compared to the other, lighter $X$ atoms.
Figure~\ref{fig7} shows the magneto-optical Kerr and Faraday spectra for the $\kappa=-1$ state of Mn$_{3}$NiN. Since all off-diagonal elements $\sigma_{yz}(\omega)$, $\sigma_{zx}(\omega)$, and $\sigma_{xy}(\omega)$ of the optical conductivity are nonzero for the $\kappa=-1$ state, the Kerr and Faraday effects will appear if the incident light propagates along any of the Cartesian axes. This is in contrast to the case of the $\kappa=+1$ configuration, for which only the incident light along the $z$ axis generates finite $\phi^{z}_{K,F}$. In Fig.~\ref{fig7}(a), $\phi^{x}_{K,F}$ are zero at $\theta=30^{\circ}$ but have the largest values at $\theta=120^{\circ}$, owing to the features in $\sigma_{yz}$ [Figs.~\ref{fig5}(i) and~\ref{fig5}(j)]. Moreover, the Kerr and Faraday rotation angles ($\theta^{x}_{K}$ and $\theta^{x}_{F}$) and the ellipticity ($\varepsilon^{x}_{K}$ and $\varepsilon^{x}_{F}$) resemble, respectively, the real part ($\sigma^{\prime}_{yz}$) and imaginary part ($\sigma^{\prime\prime}_{yz}$) of the corresponding off-diagonal conductivity element. Compared to $\phi^{x}_{K,F}$, the angle $\phi^{y}_{K,F}$ in Fig.~\ref{fig7}(b) displays an opposite behavior in the sense that it has the largest values at $\theta=30^{\circ}$ but vanishes at $\theta=120^{\circ}$. This is not surprising, as $\sigma_{yz}$ and $\sigma_{zx}$ are shifted by $\frac{\pi}{2}$ in $\theta$ relative to each other, which can be read from Fig.~\ref{fig4}(c) and Figs.~\ref{fig5}(i-l). The angles $\phi^{z}_{K,F}$ shown in Fig.~\ref{fig7}(c) are two orders of magnitude smaller than $\phi^{x}_{K,F}$ and $\phi^{y}_{K,F}$, implying that very weak Kerr and Faraday effects are expected for incident light along the $z$ axis. From Figs.~\ref{fig6} and~\ref{fig7}, we conclude that the MOKE and MOFE depend strongly on the spin order, as in the case of the IAHC.
\section{Summary}\label{sec5}
In summary, using a group theoretical analysis, tight-binding modelling, and first-principles calculations, we have systematically investigated the spin-order dependent intrinsic anomalous Hall effect and magneto-optical Kerr and Faraday effects in Mn$_3X$N ($X$ = Ga, Zn, Ag, and Ni) compounds, which are considered to be an important class of noncollinear antiferromagnets. The symmetry-imposed shape of the anomalous Hall conductivity tensor is determined via the analysis of magnetic point groups, that is, only $\sigma_{xy}$ can be nonzero for the right-handed spin chirality ($\kappa=+1$) while finite $\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$ exist for the left-handed spin chirality ($\kappa=-1$). Our tight-binding modelling confirms these results and further reveals that $\sigma_{xy}$ is a \textit{sine}-like function of the azimuthal angle $\theta$ with a period of $2\pi$ ($\frac{2\pi}{3}$) for the $\kappa=+1$ ($\kappa=-1$) state. By examining the $\bm{k}$-resolved Berry curvature, we uncovered that the intrinsic anomalous Hall conductivity is generally large if the Fermi energy lies in regions with small band gaps formed at band anticrossings. The first-principles calculations reproduce all features of $\sigma_{xy}$ and further verify that $\sigma_{yz}$ and $\sigma_{zx}$ have a period of $2\pi$ for the $\kappa=-1$ state. The intrinsic anomalous Hall conductivity shows a distinct relation of $\boldsymbol{\sigma}(\theta)=-\boldsymbol{\sigma}(\theta+\pi)$ due to its odd nature under time-reversal symmetry. In addition, we have calculated the magnetic anisotropy energy, which behaves as $K_{\textrm{eff}}\sin^{2}(\theta)$ for the $\kappa=+1$ state but remains nearly constant at $K_{\textrm{eff}}/2$ for the $\kappa=-1$ state. A discrete two-fold energy degeneracy, i.e., $\text{MAE}(\theta)=\text{MAE}(\theta+\pi)$, is found in the noncollinear antiferromagnetic Mn$_3X$N. Strikingly, our first-principles calculations reveal that the $\kappa=-1$ state could exist in Mn$_3X$N for certain values of $\theta$.
The optical conductivities for the $\kappa=\pm1$ states were explored, considering Mn$_3$NiN as a prototypical example. We find that the spin order hardly affects the diagonal elements, whereas it strongly influences the off-diagonal entries. Optical isotropy is established since $\sigma_{xx}(\omega)\approx\sigma_{yy}(\omega)\approx\sigma_{zz}(\omega)$, while magneto-optical anisotropy occurs inevitably as $\sigma_{xy}(\omega)\neq\sigma_{yz}(\omega)\neq\sigma_{zx}(\omega)$. Finally, magneto-optical Kerr and Faraday effects are evaluated based on the optical conductivity. The largest Kerr rotation angles in Mn$_3X$N amount to 0.4 deg, which is comparable to that of other noncollinear antiferromagnets, e.g., Mn$_{3}X$~\cite{WX-Feng2015} and Mn$_{3}$Sn~\cite{Higo2018}. Since the optical Hall conductivity plays a major role in the magneto-optical effects, the Kerr and Faraday spectra also display a spin-order dependent behavior. Our work illustrates that complex noncollinear spin structures could be probed via measurements of the anomalous Hall and magneto-optical effects.
\begin{acknowledgments}
W. F. and Y. Y. acknowledge the support from the National Natural Science Foundation of China (Nos. 11874085 and 11734003) and the National Key R\&D Program of China (No. 2016YFA0300600). W. F. also acknowledges the funding through an Alexander von Humboldt Fellowship. Y.M., J.-P. H. and S.B. acknowledge funding under SPP 2137 ``Skyrmionics" (project MO 1731/7-1), collaborative Research Center SFB 1238, and Y.M. acknowledges funding from project MO 1731/5-1 of Deutsche Forschungsgemeinschaft (DFG). G.-Y. G. is supported by the Ministry of Science and Technology and the Academia Sinica as well as NCTS and Kenda Foundation in Taiwan. We acknowledge computing time on the supercomputers JUQUEEN and JURECA at J\"ulich Supercomputing Centre and JARA-HPC of RWTH Aachen University.
\end{acknowledgments}
\section{Introduction}
Among the most promising approaches to the global optimization
of an unknown function under reasonable smoothness assumptions
are extensions of the multi-armed bandit setup.
\cite{Bubeck2009} highlighted the connection between cumulative regret
and simple regret which facilitates fair comparison between methods
and \cite{Bubeck2011} proposed bandit algorithms on a metric space $\cX$, called $\cX$-armed bandits.
In this context, theory and algorithms have been developed
in the case where the expected reward is a function $f:\cX\to\bR$
which satisfies certain smoothness conditions such as Lipschitz or H\"older continuity
\citep{Kleinberg2004, Kocsis2006, Auer2007, Kleinberg2008, Munos2011}.
Another line of work is the Bayesian optimization framework \citep{Jones1998, Bull2011, Mockus2012}
for which the unknown function $f$ is assumed to be the realization of a prior stochastic process distribution,
typically a Gaussian process.
An efficient algorithm that can be derived in this framework is the popular GP-UCB algorithm
due to \cite{Srinivas2012}.
However, an important limitation of upper confidence bound (UCB) strategies
without a smoothness condition
is that the search space has to be {\em finite} with bounded cardinality,
a fact which is well known but, to our knowledge,
has not been discussed so far in the related literature.
In this paper, we propose an approach which improves both lines of work with respect to their present limitations.
Our purpose is to: (i) relax smoothness assumptions that limit the relevance
of $\cX$-armed bandits in practical situations where target functions may only display random smoothness,
and (ii) extend the UCB strategy to arbitrary sets $\cX$.
Here we will assume that $f$, being the realization of a given stochastic process distribution,
fulfills a \emph{probabilistic smoothness} condition.
We will consider the stochastic process bandit setup and we develop a UCB algorithm
based on {\em generic chaining} \citep{Bogachev1998,Adler2009,Talagrand2014,Gine2015}.
Using the generic chaining construction,
we compute hierarchical discretizations of $\cX$ in the form of chaining trees
in a way that permits precise control of the discretization error.
The UCB algorithm then applies on these successive discrete subspaces
and chooses the accuracy of the discretization at each iteration
so that the cumulative regret it incurs matches the state-of-the-art bounds on finite $\cX$.
In the paper, we propose an algorithm which computes a generic chaining tree
for an arbitrary stochastic process in quadratic time.
We show that, with high probability, this tree is optimal for classes such as Gaussian processes.
Our theoretical contributions have an impact in the two contexts mentioned above.
From the bandit and global optimization point of view,
we provide a generic algorithm that incurs state-of-the-art regret on stochastic process objectives
including non-trivial functionals of Gaussian processes such as
the sum of squares of Gaussian processes (in the spirit of mean-square-error minimization),
or nonparametric Gaussian processes on ellipsoids (RKHS classes),
or the Ornstein-Uhlenbeck process, which was conjectured impossible in \citep{Srinivas2010} and \citep{Srinivas2012}.
From the point of view of Gaussian process theory,
the generic chaining algorithm leads to tight bounds on the supremum of the process
in probability and not only in expectation.
The remainder of the paper is organized as follows.
In Section~\ref{sec:framework}, we present the stochastic process bandit framework over continuous spaces.
Section~\ref{sec:chaining} is devoted to the construction of generic chaining trees for search space discretization.
Regret bounds are derived in Section~\ref{sec:regret} after choosing adequate discretization depth.
Finally, lower bounds are established in Section~\ref{sec:lower_bound}.
\section{Stochastic Process Bandits Framework}
\label{sec:framework}
We consider the optimization of an unknown function $f:\cX\to\bR$
which is assumed to be sampled from a given separable stochastic process distribution.
The input space $\cX$ is an arbitrary space not restricted to subsets of $\bR^D$,
and we will see in the next section how the geometry of $\cX$ for a particular metric
is related to the hardness of the optimization.
An algorithm iterates the following:
\begin{itemize}
\item it queries $f$ at a point $x_i$ chosen with the previously acquired information,
\item it receives a noisy observation $y_i=f(x_i)+\epsilon_i$,
\end{itemize}
where the $(\epsilon_i)_{1\le i \le t}$ are independent centered Gaussian variables $\cN(0,\eta^2)$ with known variance.
We evaluate the performances of such an algorithm using $R_t$ the cumulative regret:
\[R_t = t\sup_{x\in\cX}f(x) - \sum_{i=1}^t f(x_i)\,.\]
This objective is not observable in practice,
and our aim is to give theoretical upper bounds that hold with arbitrarily high probability
in the form:
\[\Pr\big[R_t \leq g(t,u)\big] \geq 1-e^{-u}\,.\]
Since the stochastic process is separable, the supremum over $\cX$ can be replaced by
the supremum over all finite subsets of $\cX$ \citep{Boucheron2013}.
Therefore we can assume without loss of generality that $\cX$ is finite with arbitrary cardinality.
We discuss practical approaches to handle continuous spaces in Appendix~\ref{sec:greedy_cover}.
Note that the probabilities are taken under the product space of both the stochastic process $f$ itself
and the independent Gaussian noises $(\epsilon_i)_{1\le i\le t}$.
The algorithm faces the exploration-exploitation tradeoff.
It has to decide between reducing the uncertainty on $f$
and maximizing the rewards.
In some applications one may be interested in finding the maximum of $f$ only,
that is minimizing $S_t$ the simple regret:
\[S_t = \sup_{x\in\cX}f(x) - \max_{i\leq t}f(x_i)\,.\]
We will reduce our analysis to this case by simply observing that $S_t\leq \frac{R_t}{t}$.
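For concreteness, the interaction protocol and the regret accounting above can be summarized by the following sketch, in which the query rule \texttt{select} stands for an arbitrary policy and all names are illustrative:
\begin{verbatim}
import numpy as np

def run_bandit(f, X, select, t, eta, seed=0):
    """Simulate the protocol: query, observe y_i = f(x_i) + eps_i, track regret.

    f: the (unknown) objective, used here only to simulate observations;
    X: finite list of candidate points; select: policy mapping the history
    [(x_1, y_1), ...] to the next query point; eta: noise standard deviation."""
    rng = np.random.default_rng(seed)
    history, values = [], []
    for _ in range(t):
        x = select(history, X)
        y = f(x) + rng.normal(0.0, eta)
        history.append((x, y))
        values.append(f(x))
    f_star = max(f(x) for x in X)
    R_t = t * f_star - sum(values)     # cumulative regret
    S_t = f_star - max(values)         # simple regret, S_t <= R_t / t
    return R_t, S_t
\end{verbatim}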
\paragraph{Confidence Bound Algorithms and Discretization.}
To deal with the uncertainty,
we adopt the \emph{optimistic optimization} paradigm
and compute high confidence intervals where the values $f(x)$ lie with high probability,
and then query the point maximizing the upper confidence bound \citep{Auer2002}.
A naive approach would use a union bound over all $\cX$
to get the high confidence intervals at every point $x\in\cX$.
This would work for a search space with fixed cardinality $\abs{\cX}$,
resulting in a factor $\sqrt{\log\abs{\cX}}$ in the Gaussian case,
but this fails when $\abs{\cX}$ is unbounded,
typically a grid of high density approximating a continuous space.
In the next section,
we tackle this challenge by employing {\em generic chaining} to build hierarchical discretizations of $\cX$.
\section{Discretizing the Search Space via Generic Chaining}
\label{sec:chaining}
\subsection{The Stochastic Smoothness of the Process}
Let $\ell_u(x,y)$ for $x,y\in\cX$ and $u\geq 0$ be the following confidence bound on the increments of $f$:
\[\ell_u(x,y) = \inf\Big\{s\in\bR: \Pr[f(x)-f(y) > s] < e^{-u}\Big\}\,.\]
In short, $\ell_u(x,y)$ is the best bound satisfying $\Pr\big[f(x)-f(y) \geq \ell_u(x,y)\big] < e^{-u}$.
For particular distributions of $f$, it is possible to obtain closed formulae for $\ell_u$.
However, in the present work we will consider upper bounds on $\ell_u$.
Typically, if $f$ is distributed as a centered Gaussian process of covariance $k$,
which we denote $f\sim\cGP(0,k)$, we know that $\ell_u(x,y) \leq \sqrt{2u}d(x,y)$,
where $d(x,y)=\big(\E(f(x)-f(y))^2\big)^{\frac 1 2}$ is the canonical pseudo-metric of the process.
More generally, if there exists a pseudo-metric $d(\cdot,\cdot)$ and a function $\psi(\cdot,\cdot)$
bounding the logarithm of the moment-generating function of the increments, that is,
\[\log \E e^{\lambda(f(x)-f(y))} \leq \psi(\lambda,d(x,y))\,,\]
for $x,y\in\cX$ and $\lambda\in I \subseteq \bR$,
then using the Chernoff bounding method \citep{Boucheron2013},
\[\ell_u(x,y) \leq \psi^{*-1}(u,d(x,y))\,,\]
where $\psi^*(s,\delta)=\sup_{\lambda\in I}\big\{\lambda s - \psi(\lambda,\delta)\big\}$
is the Fenchel-Legendre dual of $\psi$
and $\psi^{*-1}(u,\delta)=\inf\big\{s\in\bR: \psi^*(s,\delta)>u\big\}$
denotes its generalized inverse.
In that case, we say that $f$ is a $(d,\psi)$-process.
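As a quick sanity check of this dual formulation, consider again the Gaussian case: $f(x)-f(y)\sim\cN\big(0,d^2(x,y)\big)$, so that $\psi(\lambda,\delta)=\frac{\lambda^2\delta^2}{2}$ with $I=\bR$. Then
\[\psi^*(s,\delta)=\sup_{\lambda\in\bR}\Big\{\lambda s-\frac{\lambda^2\delta^2}{2}\Big\}=\frac{s^2}{2\delta^2}
\quad\text{and}\quad
\psi^{*-1}(u,\delta)=\delta\sqrt{2u}\,,\]
so that the Chernoff bounding method recovers $\ell_u(x,y)\leq\sqrt{2u}\,d(x,y)$ as stated above.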
For example if $f$ is sub-Gamma, that is:
\begin{equation}
\label{eq:sub_gamma}
\psi(\lambda,\delta)\leq \frac{\nu \lambda^2 \delta^2}{2(1-c\lambda \delta)}\,,
\end{equation}
we obtain,
\begin{equation}
\label{eq:sub_gamma_tail}
\ell_u(x,y) \leq \big(c u + \sqrt{2\nu u}\big) d(x,y)\,.
\end{equation}
The generality of Eq.~\ref{eq:sub_gamma} makes it convenient to derive bounds
for a wide variety of processes beyond Gaussian processes,
as we see for example in Section~\ref{sec:gp2}.
\subsection{A Tree of Successive Discretizations}
As stated in the introduction, our strategy to obtain confidence intervals
for stochastic processes is by successive discretization of $\cX$.
We define a notion of tree that will be used for this purpose.
A collection $\cT=\big(\cT_h\big)_{h\geq 0}$ of subsets $\cT_h\subset\cX$ is a tree
with parent relationship $p:\cX\to\cX$ when, for all $h\geq 0$ and all $x\in \cT_{h+1}$, its parent satisfies
$p(x)\in \cT_h$.
We denote by $\cT_{\leq h}$ the set of the nodes of $\cT$ at depth lower than $h$:
$\cT_{\leq h} = \bigcup_{h'\leq h} \cT_{h'}$.
For $h\geq 0$ and a node $x\in \cT_{h'}$ with $h\leq h'$,
we also denote by $p_h(x)$ its parent at depth $h$,
that is $p_h(x) = p^{h'-h}(x)$
and we write $x\succ s$ when $s$ is a parent of $x$.
To simplify the notations in the sequel,
we extend the relation $p_h$ to $p_h(x)=x$ when $x\in\cT_{\leq h}$.
We now introduce a powerful inequality bounding the supremum of the difference of $f$
between a node and any of its descendants in $\cT$,
provided that $\abs{\cT_h}$ is not excessively large.
\begin{theorem}[Generic Chaining Upper Bound]
\label{thm:chaining}
Fix any $u>0$, $a>1$ and $\big(n_h\big)_{h\in\bN}$ an increasing sequence of integers.
Set $u_i=u+n_i+\log\big(i^a\zeta(a)\big)$
where $\zeta$ is the Riemann zeta function.
Then for any tree $\cT$ such that $\abs{\cT_h}\leq e^{n_h}$,
\[\forall h\geq 0, \forall s\in\cT_h,~ \sup_{x\succ s} f(x)-f(s) \leq \omega_h\,,\]
holds with probability at least $1-e^{-u}$,
where,
\[\omega_h = \sup_{x\in\cX} \sum_{i> h} \ell_{u_i}\big(p_i(x), p_{i-1}(x)\big)\,.\]
\end{theorem}
The full proof of the theorem can be found in Appendix~\ref{sec:proof_chaining}.
It relies on repeated application of the union bound over the $e^{n_i}$ pairs $\big(p_i(x),p_{i-1}(x)\big)$.
Now, if we look at $\cT_h$ as a discretization of $\cX$
where a point $x\in\cX$ is approximated by $p_h(x)\in\cT_h$,
this result can be read in terms of discretization error,
as stated in the following corollary.
\begin{corollary}[Discretization error of $\cT_h$]
\label{cor:chaining}
Under the assumptions of Theorem~\ref{thm:chaining}
with $\cX=\cT_{\leq h_0}$ for $h_0$ large enough, we have that,
\[\forall h, \forall x\in\cX,~ f(x)-f(p_h(x)) \leq \omega_h\,,\]
holds with probability at least $1-e^{-u}$.
\end{corollary}
\subsection{Geometric Interpretation for $(d,\psi)$-processes}
\label{sec:psi_process}
The previous inequality suggests that to obtain a good upper bound on the discretization error,
one should take $\cT$ such that $\ell_{u_i}(p_i(x),p_{i-1}(x))$
is as small as possible for every $i>0$ and $x\in\cX$.
We specify what it implies for $(d,\psi)$-processes.
In that case, we have:
\[\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\Big(u_i,d\big(p_i(x),p_{i-1}(x)\big)\Big)\,.\]
Writing $\Delta_i(x)=\sup_{x'\succ p_i(x)}d(x',p_i(x))$
the $d$-radius of the ``cell'' at depth $i$ containing $x$,
we remark that $d(p_i(x),p_{i-1}(x))\leq \Delta_{i-1}(x)$,
that is:
\[
\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\big(u_i,\Delta_{i-1}(x)\big)\,.
\]
In order to make this bound as small as possible,
one should spread the points of $\cT_h$ in $\cX$
so that $\Delta_h(x)$ is evenly small,
while satisfying the requirement $\abs{\cT_h}\leq e^{n_h}$.
Let $\Delta = \sup_{x,y\in\cX}d(x,y)$ and $\epsilon_h=\Delta 2^{-h}$,
and define an $\epsilon$-net as a set $T\subseteq \cX$ for which $\cX$ is covered by $d$-balls
of radius $\epsilon$ with centers in $T$.
Then if one takes $n_h=2\log N(\cX,d,\epsilon_h)$, twice the metric entropy of $\cX$,
that is, the logarithm of the cardinality of a minimal $\epsilon_h$-net,
we obtain with probability at least $1-e^{-u}$ that
$\forall h\geq 0, \forall s\in\cT_h$\,:
\begin{equation}
\label{eq:classical_chaining}
\sup_{x\succ s}f(x)-f(s) \leq \sum_{i>h} \psi^{*-1}(u_i, \epsilon_i)\,,
\end{equation}
where $u_i= u+2\log N(\cX,d,\epsilon_i)+\log(i^a\zeta(a))$.
The tree $\cT$ achieving this bound is obtained by computing a minimal $\epsilon$-net at each depth,
which can be done efficiently by Algorithm~\ref{alg:greedy_cover}
if one is satisfied with an almost optimal heuristic
exhibiting an approximation ratio of $\max_{x\in\cX} \sqrt{\log \log \abs{\cB(x,\epsilon)}}$,
as discussed in Appendix~\ref{sec:greedy_cover}.
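To make the construction concrete, the following minimal Python sketch implements a standard greedy covering heuristic; it is only an illustration of the $\epsilon$-net computation referred to above, not the exact Algorithm~\ref{alg:greedy_cover}, and the pseudo-metric \texttt{dist} and the finite point set are assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

def greedy_epsilon_net(points, dist, eps):
    """Greedy covering heuristic: repeatedly promote an uncovered point to a
    center until every point lies within distance eps of some center."""
    centers = []
    uncovered = list(range(len(points)))
    while uncovered:
        c = uncovered[0]                    # any uncovered point becomes a center
        centers.append(points[c])
        uncovered = [i for i in uncovered   # drop the points now covered
                     if dist(points[i], points[c]) > eps]
    return centers

# toy usage: an eps-net of random points of [0,1]^2 for the Euclidean metric
rng = np.random.default_rng(0)
X = [rng.random(2) for _ in range(200)]
net = greedy_epsilon_net(X, lambda a, b: np.linalg.norm(a - b), eps=0.2)
print(len(net), "centers")
\end{verbatim}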
This technique is often called \emph{classical chaining} \citep{Dudley1967}
and we note that an implementation appears in \cite{Contal2015} on real data.
However, the upper bound in Eq.~\ref{eq:classical_chaining}
is not tight, as can be seen for instance with a Gaussian process indexed by an ellipsoid,
as discussed in Section~\ref{sec:gp}.
We will present later in Section~\ref{sec:lower_bound} an algorithm to compute a tree $\cT$
in quadratic time leading to both a lower and upper bound on $\sup_{x\succ s}f(x)-f(s)$
when $f$ is a Gaussian process.
The previous inequality is particularly convenient when we know a bound
on the growth of the metric entropy of $(\cX,d)$, as stated in the following corollary.
\begin{corollary}[Sub-Gamma process with metric entropy bound]
\label{cor:subgamma_bigoh}
If $f$ is sub-Gamma and there exists $R,D\in\bR$ such that for all $\epsilon>0$,
$N(\cX,d,\epsilon) \leq (\frac R \epsilon)^D$, then with probability at least $1-e^{-u}$\,:
\[\forall h\geq 0,\forall s\in\cT_h,~ \sup_{x\succ s}f(x)-f(s) =\cO\Big(\big(c(u + D h)+\sqrt{\nu(u+Dh)}\big) 2^{-h}\Big)\,.\]
\end{corollary}
\begin{proof}
With the condition on the growth of the metric entropy,
we obtain $u_i = \cO\big(u+D\log R + D i\big)$.
With Eq.~\ref{eq:classical_chaining} for a sub-Gamma process we get,
knowing that $\sum_{i=h}^\infty i 2^{-i} =\cO\big(h 2^{-h}\big)$
and $\sum_{i=h}^\infty \sqrt{i}2^{-i}=\cO\big(\sqrt{h}2^{-h}\big)$,
that $\omega_h = \cO\Big(\big(c (u+D h) + \sqrt{\nu(u+D h)}\big)2^{-h}\Big)$.
\end{proof}
Note that the conditions of Corollary~\ref{cor:subgamma_bigoh} are fulfilled
when $\cX\subset [0,R]^D$ and there is $c\in\bR$ such that for all $x,y\in\cX,~d(x,y) \leq c\norm{x-y}_2$,
by simply cutting $\cX$ in hyper-cubes of side length $\epsilon$.
We also remark that this condition is very close to the near-optimality dimension of the metric space $(\cX,d)$
defined in \cite{Bubeck2011}.
However, our condition constrains the entire search space $\cX$
instead of the near-optimal set $\cX_\epsilon = \big\{ x\in\cX: f(x)\geq \sup_{x^\star\in\cX}f(x^\star)-\epsilon\big\}$.
Controlling the dimension of $\cX_\epsilon$ may allow one to obtain an exponential decay of the regret
for particular deterministic functions $f$ with a quadratic behavior near their maximum.
However, to our knowledge no progress has been made in this direction for stochastic processes
without constraining their behavior around the maximum.
A reader interested in this subject may look at
the recent work by \cite{Grill2015} on smooth and noisy functions with unknown smoothness,
and the works by \cite{Freitas2012} or \cite{Wang2014b}
on Gaussian processes without noise and a quadratic local behavior.
\section{Regret Bounds for Bandit Algorithms}
\label{sec:regret}
Now that we have a tool to discretize $\cX$ at a given accuracy,
we show how to derive an optimization strategy on $\cX$.
\subsection{High Confidence Intervals}
Assume that given $i-1$ observations $Y_{i-1}=(y_1,\dots,y_{i-1})$ at queried locations $X_{i-1}$,
we can compute $L_i(x,u)$ and $U_i(x,u)$ for all $u>0$ and $x\in\cX$, such that:
\[ \Pr\Big[ f(x) \in \big(L_i(x,u), U_i(x,u)\big) \Big] \geq 1-e^{-u}\,.\]
Then for any $h(i)>0$ that we will carefully choose later,
we obtain by a union bound on $\cT_{h(i)}$ that:
\[ \Pr\Big[ \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u+n_{h(i)}), U_i(x,u+n_{h(i)})\big) \Big] \geq 1-e^{-u}\,.\]
And by an additional union bound on $\bN$ that:
\begin{equation}
\label{eq:ucb}
\Pr\Big[ \forall i\geq 1, \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u_i), U_i(x,u_i)\big) \Big] \geq 1-e^{-u}\,,
\end{equation}
where $u_i=u+n_{h(i)}+\log\big(i^a\zeta(a)\big)$ for any $a>1$ and $\zeta$ is the Riemann zeta function.
Our \emph{optimistic} decision rule for the next query is thus:
\begin{equation}
\label{eq:argmax}
x_i \in \argmax_{x\in\cT_{h(i)}} U_i(x,u_i)\,.
\end{equation}
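For concreteness, one round of the resulting strategy can be sketched as follows. This is only a schematic Python transcription of Eq.~\ref{eq:argmax}: the list \texttt{tree[h]} of nodes of $\cT_h$, the bounds \texttt{n[h]} on $\log\abs{\cT_h}$, the depth schedule \texttt{h\_of}, the routine \texttt{upper\_bound} returning $U_i(x,u_i)$ and the noisy oracle \texttt{query} are all assumed to be provided by the surrounding algorithm (they are instantiated for Gaussian processes in the next subsection).
\begin{verbatim}
import math

def one_round(i, tree, n, h_of, u, a, upper_bound, query):
    """One step of the optimistic rule: query the node of T_{h(i)}
    maximizing the upper confidence bound U_i(., u_i)."""
    h = h_of(i)
    zeta_a = sum(k ** (-a) for k in range(1, 100000))   # truncated zeta(a), a > 1
    u_i = u + n[h] + math.log(i ** a * zeta_a)
    x_i = max(tree[h], key=lambda x: upper_bound(x, u_i))
    return x_i, query(x_i)
\end{verbatim}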
Combining this with Corollary~\ref{cor:chaining}, we are able to prove the following bound
linking the regret with $\omega_{h(i)}$ and the width of the confidence interval.
\begin{theorem}[Generic Regret Bound]
\label{thm:regret_bound}
When for all $i\geq 1$, $x_i \in \argmax_{x\in \cT_{h(i)}} U_i(x,u_i)$
we have with probability at least $1- 2 e^{-u}$:
\[ R_t = t \sup_{x\in\cX} f(x)-\sum_{i=1}^t f(x_i) \leq \sum_{i=1}^t\Big\{ \omega_{h(i)} + U_i(x_i,u_i)-L_i(x_i,u_i)\Big\}\,.\]
\end{theorem}
\begin{proof}
Using Theorem~\ref{thm:chaining} we have that,
\[\forall h\geq 0,\,\sup_{x\in\cX}f(x) \leq \omega_h+\sup_{x\in\cX}f(p_h(x))\,,\]
holds with probability at least $1-e^{-u}$.
Since $p_{h(i)}(x) \in \cT_{h(i)}$ for all $x\in\cX$,
we can invoke Eq.~\ref{eq:ucb}\,:
\[\forall i\geq 1,~ \sup_{x\in\cX} f(x)-f(x_i) \leq \omega_{h(i)}+\sup_{x\in\cT_{h(i)}}U_i(x,u_i)-L_i(x_i,u_i)\,,\]
holds with probability at least $1-2e^{-u}$.
Now by our choice for $x_i$, $\sup_{x\in\cT_{h(i)}}U_i(x,u_i) = U_i(x_i,u_i)$,
proving Theorem~\ref{thm:regret_bound}.
\end{proof}
In order to select the level of discretization $h(i)$ to reduce the bound on the regret,
it is required to have explicit bounds on $\omega_h$ and on the width of the confidence intervals.
For example by choosing
\[h(i)=\min\Big\{h\in\bN: \omega_h \leq \sqrt{\frac{\log i}{i}} \Big\}\,,\]
we obtain $\sum_{i=1}^t \omega_{h(i)} \leq 2\sqrt{t\log t}$ as shown later.
The performance of our algorithm is thus linked to the decay rate of $\omega_h$,
which characterizes the ``size'' of the optimization problem.
We first study the case where $f$ is distributed as a Gaussian process,
and then for a sum of squared Gaussian processes.
\subsection{Results for Gaussian Processes}
\label{sec:gp}
The problem of regret minimization where $f$ is sampled from a Gaussian process
has been introduced by \cite{Srinivas2010} and \cite{grunewalder2010}.
Since then, it has been extensively adapted to various settings of Bayesian optimization
with successful practical applications.
In the first work the authors address the cumulative regret
and assume that either $\cX$ is finite or that the samples of the process are Lipschitz
with high probability, where the distribution of the Lipschitz constant has Gaussian tails.
In the second work the authors address the simple regret without noise and with known horizon;
they assume that the canonical pseudo-metric $d$ is bounded by a given power of the supremum norm.
In both works they require that the input space is a subset of $\bR^D$.
The analysis in our paper permits us to derive similar bounds in a nonparametric fashion
where $(\cX,d)$ is an arbitrary metric space.
Note that if $(\cX,d)$ is not totally bounded, then the supremum of the process is infinite with probability one,
and so is the regret of any algorithm.
\paragraph{Confidence intervals and information gain.}
First, $f$ being distributed as a Gaussian process,
it is easy to derive confidence intervals given a set of observations.
Writing $\mat{Y}_i$ for the vector of noisy values at the points in $X_i$,
we find by Bayesian inference \citep{Rasmussen2006} that:
\[\Pr\Big[ \abs{f(x)-\mu_i(x)} \geq \sigma_i(x)\sqrt{2u}\Big] < e^{-u}\,,\]
for all $x\in\cX$ and $u>0$, where:
\begin{align}
\label{eq:mu}
\mu_i(x) &= \mat{k}_i(x)^\top \mat{C}_i^{-1}\mat{Y}_i\\
\label{eq:sigma}
\sigma_i^2(x) &= k(x,x) - \mat{k}_i(x)^\top \mat{C}_i^{-1} \mat{k}_i(x)\,,
\end{align}
where $\mat{k}_i(x) = [k(x_j, x)]_{x_j \in X_i}$ is the covariance vector between $x$ and $X_i$,
$\mat{C}_i = \mat{K}_i + \eta^2 \mat{I}$
with $\mat{K}_i=[k(x,x')]_{x,x' \in X_i}$ the covariance matrix,
and $\eta^2$ the variance of the Gaussian noise.
Therefore the width of the confidence interval in Theorem~\ref{thm:regret_bound}
can be bounded in terms of $\sigma_{i-1}$:
\[U_i(x_i,u_i)-L_i(x_i,u_i) \leq 2\sigma_{i-1}(x_i)\sqrt{2u_i}\,.\]
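As an illustration, Eqs.~\ref{eq:mu} and \ref{eq:sigma} and the resulting width of the confidence interval translate directly into the following sketch; it is a plain implementation for an arbitrary kernel \texttt{k} (a numerically careful implementation would rather use a Cholesky factorization of $\mat{C}_i$ than an explicit inverse).
\begin{verbatim}
import numpy as np

def gp_posterior(k, X, Y, eta, Xstar):
    """Posterior mean and standard deviation of a centered GP with kernel k
    after observing the noisy values Y at the points X."""
    Y = np.asarray(Y, dtype=float)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    C_inv = np.linalg.inv(K + eta ** 2 * np.eye(len(X)))
    mu, sigma = [], []
    for x in Xstar:
        kx = np.array([k(xj, x) for xj in X])        # covariance vector k_i(x)
        mu.append(kx @ C_inv @ Y)
        sigma.append(np.sqrt(max(k(x, x) - kx @ C_inv @ kx, 0.0)))
    return np.array(mu), np.array(sigma)

def confidence_width(sigma, u_i):
    """Width U_i - L_i <= 2 sigma_{i-1}(x) sqrt(2 u_i) of the Gaussian interval."""
    return 2.0 * sigma * np.sqrt(2.0 * u_i)
\end{verbatim}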
Furthermore it is proved in \cite{Srinivas2012} that the sum of the posterior variances
at the queried points $\sigma_{i-1}^2(x_i)$ is bounded in terms of information gain:
\[\sum_{i=1}^t \sigma_{i-1}^2(x_i) \leq c_\eta \gamma_t\,,\]
where $c_\eta=\frac{2}{\log(1+\eta^{-2})}$
and $\gamma_t = \max_{X_t\subseteq\cX:\abs{X_t}=t} I(X_t)$
is the maximum information gain of $f$ obtainable by a set of $t$ points.
Note that for Gaussian processes,
the information gain is simply $I(X_t)=\frac 1 2 \log\det(\mat{I}+\eta^{-2}\mat{K}_t)$.
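The information gain of a given set of points can be evaluated in the same spirit, for instance as in the short sketch below (again with the kernel \texttt{k} and the noise level \texttt{eta} as above).
\begin{verbatim}
import numpy as np

def information_gain(k, X, eta):
    """I(X_t) = 1/2 log det(I + eta^{-2} K_t) for a Gaussian process."""
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    _, logdet = np.linalg.slogdet(np.eye(len(X)) + K / eta ** 2)
    return 0.5 * logdet
\end{verbatim}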
Finally, using the Cauchy-Schwarz inequality and the fact that $u_t$ is increasing we have
with probability at least $1- 2 e^{-u}$:
\begin{equation}
\label{eq:gp_regret}
R_t \leq 2\sqrt{2 c_\eta t u_t \gamma_t} + \sum_{i=1}^t \omega_{h(i)}\,.
\end{equation}
The quantity $\gamma_t$ heavily depends on the covariance of the process.
On one extreme, if $k(\cdot,\cdot)$ is a Kronecker delta,
$f$ is a Gaussian white noise process and $\gamma_t=\cO(t)$.
On the other hand \cite{Srinivas2012} proved the following inequalities for widely used covariance functions
and $\cX\subset \bR^D$:
\begin{itemize}
\item linear covariance $k(x,y)=x^\top y$, $\gamma_t=\cO\big(D \log t\big)$.
\item squared exponential covariance $k(x,y)=e^{-\frac 1 2 \norm{x-y}_2^2}$, $\gamma_t=\cO\big((\log t)^{D+1}\big)$.
\item Mat\'ern covariance, $k(x,y)=\frac{2^{p-1}}{\Gamma(p)}\big(\sqrt{2p}\norm{x-y}_2\big)^p K_p\big(\sqrt{2p}\norm{x-y}_2\big)$,
where $p>0$ and $K_p$ is the modified Bessel function,
$\gamma_t=\cO\big( (\log t) t^a\big)$, with $a=\frac{D(D+1)}{2p+D(D+1)}<1$ for $p>1$.
\end{itemize}
\paragraph{Bounding $\omega_h$ with the metric entropy.}
We now provide a policy to choose $h(i)$ minimizing the right-hand side of Eq.~\ref{eq:gp_regret}.
When an explicit upper bound on the metric entropy of the form
$\log N(\cX,d,\epsilon)\leq \cO(-D \log \epsilon)$ holds,
we can use Corollary~\ref{cor:subgamma_bigoh} which gives:
\[\omega_h\leq\cO\big(\sqrt{u+D h}2^{-h}\big)\,.\]
This upper bound holds true in particular for Gaussian processes with $\cX\subset[0,R]^D$
and for all $x,y\in\cX$, $d(x,y) \leq \cO\big(\norm{x-y}_2\big)$.
For stationary covariance this becomes $k(x,x)-k(x,y)\leq \cO\big(\norm{x-y}_2\big)$
which is satisfied for the usual covariances used in Bayesian optimization such as
the squared exponential covariance
or the Mat\'ern covariance with parameter $p\in\big\{\frac 1 2, \frac 3 2, \frac 5 2\big\}$.
For these values of $p$ it is well known that $k(x,y)=h_p\big(\sqrt{2p}\norm{x-y}_2\big) \exp\big(-\sqrt{2p}\norm{x-y}_2\big)$,
with $h_{\frac 1 2}(\delta)=1$, $h_{\frac 3 2}(\delta)=1+\delta$ and $h_{\frac 5 2}(\delta)=1+\delta+\frac 1 3 \delta^2$.
Then we see that it suffices to choose $h(i)=\ceil{\frac 1 2 \log_2 i}$
to obtain $\omega_{h(i)} \leq \cO\Big( \sqrt{\frac{u+\frac 1 2 D\log i}{i}} \Big)$
and since $\sum_{i=1}^t i^{-\frac 1 2}\leq 2 \sqrt{t}$ and
$\sum_{i=1}^t \big(\frac{\log i}{i}\big)^{\frac 1 2} \leq 2\sqrt{t\log t}$,
\[R_t \leq \cO\Big(\sqrt{t \gamma_t \log t }\Big)\,, \]
holds with high probability.
Such a bound holds true in particular for the Ornstein-Uhlenbeck process,
for which such a result was conjectured to be impossible in \cite{Srinivas2010} and \cite{Srinivas2012}.
However, we do not know of suitable bounds for $\gamma_t$ in this case
and therefore cannot deduce convergence rates.
\paragraph{Gaussian processes indexed on ellipsoids and RKHS.}
As mentioned in Section~\ref{sec:psi_process}, the previous bound on the discretization error
is not tight for every Gaussian process.
An important example is when the search space is a (possibly infinite dimensional) ellipsoid:
\[\cX=\Big\{ x\in \ell^2: \sum_{i\geq 1}\frac{x_i^2}{a_i^2} \leq 1\Big\}\,,\]
where $a\in\ell^2$
and $f(x) = \sum_{i\geq 1}x_ig_i$ with $g_i\iid \cN(0,1)$;
the pseudo-metric $d(x,y)$ then coincides with the usual $\ell_2$ metric.
The study of the supremum of such processes is connected to learning error bounds
for kernel machines like Support Vector Machines,
as a quantity bounding the learning capacity of a class of functions in a RKHS,
see for example \cite{Mendelson2002}.
It can be shown by geometrical arguments that
$\E \sup_{x: d(x,s)\leq \epsilon} f(x)-f(s) \leq \cO\big(\sqrt{\sum_{i\geq 1}\min(a_i^2,\epsilon^2)}\big)\,,$
and that this supremum exhibits $\chi^2$-tails around its expectation,
see for example \cite{Boucheron2013} and \cite{Talagrand2014}.
This concentration is not captured by Corollary~\ref{cor:subgamma_bigoh};
one needs to leverage the construction of Section~\ref{sec:lower_bound}
to obtain a tight estimate.
Therefore the present work forms a step toward efficient and practical online model selection
in such classes in the spirit of \cite{Rakhlin2014} and \cite{Gaillard2015}.
\subsection{Results for Quadratic Forms of Gaussian Processes}
\label{sec:gp2}
The preeminent model in Bayesian optimization is by far the Gaussian process.
Yet it is a very common task to attempt to minimize the regret on functions which
do not look like Gaussian processes.
Consider the typical cases where $f$ has the form of a mean square error
or a Gaussian likelihood.
In both cases, minimizing $f$ is equivalent to minimizing a sum of squares,
which we cannot assume to be sampled from a Gaussian process.
To alleviate this problem, we show that this objective fits in our generic setting.
Indeed, if we consider that $f$ is a sum of squares of Gaussian processes,
then $f$ is sub-Gamma with respect to a natural pseudo-metric.
Since our framework is phrased in terms of maximization, we will consider the negative of this sum.
In this particular setting we allow the algorithm
to observe directly the noisy values of the \emph{separated} Gaussian processes,
instead of the sum of their squares.
To simplify the forthcoming arguments, we will choose independent and identically distributed processes,
but one can remove the covariances between the processes by Cholesky decomposition of the covariance matrix,
and then our analysis adapts easily to processes with non-identical distributions.
\paragraph{The stochastic smoothness of squared GP.}
Let $f(x)=-\sum_{j=1}^N g_j^2(x)$,
where $\big(g_j\big)_{1\le j\le N}$ are independent centered Gaussian processes $g_j\iid\cGP(0,k)$
with stationary covariance $k$ such that $k(x,x)=\kappa$ for every $x\in\cX$.
We have for $x,y\in\cX$ and $\lambda<(2\kappa)^{-1}$:
\[\log\E e^{\lambda(f(x)-f(y))} = -\frac{N}{2}\log\Big(1-4\lambda^2(\kappa^2-k^2(x,y))\Big)\,. \]
Therefore with $d(x,y)=2\sqrt{\kappa^2-k^2(x,y)}$ and $\psi(\lambda,\delta)=-\frac{N}{2}\log\big(1-\lambda^2\delta^2\big)$,
we conclude that $f$ is a $(d,\psi)$-process.
Since $-\log(1-x^2) \leq \frac{x^2}{1-x}$ for $0\leq x <1$,
which can be proved by series comparison,
we obtain that $f$ is sub-Gamma with parameters $\nu=N$ and $c=1$.
Now with Eq.~\ref{eq:sub_gamma_tail},
\[\ell_u(x,y)\leq (u+\sqrt{2 u N})d(x,y)\,.\]
Furthermore, we also have that $d(x,y)\leq \cO(\norm{x-y}_2)$ for $\cX\subseteq \bR^D$
and standard covariance functions including
the squared exponential covariance or the Mat\'ern covariance with parameter $p=\frac 3 2$ or $p=\frac 5 2$.
Then Corollary~\ref{cor:subgamma_bigoh} leads to:
\begin{equation}
\label{eq:omega_gp2}
\forall i\geq 0,~ \omega_i \leq \cO\Big( \big(u+D i + \sqrt{N(u+D i)}\big)2^{-i}\Big)\,.
\end{equation}
\paragraph{Confidence intervals for squared GP.}
As mentioned above, we consider here that we are given separated noisy observations $\mat{Y}_i^j$
for each of the $N$ processes.
Deriving confidence intervals for $f$ given $\big(\mat{Y}_i^j\big)_{j\leq N}$
is a tedious task since the posterior processes $g_j$ given $\mat{Y}_i^j$
are neither standard nor centered.
We propose here a solution based directly on a careful analysis of Gaussian integrals.
The proof of the following technical lemma can be found in Appendix~\ref{sec:gp2_tail}.
\begin{lemma}[Tails of squared Gaussian]
\label{lem:gp2_tail}
Let $X\sim\cN(\mu,\sigma^2)$ and $s>0$. We have:
\[\Pr\Big[ X^2 \not\in \big(l^2, u^2\big)\Big] < e^{-s^2}\,,\]
for $u=\abs{\mu}+\sqrt{2} \sigma s$
and $l=\max\big(0,\abs{\mu}-\sqrt{2}\sigma s\big)$.
\end{lemma}
Using this lemma, we compute the confidence interval for $f(x)$
by a union bound over the $N$ processes.
Denoting $\mu_i^j$ and $\sigma_i^j$ the posterior expectation and deviation
of $g_j$ given $\mat{Y}_i^j$ (computed as in Eq.~\ref{eq:mu} and \ref{eq:sigma}),
the confidence interval follows for all $x\in\cX$:
\begin{equation}
\label{eq:gp2_ci}
\Pr\Big[ \forall j\leq N,~ g_j^2(x) \in \big( L_i^j(x,u), U_i^j(x,u) \big)\Big] \geq 1- e^{-u}\,,
\end{equation}
where
\begin{align*}
U_i^j(x,u) &= \Big(\abs{\mu_i^j(x)}+\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\\
\text{ and } L_i^j(x,u) &= \max\Big(0, \abs{\mu_i^j(x)}-\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\,.
\end{align*}
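Concretely, given the posterior means and deviations of the $N$ processes at a point $x$, the bounds above and the resulting interval for $f(x)=-\sum_j g_j^2(x)$ can be assembled as in the following sketch (arrays \texttt{mu} and \texttt{sigma} of length $N$ are assumed to come from Eqs.~\ref{eq:mu} and \ref{eq:sigma} applied to each process).
\begin{verbatim}
import numpy as np

def squared_gp_bounds(mu, sigma, u, N):
    """Per-process confidence bounds for g_j^2(x): the lemma on squared
    Gaussians combined with a union bound over the N processes."""
    s = np.sqrt(2.0 * (u + np.log(N)))
    upper = (np.abs(mu) + s * sigma) ** 2
    lower = np.maximum(0.0, np.abs(mu) - s * sigma) ** 2
    return lower, upper

def f_interval(mu, sigma, u):
    """Interval for f = -sum_j g_j^2 obtained by summing the per-process bounds."""
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    lower, upper = squared_gp_bounds(mu, sigma, u, len(mu))
    return -upper.sum(), -lower.sum()
\end{verbatim}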
We are now ready to use Theorem~\ref{thm:regret_bound} to control $R_t$
by a union bound for all $i\in\bN$ and $x\in\cT_{h(i)}$.
Note that under the event of Theorem~\ref{thm:regret_bound},
we have the following:
\[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ g_j^2(x) \in \big(L_i^j(x,u_i), U_i^j(x,u_i)\big)\,.\]
Then we also have:
\[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ \abs{\mu_i^j(x)} \leq \abs{g_j(x)}+\sqrt{2(u_i+\log N)}\sigma_{i-1}^j(x)\,.\]
Since $\mu_0^j(x)=0$, $\sigma_0^j(x)=\kappa$ and $u_0\leq u_i$
we obtain $\abs{\mu_i^j(x)} \leq \sqrt{2(u_i+\log N)}\big(\sigma_{i-1}^j(x)+\kappa\big)$.
Therefore Theorem~\ref{thm:regret_bound} says with probability at least $1-2e^{-u}$:
\[R_t \leq \sum_{i=1}^t\Big\{\omega_{h(i)} + 8\sum_{j\leq N}(u_i+\log N)\big(\sigma_{i-1}^j(x_i)+\kappa\big)\sigma_{i-1}^j(x_i) \Big\}\,.\]
It is now possible to proceed as in Section~\ref{sec:gp} and bound the sum of posterior variances with $\gamma_t$\,:
\[R_t \leq \cO\Big( N u_t \big(\sqrt{t \gamma_t} + \gamma_t\big) + \sum_{i=1}^t \omega_{h(i)} \Big)\,.\]
As before, under the conditions of Eq.~\ref{eq:omega_gp2} and
choosing the discretization level $h(i)=\ceil{\frac 1 2 \log_2 i}$
we obtain $\omega_{h(i)}=\cO\Big(i^{-\frac 1 2} \big(u+\frac 1 2 D\log i\big)\sqrt{N}\Big)$,
and since $\sum_{i=1}^t i^{-\frac 1 2} \log i\leq 2 \sqrt{t}\log t$,
\[R_t \leq \cO\Big(N \big(\sqrt{t\gamma_t \log t}+\gamma_t\big) + \sqrt{Nt}\log t\Big)\,,\]
holds with high probability.
\section{Tightness Results for Gaussian Processes}
\label{sec:lower_bound}
We present in this section a strong result on the tree $\cT$ obtained by Algorithm~\ref{alg:tree_lb}.
Let $f$ be a centered Gaussian process $\cGP(0,k)$ with arbitrary covariance $k$.
We show that a converse of Theorem~\ref{thm:chaining} is true with high probability.
\subsection{A High Probabilistic Lower Bound on the Supremum}
We first recall that for Gaussian processes we have $\psi^{*-1}(u_i,\delta)=\cO\big(\delta \sqrt{u+n_i}\big)$,
that is:
\[\forall h\geq 0, \forall s\in\cT_h,~\sup_{x\succ s}f(x)-f(s) \leq \cO\Big(\sup_{x\succ s}\sum_{i>h}\Delta_i(x) \sqrt{u+n_i}\Big)\,,\]
with probability at least $1-e^{-u}$.
For the following, we will fix for $n_i$ a geometric sequence $n_i=2^i$ for all $i\geq 1$.
Therefore we have the following upper bound:
\begin{corollary}
Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}.
Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$,
\[\sup_{x\succ s} f(x)-f(s) \leq c_u \sup_{x\succ s} \sum_{i>h} \Delta_i(x)2^{\frac i 2}\,,\]
holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$.
\end{corollary}
To show the tightness of this result,
we prove the following probabilistic bound:
\begin{theorem}[Generic Chaining Lower Bound]
\label{thm:lower_bound}
Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}.
Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$,
\[\sup_{x\succ s} f(x)-f(s) \geq c_u \sup_{x\succ s}\sum_{i=h}^\infty \Delta_i(x)2^{\frac i 2}\,,\]
holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$.
\end{theorem}
This lower bound has important theoretical and practical consequences.
It first says that we cannot discretize $\cX$ in a finer way than Algorithm~\ref{alg:tree_lb} does,
up to a constant factor.
This also means that even if the search space $\cX$ is ``smaller''
than what the metric entropy suggests,
as for ellipsoids,
Algorithm~\ref{alg:tree_lb} still finds the correct ``size''.
To our knowledge, this result is the first construction of a tree $\cT$
leading to a lower bound at every depth with high probability.
The proof of this theorem shares some similarity with the construction
to obtain lower bounds in expectation,
see for example \cite{Talagrand2014} or \cite{Ding2011} for a tractable algorithm.
\subsection{Analysis of Algorithm~\ref{alg:tree_lb}}
Algorithm~\ref{alg:tree_lb} proceeds as follows.
It first computes $(\cT_h)_{h\geq 0}$ a succession of $\epsilon_h$-nets as in Section~\ref{sec:psi_process}
with $\epsilon_h=\Delta 2^{-h}$ where $\Delta$ is the diameter of $\cX$.
The parent of a node is set to the closest node in the upper level,
\[\forall t\in\cT_h,~ p(t) = \argmin_{s\in\cT_{h-1}} d(t,s)\,.\]
Therefore we have $d(t,p(t))\leq \epsilon_{h-1}$ for all $t\in\cT_h$.
Moreover, by looking at how the $\epsilon_h$-net is computed we also have
$d(t_i,t_j) \geq \epsilon_h$ for all $t_i,t_j\in\cT_h$.
These two properties are crucial for the proof of the lower bound.
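A minimal Python sketch of this first phase (successive $\epsilon_h$-nets and closest-parent assignment) is given below; the pruning phase is deliberately omitted, the $\epsilon_h$-nets are produced by a simple greedy scan rather than an exact minimal net, and the pseudo-metric \texttt{dist} is assumed to be supplied by the user.
\begin{verbatim}
def eps_net(points, dist, eps):
    """Greedy scan: centers pairwise more than eps apart and covering `points`."""
    centers = []
    for p in points:
        if all(dist(p, c) > eps for c in centers):
            centers.append(p)
    return centers

def chaining_tree(points, dist, depth):
    """Successive eps_h-nets with eps_h = Delta 2^{-h}; each node of T_h is
    attached to its closest node in T_{h-1} (no pruning step)."""
    delta = max(dist(p, q) for p in points for q in points)   # diameter of X
    levels, parent = [], {}
    for h in range(depth + 1):
        levels.append(eps_net(points, dist, delta * 2.0 ** (-h)))
        if h > 0:
            # parent[h][i] = index in levels[h-1] of the parent of levels[h][i]
            parent[h] = [min(range(len(levels[h - 1])),
                             key=lambda j: dist(t, levels[h - 1][j]))
                         for t in levels[h]]
    return levels, parent
\end{verbatim}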
Then, the algorithm updates the tree to make it well balanced,
that is such that no node $t\in\cT_h$ has more than $e^{n_{h+1}-n_h}=e^{2^h}$ children.
We note that this condition is already satisfied in every reasonable space,
so that the more involved procedure that follows is only required in extreme cases.
To force this condition, Algorithm~\ref{alg:tree_lb} starts from the leaves
and ``prunes'' the branches if they outnumber $e^{2^h}$.
We remark that this backward step is not present in the literature on generic chaining,
and is needed for our objective of a lower bound with high probability.
By doing so, it creates a node called a \emph{pruned node} which will take as children
the pruned branches.
For this construction to be tight, the pruning step has to be careful.
Algorithm~\ref{alg:tree_lb} attaches to every pruned node a value,
computed using the values of its children,
hence the backward strategy.
When pruning branches, the algorithm keeps the $e^{2^h}$ nodes with maximum values and displaces the others.
The intuition behind this strategy is to avoid pruning branches that already contain pruned nodes.
Finally, note that this pruning step may create unbalanced pruned nodes
when the number of nodes at depth $h$ is much larger than $e^{2^h}$.
When this is the case, Algorithm~\ref{alg:tree_lb}
restarts the pruning with the updated tree to recompute the values.
Thanks to the doubly exponential growth in the balance condition,
this cannot occur more than $\log \log \abs{\cX}$ times
and the total complexity is $\cO\big(\abs{\cX}^2\big)$.
\subsection{Computing the Pruning Values and Anti-Concentration Inequalities}
We end this section by describing the values used for the pruning step.
We need a function $\varphi(\cdot,\cdot,\cdot,\cdot)$
satisfying the following anti-concentration inequality.
For all $m\in\bN$, let $s\in\cX$ and $t_1,\dots,t_m\in\cX$ such that
$\forall i\leq m,~p(t_i)=s$ and $d(s,t_i)\leq \Delta$,
and finally $d(t_i,t_j)\geq \alpha$ for all $i\neq j$.
Then $\varphi$ is such that:
\begin{equation}
\label{eq:varphi}
\Pr\Big[\max_{i\leq m}f(t_i)-f(s) \geq \varphi(\alpha,\Delta,m,u) \Big]>1-e^{-u}\,.
\end{equation}
A function $\varphi$ satisfying this hypothesis is described in Lemma~\ref{lem:max_one_lvl}
in Appendix~\ref{sec:proof_lower_bound}.
Then the value $V_h(s)$ of a node $s\in\cT_h$ is computed with $\Delta_i(s) = \sup_{x\succ s} d(x,s)$ as:
\[V_h(s) = \sup_{x\succ s} \sum_{i>h} \varphi\Big(\frac 1 2 \Delta_h(x),\Delta_h(x),m,u\Big) \one_{p_i(x)\text{ is a pruned node}}\,.\]
The two steps proving Theorem~\ref{thm:lower_bound} are:
first, show that $\sup_{x\succ s}f(x)-f(s) \geq c_u V_h(s)$ for $c_u>0$ with probability at least $1-e^{-u}$,
second, show that $V_h(s) \geq c_u'\sup_{x\succ s}\sum_{i>h}\Delta_i(x)2^{\frac i 2}$
for $c_u'>0$.
The full proof of this theorem can be found in Appendix~\ref{sec:proof_lower_bound}.
\paragraph{Acknowledgements.}
We thank C\'edric Malherbe and Kevin Scaman for fruitful discussions.
\section{Introduction}
Primordial nucleosynthesis is one of the cornerstones
of the hot big-bang cosmology. The agreement between
the predictions for the abundances of D, $^3$He, $^4$He
and $^7$Li and their inferred primordial abundances provides
the big-bang cosmology's earliest, and perhaps most stringent, test.
Further, big-bang nucleosynthesis has been used to provide
the best determination of the baryon density \cite{ytsso,walker}
and to provide crucial tests of particle-physics theories, e.g.,
the stringent bound to the number of light
neutrino species \cite{nulimit,mathews}.
Over the years various aspects of
the effect of a decaying tau neutrino on primordial nucleosynthesis
have been considered \cite{ks,st1,st2,st,kaw,ketal,dol,osu}.
Each previous study focused on a specific
decay mode and incorporated different microphysics. To be sure,
no one study was complete or exhaustive. Our purpose here is to consider
all the effects of a decaying tau neutrino on nucleosynthesis
in a comprehensive and coherent manner. In particular,
for the first time interactions of decay-produced electron
neutrinos and antineutrinos, which can be important for
lifetimes shorter than $100\,{\rm sec}$ or so, are taken into account.
The nucleosynthesis limits to the mass of an unstable tau
neutrino are currently of great interest as the best laboratory
upper mass limits \cite{labmass}, $31\,{\rm MeV}$ by
the ARGUS Collaboration and $32.6\,{\rm MeV}$ by the CLEO
Collaboration,\footnote{Both are 95\% C.L.
mass limits based upon end-point analyses of tau decays to
final states containing five pions. The CLEO data set contains
113 such decays and the ARGUS data set contains 20 such decays \cite{labmass}.}
are tantalizingly close to the mass range excluded by nucleosynthesis,
approximately $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$ for lifetimes
greater than about $300\,{\rm sec}$. If the upper range
of the cosmologically excluded band can be
convincingly shown to be greater than the upper
bound to the mass from laboratory experiments, the two bounds
together imply that a long-lived tau-neutrino
must be less massive than about $0.4\,{\rm MeV}$. This was the major
motivation for our study.
The effects of a massive, decaying tau neutrino on primordial
nucleosynthesis fall into
three broad categories: (i) the energy density of the tau neutrino
and its daughter product(s) increase the expansion rate, tending
to increase $^4$He, D, and $^3$He production; (ii) the
electromagnetic (EM) plasma is heated by the
daughter product(s) that interact electromagnetically
(photons and $e^\pm$ pairs), diluting the baryon-to-photon
ratio and decreasing $^4$He production and increasing
D and $^3$He production; and (iii) electron neutrino
(and antineutrino) daughters increase
the weak interaction rates that govern the neutron-to-proton
ratio, leading to decreased $^4$He production for short lifetimes
($\mathrel{\mathpalette\fun <} 30\,{\rm sec}$) and masses less than about $10\,{\rm MeV}$
and increased $^4$He production for long lifetimes.
Decays that take place long after nucleosynthesis ($\tau_\nu
\sim 10^5\,{\rm sec} -10^6\,{\rm sec}$) can lead to the destruction of
the light elements through fission
reactions and to additional constraints \cite{fission}, neither
of which is considered here.
In terms of the effects on primordial nucleosynthesis
there are, broadly speaking, four generic decay modes:
\begin{enumerate}
\item Tau neutrino decays to daughter products that
are all sterile, e.g., $\nu_\tau \rightarrow \nu_\mu
+\phi$ ($\phi$ is a very weakly interacting boson).
Here, only effect (i) comes into play. Aspects of this case
were treated in Refs.~\cite{ks,st2,ketal,dol,osu}; the very recent
work in Ref.~\cite{osu} is the most complete study of this mode.
\item Tau neutrino decays to a sterile daughter product(s)
plus a daughter product(s) that interacts electromagnetically,
e.g., $\nu_\tau \rightarrow \nu_\mu + \gamma$. Here,
effects (i) and (ii) come into play. This case was
treated in Ref.~\cite{st1}, though not specifically
for a decaying tau neutrino.
\item Tau neutrino decays into an electron neutrino and sterile
daughter product(s), e.g., $\nu_\tau \rightarrow \nu_e
+\phi$. Here, effects (i) and (iii) come into play. This case
was treated in Ref.~\cite{st}; however, the interactions
of electron neutrinos and antineutrinos with the ambient
thermal plasma were not taken into account. They can be important:
The interaction rate of a high-energy electron neutrino produced
by the decay of a massive tau neutrino relative to the
expansion rate is $\Gamma /H \sim (m_\nu /\,{\rm MeV})(\,{\rm sec} /t)$.
\item Tau neutrino decays into an electron neutrino
and daughter product(s) that interact electromagnetically,
e.g., $\nu_\tau \rightarrow \nu_e +e^\pm$. Here,
all three effects come into play. Aspects of this case were
treated in Ref.~\cite{kaw}, though interactions of
electron neutrinos and antineutrinos with the ambient thermal
plasma were neglected and the $\nu_e$-spectrum
was taken to be a delta function.
\end{enumerate}
{\it As we shall emphasize more than once, the effect of a tau neutrino of a
given mass and lifetime---and therefore limits to its
mass/lifetime---depends very much upon decay mode.}
\medskip
While these four generic decay modes serve to bracket
the possibilities, the situation is actually somewhat more complicated.
Muon neutrinos are not completely
sterile, as they are strongly coupled
to the electromagnetic plasma down to temperatures of order
a few MeV (times of order a fraction of a second), and thus
can transfer energy to the electromagnetic plasma. However,
for lifetimes longer than a few seconds, their interactions with the
electromagnetic plasma are not very significant (see Ref.~\cite{sd}),
and so to a reasonable approximation muon-neutrino
daughter products can be considered sterile.
Precisely how much electromagnetic entropy is produced
and the effect of high-energy neutrinos on the proton-neutron
interconversion rates depend upon the energy distribution
of the daughter products and their interactions with the
ambient plasma (photons, $e^\pm$ pairs, and neutrinos), which
in turn depends upon the number of daughter products and
the decay matrix element.
Without going to extremes,
one can easily identify more than ten possible decay modes.
However, we believe the four generic decay modes
serve well to illustrate how the nucleosynthesis
mass-lifetime limits depend upon the decay mode and provide
reasonable estimates thereof. In that regard,
input assumptions, e.g., the acceptable range for the
primordial abundances and the relic neutrino
abundance\footnote{The variations between different calculations
of the tau-neutrino abundance are of the order of 10\% to
20\%; they arise from different treatments of thermal averaging,
particle statistics, and so on. Since we use
the asymptotic value of the tau-neutrino abundance
our abundances are in general smaller, making our limits
more conservative.} probably lead to comparable, if not greater,
uncertainties in the precise limits.
Finally, a brief summary of our treatment of the microphysics:
(1) The relic abundance of the tau neutrino is determined by standard
electroweak annihilations and is assumed to be frozen out
at its asymptotic value during the epoch
of nucleosynthesis, thereafter decreasing due to decays only.
Because we assume that the relic abundance of the tau
neutrino has frozen out we cannot
accurately treat the case of short lifetimes, $\tau_\nu
\mathrel{\mathpalette\fun <} (m_\nu /\,{\rm MeV} )^{-2}\,{\rm sec}$, where inverse decays
can significantly affect the tau-neutrino abundance and that of its daughter
products \cite{inversedecay}.\footnote{For generic decay mode (1) the effect
of inverse decays for short lifetimes was considered in Ref.~\cite{osu};
it leads to additional mass constraints for short lifetimes.}
(2) Sterile daughter products, other than neutrinos, are assumed to
have a negligible pre-decay abundance (if this is not true,
the nucleosynthesis limits become even more stringent).
(3) The electromagnetic energy produced by tau-neutrino
decays is assumed to be quickly thermalized and to increase the entropy
in the electromagnetic plasma according to the first law of
thermodynamics. (4) The perturbations to the
phase-space distributions of electron and muon neutrinos
due to tau-neutrino decays and partial coupling to the electromagnetic
plasma are computed. (5) The change in the weak rates that interconvert
neutrons and protons due to the distorted electron-neutrino
distribution are calculated. (6) The total energy of the Universe includes
that of photons, $e^\pm$ pairs, neutrinos, and sterile daughter product(s).
The paper is organized as follows: in the next Section we discuss
the modifications that we have made to the standard nucleosynthesis
code. In Section 3 we present our results, discussing how
a decaying tau neutrino affects the yields of nucleosynthesis and
deriving the mass/lifetime limits for the four generic decay
modes. In Section 4 we discuss other astrophysical and laboratory
limits to the mass/lifetime of the tau neutrino, and finish
in Section 5 with a brief summary and concluding remarks.
\section{Modifications to the Standard Code}
In the standard treatment of nucleosynthesis \cite{kawano} it is
assumed that there are three massless neutrino species that
completely decouple from the electromagnetic plasma at a
temperature well above that of the electron mass ($T \sim
10\,{\rm MeV}\gg m_e$). Thus the neutrino species do not interact with
the electromagnetic plasma and do not share in the ``heating'' of
the photons when the $e^\pm$ pairs disappear.
In order to treat the most general case of a decaying tau
neutrino we have made a number of modifications to the standard
code. These modifications are of four kinds: (1) Change the
total energy density to account for the massive tau neutrino and
its daughter products; (2) Change the first law of
thermodynamics for the electromagnetic plasma to account for the
injection of energy by tau decays and interactions with the
other two neutrino seas; (3) Follow the Boltzmann equations for the
phase-space distributions for electron and muon neutrinos,
accounting for their interactions with one another and the
electromagnetic plasma; (4) Modify the weak interaction rates
that interconvert neutrons and protons to take account of the
perturbations to the electron-neutrino spectrum.
These modifications required tracking five quantities as a
function of $T \equiv R^{-1}$, the
neutrino temperature in the fully decoupled limit ($R=$ the
cosmic-scale factor). They are: $\rho_{\nu_\tau}$,
$\rho_\phi$ (where $\phi$ is any sterile, relativistic decay
product), $T_\gamma$, and $\Delta_e$ and $\Delta_\mu$, the
perturbations to the electron-neutrino and mu-neutrino
phase-space distributions.
Our calculations were done with two separate codes. The
first code tracks $\rho_{\nu_\tau}$,
$\rho_\phi$, $T_\gamma$, $\Delta_e$, and $\Delta_\mu$
as a function of $T$, for simplicity, using Boltzmann statistics.
These five quantities were then converted to functions of the
photon temperature using the $T(T_\gamma )$
relationship calculated, and their values
were then passed to the second code, a modified version
of the standard nucleosynthesis code \cite{kawano}.\footnote{The
correct statistics for all species are of course used in the
nucleosynthesis code; the five quantities are passed
as fractional changes (to the energy density, temperature and rates)
to minimize the error made by using Boltzmann statistics in the first code.}
We now discuss in more detail the four modifications.
\subsection{Energy density}
There are four contributions to
the energy density: massive tau neutrinos, sterile decay
products, two massless neutrino species, and the EM
plasma. Let us consider each in turn.
As mentioned earlier, we fix the relic abundance of tau
neutrinos assuming that freeze out occurs before nucleosynthesis
commences ($t\ll 1\,{\rm sec}$). We follow Ref.~\cite{ketal} in writing
\begin{equation}
\rho_{\nu_\tau} = r \left[ \frac{\sqrt{(3.151 T )^2 +
{m_\nu}^2}}{{3.151 T }}\right] \rho_\nu(m_\nu =0)\,\exp (-t/\tau_\nu ),
\end{equation}
where $r$ is the ratio of the number density of massive neutrinos to a massless
neutrino species, the $(3.151T)^2$ term takes account of the
kinetic energy of the neutrino, and the exponential factor
takes account of decays. The relic abundance is taken
from Ref.~\cite{ketal}; for a Dirac neutrino it is assumed
that all four degrees of freedom are populated for masses
greater than $0.3\,{\rm MeV}$ (see Ref.~\cite{ketal} for further
discussion).
Note that for temperatures much
less than the mass of the tau neutrino, $\rho_{\nu_\tau}/\rho_\nu
(m_\nu =0) = rm_\nu e^{-t/\tau_\nu}/3.151T$,
which increases as the scale factor
until the tau neutrinos decay; further, $rm_\nu$ determines the
energy density contributed by massive tau neutrinos and hence
essentially all of their effects on nucleosynthesis.
The relic neutrino abundance times mass ($rm_\nu$) is shown in Fig.~1.
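For illustration, the expression above for $\rho_{\nu_\tau}$ translates directly into code; the numerical values in the example call (a $20\,{\rm MeV}$ mass, $r=0.1$, a $100\,{\rm sec}$ lifetime) are placeholders chosen only to show the interface, not values used in our runs.
\begin{verbatim}
import numpy as np

def rho_nu_tau(T, t, m_nu, r, tau_nu, rho_massless):
    """Energy density of the massive tau neutrino: relic abundance ratio r,
    a kinetic-energy factor, the massless-species density, and an
    exponential depletion by decays.  T and m_nu in MeV, t and tau_nu in sec."""
    kinetic = np.sqrt((3.151 * T) ** 2 + m_nu ** 2) / (3.151 * T)
    return r * kinetic * rho_massless * np.exp(-t / tau_nu)

# placeholder example, in units of rho_nu(m_nu = 0):
print(rho_nu_tau(T=0.5, t=10.0, m_nu=20.0, r=0.1, tau_nu=100.0,
                 rho_massless=1.0))
\end{verbatim}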
The energy density of the sterile decay products is slightly more
complicated. Since the $\phi$'s are massless, their energy
density is governed by
\begin{equation} \label{eq:phi}
\frac{d\rho_\phi}{dT} = \frac{4\rho_\phi}{T} - {f_\phi\over T}
\frac{\rho_{\nu_\tau}} {H \tau_\nu},
\end{equation}
where the first term accounts for the effect of the expansion
of the Universe and the second accounts for the energy dumped into the
sterile sector by tau-neutrino decays. The quantity $f_\phi$
is the fraction of the tau-neutrino decay energy that goes into
sterile daughters: for $\nu_\tau \rightarrow$ all-sterile daughter
products, $f_\phi =
1$; for $\nu_\tau \rightarrow \phi + \nu_e$ or EM, $f_\phi =0.5$;
and for all other modes $f_\phi =0$. Eq.~(\ref{eq:phi})
was integrated numerically, and
$\rho_\phi$ was passed to the nucleosynthesis code
by means of a look-up table.
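A minimal sketch of this integration is shown below; it uses a simple explicit Euler step in place of the actual integrator, and the Hubble rate \texttt{H\_of\_T} and tau-neutrino density \texttt{rho\_nu\_tau\_of\_T} are assumed to be supplied by the rest of the code.
\begin{verbatim}
import numpy as np

def integrate_rho_phi(T_grid, f_phi, tau_nu, H_of_T, rho_nu_tau_of_T):
    """Integrate d(rho_phi)/dT = 4 rho_phi/T - (f_phi/T) rho_nu_tau/(H tau_nu)
    on a decreasing temperature grid T_grid (explicit Euler step)."""
    rho_phi = np.zeros_like(T_grid)
    for i in range(1, len(T_grid)):
        T = T_grid[i - 1]
        dT = T_grid[i] - T_grid[i - 1]          # negative: T decreases with time
        drho_dT = (4.0 * rho_phi[i - 1] / T
                   - f_phi / T * rho_nu_tau_of_T(T) / (H_of_T(T) * tau_nu))
        rho_phi[i] = rho_phi[i - 1] + drho_dT * dT
    return rho_phi
\end{verbatim}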
The neutrino seas were the most complicated to treat.
The contribution of the neutrino seas was divided into two parts, the
standard, unperturbed thermal contribution and the perturbation
due to the slight coupling of neutrinos to the EM plasma and
tau-neutrino decays,
\begin{equation}
\rho_\nu = \rho_{\nu 0} + \delta \rho_\nu.
\end{equation}
The thermal contribution is simply $6T^4/\pi^2$ per massless
neutrino species (two in our case). The second term is given
as an integral over the perturbation to the neutrino phase-space
distribution,
\begin{equation}
\delta \rho_\nu = \sum_{i=e,\mu} {2\over (2\pi )^3}
\int p d^3 p \Delta_i(p,t) ,
\end{equation}
where the factor of two accounts for neutrinos and antineutrinos.
Finally, there is the energy density of the EM plasma.
Since the electromagnetic plasma is in thermal equilibrium
it only depends upon $T_\gamma$:
\begin{equation}
\rho_{\rm EM} =
\frac{6{T_\gamma}^4}{\pi^2} + \frac{2{m_e}^3 T_\gamma} {\pi^2}
\left[K_1(m_e/T_\gamma ) + \frac{3 K_2(m_e/T_\gamma )}
{m_e/T_\gamma }\right] ,
\end{equation}
where $K_1$ and $K_2$ are modified Bessel functions.
We numerically compute $T_\gamma$ as a function of $T$ by
using the first law of thermodynamics.
\subsection{First law of thermodynamics}
Energy conservation in the expanding Universe is
governed by the first law of thermodynamics,
\begin{equation}\label{eq:first}
d[\rho_{\rm TOT} R^3] = -p_{\rm TOT} dR^3,
\end{equation}
where in our case $\rho_{\rm TOT} = \rho_{\rm EM} + \rho_{\nu 0} +
\delta \rho_\nu + \rho_\phi + \rho_{\nu_\tau}$, $p_{\rm TOT} =
p_{EM} + p_{\nu 0} + \delta p_\nu + p_\phi + p_{\nu_\tau}$,
$\delta p_\nu = \delta\rho_\nu /3$, $p_\phi = \rho_\phi /3$, and
\begin{equation}
p_{\rm EM} = {2T_\gamma^4\over \pi^2} +
{2m_e^2T_\gamma^2\over \pi^2}\,K_2(m_e/T_\gamma ) .
\end{equation}
Eq.~(\ref{eq:first}) can be rewritten in a more useful form,
\begin{equation}
\frac{dT_\gamma}{dt} =
{-3H\left(\rho_{\rm TOT}+ p_{\rm TOT} -4\rho_{\nu 0} /3\right)
-d(\delta\rho_\nu + \rho_\phi + \rho_{\nu_\tau})/dt \over
d\rho_{\rm EM}/dT_\gamma}.
\end{equation}
The quantity $d\rho_{\rm EM}/dT_\gamma$ is easily calculated,
and the time derivatives of the densities can either be solved for
analytically, or taken from the previous time step.
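For concreteness, the electromagnetic energy density and pressure entering the first law are simple functions of $T_\gamma$ and can be coded directly with modified Bessel functions; the sketch below (temperatures and masses in MeV, Boltzmann statistics as in the first code) is only meant to make the expressions explicit.
\begin{verbatim}
import numpy as np
from scipy.special import kn        # modified Bessel functions K_n

M_E = 0.511                         # electron mass in MeV

def rho_em(T_gamma):
    """Energy density of the EM plasma (photons plus e+- pairs)."""
    x = M_E / T_gamma
    return (6.0 * T_gamma ** 4 / np.pi ** 2
            + 2.0 * M_E ** 3 * T_gamma / np.pi ** 2
              * (kn(1, x) + 3.0 * kn(2, x) / x))

def p_em(T_gamma):
    """Pressure of the EM plasma."""
    x = M_E / T_gamma
    return (2.0 * T_gamma ** 4 / np.pi ** 2
            + 2.0 * M_E ** 2 * T_gamma ** 2 / np.pi ** 2 * kn(2, x))

def drho_em_dT(T_gamma, h=1e-6):
    """Numerical derivative d(rho_EM)/dT_gamma appearing in the
    evolution equation for T_gamma."""
    return (rho_em(T_gamma + h) - rho_em(T_gamma - h)) / (2.0 * h)
\end{verbatim}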
\subsection{Neutrino phase-space distribution functions}
The Boltzmann equations governing the neutrino phase-space
distribution functions in the standard case
were derived and solved in Ref.~\cite{sd}.
We briefly summarize that treatment here, focusing on the
modifications required to include massive tau-neutrino decays.
We start with the Boltzmann equation for the phase-space
distribution of neutrino species $a$ in the absence of decays:
\begin{eqnarray}\label{eq:boltzmann}
\frac{\partial f_a}{\partial t} -{H |p|^2\over E_a}
\frac{\partial f_a}{\partial E} & = & -\frac{1}{2E_a}
\sum_{processes} \int d\Pi_1 d\Pi_2 d\Pi_3(2\pi)^4
\delta^4({p}_a+{p}_1-{p}_2-{p}_3)\nonumber\\
&\times & |{\cal M}_{a+1\leftrightarrow 2+3}|^2 [f_a f_1-f_2 f_3],
\end{eqnarray}
where the processes summed over include
all the standard electroweak $2\leftrightarrow 2$ interactions
of neutrinos with themselves and the electromagnetic plasma, and
Boltzmann statistics have been used throughout.
We write the distribution functions for the electron and
muon neutrinos as an unperturbed part plus a small perturbation:
\begin{equation}
f_i (p,t) = \exp (-p/T) + \Delta_i(p,t),
\end{equation}
where we have assumed that both species are effectively
massless. During nucleosynthesis the photon temperature
begins to deviate from the neutrino temperature $T$, and
we define
$$\delta (t) = T_\gamma /T -1.$$
Eq.~(\ref{eq:boltzmann}) is expanded to lowest order in
$\Delta_i$ and $\delta (t)$, leading to master equations of the form:
\begin{equation}\label{eq:master}
\frac{p}{T}{\dot\Delta}_i(p,t) = 4 G_F^2 T^3
[-A_i(p,t)\Delta_i(p,t) + B_i(p,t)\delta(t)
+C_i(p,t) + C_i^\prime(p,t)],
\end{equation}
where $i=e,\mu$ and the expressions for $A_i$, $B_i$, $C_i$,
and $C_i^\prime$ are given in Ref.~\cite{sd} [in Eq.~(2.11d)
for $C_\mu$ the coefficient $(c+8)$ should be $(c+7)$].
In the context of tau-neutrino decays we treat decay-produced
muon neutrinos as a sterile species, and thus
we are only interested in modifying the master equation for electron
neutrinos to allow for decays. In the case of two-body decays
(e.g., $\nu_\tau \rightarrow \nu_e+\phi$
or $\nu_\tau \rightarrow \nu_e +$ EM) the additional term
that arises on the right-hand side of Eq.~(\ref{eq:master}) is
\begin{equation}
{p\over T}{\dot\Delta}_e(p,T) = \cdots +
{n_{\nu_\tau}\over \tau_\nu}\,{2\pi^2 \over pT}\,\delta (p-m_\nu /2),
\end{equation}
where $n_{\nu_\tau}$ is the number density of massive tau
neutrinos.\footnote{For $m_\nu$ we actually use our expression
for the total tau-neutrino energy $E_\nu= \sqrt{m_\nu^2+(3.151T)^2}$.
Except for very short lifetimes and small masses, $E_\nu\approx m_\nu$.}
The decay mode $\nu_\tau \rightarrow \nu_e +e^\pm$ has a three-body
final state, so that the energy distribution of electron neutrinos
is no longer a delta function. In this case, the source term is
\begin{equation}
{p\over T}{\dot\Delta}_e(p,T) = \cdots +
{32\pi^2 n_{\nu_\tau} p (3-4p/m_\nu ) \over \tau_\nu m_\nu^3 T}
\,\theta (m_\nu /2 - p),
\end{equation}
where for simplicity we have assumed that all particles except
the massive tau neutrino are ultrarelativistic.
\subsection{Weak-interaction rates}
Given $\Delta_e $, it is simple to
calculate the perturbations to the weak interaction rates
that convert protons to neutrons and vice versa
(see Ref.~\cite{sd} for details). The perturbations to
the weak rates are obtained by substituting $\exp (-p/T) +\Delta_e (p,t)$
for the electron phase-space distribution in the usual expressions
for the rates \cite{sd} and then expanding to lowest order.
The perturbations to the rates for proton-to-neutron conversion
and neutron-to-proton conversion (per nucleon) are respectively
\begin{eqnarray}
\delta\lambda_{pn} & = & \frac{1}{\lambda_0 \tau_n} \int_{m_e}^\infty EdE
(E^2-m_e^2)^{1/2} (E + Q)^2 \Delta_e(E+Q), \\
\delta\lambda_{np} & = & \frac{1}{\lambda_0 \tau_n} \int_{m_e}^\infty EdE
(E^2-m_e^2)^{1/2} (E - Q)^2 \Delta_e(E-Q),
\end{eqnarray}
where Boltzmann statistics have been used for all species,
$\tau_n$ is the neutron mean lifetime, $Q=1.293\,{\rm MeV}$ is the
neutron-proton mass difference, and
$$\lambda_0 \equiv \int_{m_e}^Q EdE(E^2-m_e^2)^{1/2}(E-Q)^2 .$$
The perturbations to the weak rates are computed in the first
code and passed to the nucleosynthesis code by means of a
look-up table. The unperturbed part of the weak rates is
computed by numerical integration in the nucleosynthesis
code; for all calculations we took the neutron mean lifetime to be $889\,{\rm sec}$.
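For concreteness, the perturbations to the weak rates can be evaluated by direct numerical quadrature once the distortion is known; the sketch below assumes a user-supplied function \texttt{Delta\_e(E)} (the perturbation evaluated at neutrino energy $E$ in MeV, taken to vanish for non-positive arguments) and replaces the infinite upper limit by a finite cutoff.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M_E, Q = 0.511, 1.293               # electron mass and n-p mass difference (MeV)

def lambda_0():
    """Normalization lambda_0 = int_{m_e}^{Q} E dE sqrt(E^2 - m_e^2) (E - Q)^2."""
    return quad(lambda E: E * np.sqrt(E ** 2 - M_E ** 2) * (E - Q) ** 2,
                M_E, Q)[0]

def delta_weak_rates(Delta_e, tau_n, E_max=50.0):
    """Perturbations to the p->n and n->p rates from the distorted nu_e
    spectrum, with the upper limit truncated at E_max (MeV)."""
    lam0 = lambda_0()
    dl_pn = quad(lambda E: E * np.sqrt(E ** 2 - M_E ** 2) * (E + Q) ** 2
                 * Delta_e(E + Q), M_E, E_max)[0]
    dl_np = quad(lambda E: E * np.sqrt(E ** 2 - M_E ** 2) * (E - Q) ** 2
                 * Delta_e(E - Q), M_E, E_max)[0]
    return dl_pn / (lam0 * tau_n), dl_np / (lam0 * tau_n)
\end{verbatim}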
\section{Results}
In this section we present our results for the four generic decay modes.
Mode by mode we discuss how the light-element abundances
depend upon the mass and lifetime of the tau neutrino
and derive mass/lifetime limits. We exclude a mass and
lifetime if, for no value of the baryon-to-photon ratio,
the light-element abundances can satisfy:
\begin{eqnarray}
Y_P & \le & 0.24 ;\\
{\rm D/H} & \ge & 10^{-5}; \\
({\rm D} +^3{\rm He})/{\rm H} & \le & 10^{-4};\\
^7{\rm Li}/{\rm H} &\le & 1.4\times 10^{-10}.
\end{eqnarray}
For further discussion of this choice of constraints to
the light-element abundances we refer the reader
to Ref.~\cite{walker}.
The $^4$He and D + $^3$He abundances play the most
important role in determining the excluded regions.
The mass/lifetime limits that follow necessarily
depend upon the range of acceptable primordial abundances that one
adopts, a fact that should be kept in mind when comparing
the work of different authors and assessing confidence levels.
Further, the relic abundances used by different
authors differ by 10\% to 20\%.
Lastly, the precise limit for a specific decay mode
will of course differ slightly from that derived for its ``generic class.''
In illustrating how the effects of a decaying tau neutrino
depend upon lifetime and in comparing different decay modes we
use as a standard case an initial (i.e., before decay
and $e^\pm$ annihilations) baryon-to-photon ratio $\eta_i =
8.25\times 10^{-10}$. In the absence of entropy production
(no decaying tau neutrino or decay modes 1 and 3 which produce
no EM entropy) the final
baryon-to-photon ratio $\eta_0 = 4\eta_i /11 = 3\times 10^{-10}$,
where $4/11$ is the usual factor that arises due to the entropy transfer
from $e^\pm$ pairs to photons. In the case of decay modes
2 and 4 there can be significant EM entropy production, and
the final baryon-to-photon ratio $\eta = \eta_0/(S_f/S_i)
\le \eta_0$ ($S_f/S_i$ is the ratio of the EM entropy per
comoving volume after decays to that before decays).
Even though $\eta_0$ does not
correspond to the present baryon-to-photon ratio if there has
been entropy production, we believe
that comparisons for fixed $\eta_0$ are best for
isolating the three different effects of a decaying tau
neutrino on nucleosynthesis. For reference,
in the absence of a decaying tau neutrino the $^4$He mass
fraction for our standard case is: $Y_P=0.2228$ (two massless
neutrino species) and $0.2371$ (three massless neutrino species).
\subsection{$\nu_\tau \rightarrow$ sterile daughter products}
Since we are considering lifetimes greater than $0.1\,{\rm sec}$, by which time muon
neutrinos are essentially decoupled, the muon neutrino is by our definition
effectively sterile, and examples of this decay mode include,
$\nu_\tau \rightarrow \nu_\mu + \phi$ where $\phi$ is some
very weakly interacting scalar particle (e.g., majoron)
or $\nu_\tau \rightarrow \nu_\mu +\nu_\mu + {\bar\nu}_\mu$.
For this decay mode the only effect of the unstable tau
neutrino on nucleosynthesis involves the energy density it and
its daughter products contribute. Thus, it is the simplest
case, and we use it as a ``benchmark'' for comparison to the
other decay modes. The light-element abundances
as a function of tau-neutrino lifetime are shown in Figs.~2-4
for a Dirac neutrino of mass $20\,{\rm MeV}$.
The energy density of the massive
tau neutrino grows relative to a massless neutrino species
as $rm_\nu/3T$ until the tau neutrino decays, after
which the ratio of energy density in the daughter products
to a massless neutrino species remains constant.
For tau-neutrino masses in the $0.3\,{\rm MeV}$ to $30\,{\rm MeV}$ mass
range and lifetimes greater than about a second the energy
density of the massive tau neutrino exceeds that of a massless neutrino
species before it decays, in spite of its smaller abundance
(i.e., $r\ll 1$). The higher energy density increases the
expansion rate and ultimately $^4$He production because
it causes the neutron-to-proton ratio to freeze out earlier and at
a higher value and because fewer neutrons decay before nucleosynthesis
begins. Since the neutron-to-proton ratio freezes out around
$1\,{\rm sec}$ and nucleosynthesis occurs at
around a few hundred seconds, the $^4$He abundance is only sensitive
to the expansion rate between one and a few hundred seconds.
In Fig.~2 we see that for short lifetimes ($\tau_\nu\ll 1\,{\rm sec}$)
the $^4$He mass fraction approaches that for two massless neutrinos
(tau neutrinos decay before their energy density becomes
significant). As expected, the $^4$He mass fraction increases with
lifetime leveling off at a few hundred
seconds at a value that is significantly greater than that
for three massless neutrino species.
The yields of D and $^3$He depend upon how much of these isotopes
are not burnt to $^4$He. This in turn depends upon competition
between the expansion rate and nuclear reaction rates: Faster expansion
results in more unburnt D and $^3$He. Thus the yields of D and
$^3$He increase with tau-neutrino lifetime, and begin to level
off for lifetimes of a few hundred seconds as this is when
nucleosynthesis is taking place (see Fig.~3).
The effect on the yield of $^7$Li is a bit more complicated.
Lithium production decreases with increasing $\eta$
for $\eta \mathrel{\mathpalette\fun <} 3\times 10^{-10}$ because the final abundance
is determined by competition between the expansion rate and
nuclear processes that destroy $^7$Li, and increases
with increasing $\eta$ for $\eta \mathrel{\mathpalette\fun >} 3\times 10^{-10}$
because the final abundance is determined by competition between
the expansion rate and nuclear processes that produce $^7$Li.
Thus, an increase in expansion rate leads to increased $^7$Li
production for $\eta \mathrel{\mathpalette\fun <} 3\times 10^{-10}$ and decreased $^7$Li
production for $\eta \mathrel{\mathpalette\fun >} 3\times 10^{-10}$; this is shown
in Fig.~4. Put another way,
the valley in the $^7$Li production curve shifts to
larger $\eta$ with increasing tau-neutrino lifetime.
We show in Figs.~5 and 6 the excluded
region of the mass/lifetime plane for a Dirac
and Majorana tau neutrino respectively.
As expected, the excluded mass range grows with lifetime,
asymptotically approaching $0.3\,{\rm MeV}$ to
$33\,{\rm MeV}$ (Dirac) and $0.4\,{\rm MeV}$ to
$30\,{\rm MeV}$ (Majorana). We note the significant dependence of the excluded
region on lifetime; our results are in good agreement with the
one other work where comparison is straightforward \cite{ketal},
and in general agreement with Refs.~\cite{st2,osu}.
\subsection{$\nu_\tau \rightarrow$ sterile + electromagnetic daughter products}
Again, based upon our definition of sterility, the sterile
daughter could be a muon neutrino; thus, examples of this
generic decay mode include $\nu_\tau \rightarrow
\nu_\mu + \gamma$ or $\nu_\tau \rightarrow \nu_\mu + e^\pm$.
Our results here are based upon a two-body decay (e.g.,
$\nu_\tau \rightarrow \nu_\mu + \gamma$), and
change only slightly in the case of a three-body decay
(e.g., $\nu_\tau \rightarrow \nu_\mu + e^\pm$), where a larger
fraction of the tau-neutrino mass goes into electromagnetic entropy.
Two effects now come into play: the energy density
of the massive tau neutrino and its daughter products speeds
up the expansion rate, tending to increase $^4$He, $^3$He, and D
production; and EM entropy production due to tau-neutrino decays reduces
the baryon-to-photon ratio (at the time of nucleosynthesis),
tending to decrease $^4$He production
and to increase D and $^3$He production. Both
effects tend to shift the $^7$Li valley (as a function
of $\eta_0$) to larger $\eta_0$.
While the two effects have the ``same sign'' for D, $^3$He, and
$^7$Li, they have opposite signs for $^4$He.
It is instructive to compare $^4$He production as a function of
lifetime to the previous ``all-sterile'' decay mode. Because of
the effect of entropy production, there is little
increase in $^4$He production
until a lifetime greater than $1000\,{\rm sec}$ or so. For
lifetimes greater than $1000\,{\rm sec}$ the bulk of the entropy
release takes place after nucleosynthesis,
and therefore does not affect the value of $\eta$ during nucleosynthesis.
Because of the competing effects on $^4$He production,
the impact of an unstable,
massive tau neutrino on nucleosynthesis is significantly less
than that in the all-sterile decay mode for lifetimes less than
about $1000\,{\rm sec}$. The excluded region of the mass/lifetime
plane is shown in Figs.~5 and 6. For lifetimes greater than about $1000\,{\rm sec}$
the excluded mass interval is essentially the same as
that for the all-sterile decay mode; for shorter lifetimes it
is significantly smaller.
Finally, because of entropy production, the final value of the
baryon-to-photon ratio is smaller for fixed initial
baryon-to-photon ratio: it is reduced by the
factor by which the entropy per comoving volume is increased.
In the limit of significant entropy production ($S_f/S_i
\gg 1$), this factor is given by (cf. Eq.~(5.73) of Ref.~\cite{kt})
\begin{equation}\label{eq:entropy}
S_f/S_i \simeq 0.13 rm_\nu \sqrt{\tau_\nu /{m_{\rm Pl}}} \simeq 1.5
\,{rm_\nu \over \,{\rm MeV}}\,\sqrt{\tau_\nu \over 1000 \,{\rm sec}}.
\end{equation}
A precise calculation of entropy production for this decay
mode is shown in Fig.~7. As can be seen in the figure
or from Eq.~(\ref{eq:entropy}), entropy production becomes
significant for lifetimes longer than about $100\,{\rm sec}$.
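
Purely for orientation, the scaling in Eq.~(\ref{eq:entropy}) can be
evaluated directly; the short Python sketch below does so for
illustrative values of $r$, $m_\nu$, and $\tau_\nu$ (these numbers are
assumptions for the example and are not the values used in our figures).
\begin{verbatim}
# Sketch: evaluate the approximate entropy-production factor of
# Eq. (entropy), valid only in the limit S_f/S_i >> 1.
import numpy as np

def entropy_factor(r, m_nu_mev, tau_nu_sec):
    """Approximate S_f/S_i for EM entropy release from tau-neutrino decay."""
    return 1.5 * r * m_nu_mev * np.sqrt(tau_nu_sec / 1000.0)

# Example: relic abundance r = 0.1 (relative to a massless species),
# m_nu = 20 MeV, tau_nu = 1000 sec  ->  S_f/S_i ~ 3.
print(entropy_factor(r=0.1, m_nu_mev=20.0, tau_nu_sec=1000.0))
\end{verbatim}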
\subsection{$\nu_\tau \rightarrow \nu_e$ + sterile daughter products}
Once again, by our definition of sterility this includes
decay modes such as $\nu_\tau \rightarrow \nu_e + \phi$
or $\nu_\tau \rightarrow \nu_e + \nu_\mu + {\bar\nu}_\mu$.
Here, we specifically considered the two-body decay mode $\nu_\tau
\rightarrow \nu_e + \phi$, though the results for the
three-body mode are very similar.
Two effects come into play: the energy density
of the massive tau neutrino and its daughter products
and the interaction of daughter electron
neutrinos with the nucleons and the ambient plasma.
The first effect has been discussed previously.
The second effect leads to some interesting new effects.
Electron neutrinos and antineutrinos produced by tau-neutrino decays
increase the weak rates that govern the neutron-to-proton
ratio. For short lifetimes ($\mathrel{\mathpalette\fun <} 30\,{\rm sec}$) and masses less
than about $10\,{\rm MeV}$ the main effect is to delay slightly
the ``freeze out'' of the neutron-to-proton ratio, thereby
decreasing the neutron fraction at the time of nucleosynthesis
and ultimately $^4$He production. For long lifetimes, or short lifetimes
and large masses, the perturbations to the $n\rightarrow p$
and $p\rightarrow n$ rates (per nucleon) are comparable; since after freeze out
of the neutron-to-proton ratio there are about six times as
many protons as neutrons, this has the effect of increasing
the neutron fraction and $^4$He production. This is illustrated in Fig.~8.
The slight shift in the neutron fraction does not affect
the other light-element abundances significantly.
The excluded portion of the mass/lifetime plane is shown
in Figs.~5 and 6. It agrees qualitatively with the results
of Ref.~\cite{st}.\footnote{The authors of Ref.~\cite{st}
use a less stringent constraint to $^4$He production,
$Y_P\le 0.26$; in spite of this, in some regions of
the $m_\nu -\tau_\nu$ plane their bounds are as, or even more, stringent.
This is presumably due to the neglect of electron-neutrino
interactions with the ambient plasma.} Comparing the limits for this
decay mode with the all-sterile mode, the effects of electron-neutrino
daughter products are clear: for long lifetimes
much higher mass tau neutrinos are excluded
and for short lifetimes low-mass tau neutrinos are allowed.
\subsection{$\nu_\tau \rightarrow \nu_e$ + electromagnetic daughter products}
Now we consider the most complex of the decay modes, where
none of the daughter products is sterile.
Specifically, we consider the decay mode $\nu_\tau \rightarrow
\nu_e+e^\pm$, though our results change very little for
the two-body decay $\nu_\tau \rightarrow \nu_e + \gamma$.
In this case all three effects previously discussed come into
play: the energy density of the massive tau neutrino and
its daughter products speeds up the expansion rate; the
entropy released dilutes the baryon-to-photon ratio;
and daughter electron neutrinos increase the weak-interaction
rates that control the neutron fraction. The net effect
on $^4$He production is shown in Fig.~9 for a variety of
tau-neutrino masses. The main difference between this
decay mode and the previous one, $\nu_\tau \rightarrow \nu_e$ +
sterile, is for lifetimes between
$30\,{\rm sec}$ and $300\,{\rm sec}$, where the increase in $^4$He production is
less due to the entropy production which
reduces the baryon-to-photon ratio at the time of nucleosynthesis.
The excluded region of the mass/lifetime plane is shown in
Figs.~5 and 6. It agrees qualitatively with the results
of Ref.~\cite{kaw}.\footnote{The authors of Ref.~\cite{kaw}
use a less stringent constraint to $^4$He production,
$Y_P\le 0.26$; in spite of this, in some regions of
the $m_\nu -\tau_\nu$ plane their bounds are as, or even more, stringent.
This is presumably due to the neglect of electron-neutrino
interactions with the ambient plasma.}
The excluded region for this decay mode
is similar to that of the previous
decay mode, except that lifetimes less than about $100\,{\rm sec}$ are
not excluded as entropy production has diminished $^4$He
production in this lifetime interval.
\subsection{Limits to a generic light species}
We can apply the arguments for the four decay
modes discussed above to a hypothetical species whose
relic abundance has frozen out at a value $r$ relative
to a massless neutrino species before the epoch of
primordial nucleosynthesis (also see Refs.~\cite{st1,st2}).
The previous limits become limits to $rm$ as a function
of lifetime $\tau$ and mass $m$, which are difficult
to display. With the exception of the effect that involves
daughter electron neutrinos, all other effects only depend
upon $rm$, which sets the energy density of the massive
particle and its daughter products. In Fig.~10, we show
that for lifetimes greater than about $100\,{\rm sec}$ and masses
greater than about $10\,{\rm MeV}$, the $^4$He production is
relatively insensitive to the mass of the decaying particle.
This means that for lifetimes greater than about $100\,{\rm sec}$
the limit to $rm$ should be relatively insensitive to
particle mass.
We show in Fig.~11 the excluded regions
of the $rm$-$\tau$ plane for a $20\,{\rm MeV}$ decaying particle.
In deriving these limits we used the same criteria for
acceptable light-element abundances and assumed three massless
neutrino species. The limits to $rm$ for decay modes without
electron-neutrino daughter products are strictly independent
of mass; the other two should be relatively insensitive
to the particle mass for $\tau \mathrel{\mathpalette\fun >} 100\,{\rm sec}$ (and the actual
limits are more stringent for $m > 20\,{\rm MeV}$).
\section{Laboratory and Other Limits}
There are a host of other constraints to the mass and lifetime
of the tau neutrino~\cite{sarkar}.
As a general rule, cosmological arguments, such as the one
presented above, pose {\it upper} limits to the tau-neutrino
lifetime for a given mass: cosmology
has nothing to say about a particle
that decays very early since it would not have affected
the ``known cosmological history.'' Laboratory experiments
on the other hand pose {\it lower}
limits to the lifetime because nothing happens inside
a detector if the lifetime of the decaying particle is too long.
Finally, astrophysical considerations generally rule out
bands of lifetime since ``signals'' can only be detected if (a) the tau
neutrinos escape the object of interest before decaying
and (b) decay before they pass by earthly detectors.
\subsection{Laboratory}
The most important limits of course are the direct limits to the
tau-neutrino mass. These have come down steadily over the past few
years. The current upper limits are $31\,{\rm MeV}$ and
$32.6\,{\rm MeV}$ \cite{labmass}.
If the tau neutrino has a mass greater than $2m_e = 1.02\,{\rm MeV}$,
then the decay $\nu_\tau
\rightarrow \nu_e+e^\pm$ takes place through ordinary
electroweak interactions at a rate
\begin{equation}\label{UET}
\Gamma = {G_{F}^2 m_\nu^5\over 192\pi^3} \vert U_{e\tau} \vert^2
\vert U_{ee} \vert^2 \simeq
{ (m_\nu /{\,{\rm MeV}})^5 \vert U_{e\tau} \vert^2
\over 2.9\times 10^4 ~{\rm sec}} ,
\end{equation}
where $U_{e\tau}$ and $U_{ee}$ are elements of the
unitary matrix that relates
mass eigenstates to weak eigenstates, the leptonic
equivalent of the Cabibbo-Kobayashi-Maskawa matrix. We note that
the rate could be larger (or even perhaps smaller) in models where
the decay proceeds through new interactions. Thus, limits to
$U_{e\tau}$ give rise to model-dependent limits to the tau-neutrino lifetime.
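
As a rough illustration of how a limit to $U_{e\tau}$ translates into a
lifetime, the Python sketch below inverts the numerical form of
Eq.~(\ref{UET}); the mass and mixing values are hypothetical, and the rate
assumes the decay proceeds only through ordinary electroweak interactions.
\begin{verbatim}
# Sketch: lifetime implied by Eq. (UET) for nu_tau -> nu_e + e+ + e-.
def lifetime_sec(m_nu_mev, U_etau_sq):
    """tau = 1/Gamma, with Gamma ~ (m_nu/MeV)^5 |U_etau|^2 / (2.9e4 sec)."""
    gamma = (m_nu_mev ** 5) * U_etau_sq / 2.9e4   # decay rate in 1/sec
    return 1.0 / gamma

# Example: m_nu = 20 MeV and |U_etau|^2 = 1e-6 give tau ~ 9e3 sec.
print(lifetime_sec(20.0, 1.0e-6))
\end{verbatim}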
A number of experiments have set limits to $U_{e\tau}$.
The most sensitive experiment in the mass range $1.5\,{\rm MeV}
< m_\nu < 4\,{\rm MeV}$ was performed at the power reactor in Gosgen,
Switzerland~\cite{gosgen}, which produces tau
neutrinos at a rate proportional to $\vert U_{e\tau}\vert^2$
through decay of heavy nuclei and $\nu_e-\nu_\tau$ mixing.
Above this mass range, experiments that search for
additional peaks in the positron spectrum of the
$\pi^+ \rightarrow e^+\nu$ decay (due to $\nu_e-\nu_\tau$ mixing)
provide the strictest limits. In the
mass range $4\,{\rm MeV} < m_\nu < 20\,{\rm MeV}$,
Bryman et al. \cite{bryman}\
set the limits shown in Fig.~12; for larger masses the best limits
come from Ref.~\cite{leener}.
There are also direct accelerator bounds to the lifetime of
an unstable tau neutrino that produces a photon or $e^\pm$ pair.
In particular, as has been recently emphasized by
Babu et al. \cite{babu}, the BEBC beam dump experiment~\cite{bebc}
provides model-independent limits based upon the direct search for the
EM decay products. These limits, while not quite as
strict as those mentioned above, are of interest
since they apply to the photon mode and to the $e^\pm$ mode
even if the decay proceeds through new interactions. The limit,
\begin{equation}
\tau_\nu > 0.18\,(m_\nu /\,{\rm MeV})\,\,{\rm sec} ,
\end{equation} is shown in Fig.~12.
\subsection{Astrophysical}
The standard picture of type II supernovae has the binding
energy of the newly born neutron star (about $3\times
10^{53}{\,\rm erg}$) shared equally by neutrinos
of all species emitted from a neutrinosphere of temperature of about $4\,{\rm MeV}$.
There are two types of limits based upon SN 1987A, and combined they
rule out a large region of $m_\nu - \tau_\nu$ plane.
First, if the tau neutrino decayed
after it left the progenitor supergiant, which has a radius
$R\simeq 3\times10^{12}\,{\rm cm}$, the high-energy daughter
photons could have been detected \cite{smm,ktsn,pvo}. The Solar Maximum
Mission (SMM) Gamma-ray Spectrometer set an upper
limit to the fluence of $\gamma$ rays during the ten
seconds in which neutrinos were detected:
\begin{equation}
f_\gamma < 0.9~{\,{\rm cm}}^{-2}; \qquad 4.1\,{\rm MeV} < E_\gamma < 6.4\,{\rm MeV}.
\end{equation}
As we will see shortly, if only one in $10^{10}$
of the tau neutrinos leaving the supernova produced a photon,
this limit would have been saturated. In the mass regime
of interest there are two ways out of
this constraint: The lifetime can be so
long that the arrival time was more than ten seconds
after the electron antineutrinos arrived, or
the lifetime can be so short that
the daughter photons were produced
inside the progenitor. We can take account of both of these
possibilities in the following formula for the
expected fluence of $\gamma$ rays:
\begin{equation}
f_{\gamma,10} = f_{\nu\bar\nu} W_\gamma B_\gamma\langle F_1 F_2 \rangle
\end{equation}
where the subscript $10$ reminds us that we are
only interested in the first ten seconds,
$f_{\nu\bar\nu} \simeq 1.4\times 10^{10}$ cm$^{-2}$ is the fluence of
a massless neutrino species, $W_\gamma \sim 1/4$ is the
fraction of decay photons produced with energies between
$4.1\,{\rm MeV}$ and $6.4\,{\rm MeV}$, $F_1$ is
the fraction of tau neutrinos that decay outside the progenitor,
and $F_2$ is the fraction of these that decay early enough so that the decay
products were delayed by less than ten seconds. The quantity
$B_\gamma$ is the branching ratio to a decay mode that includes a photon.
For $m_\nu \mathrel{\mathpalette\fun >} 1\,{\rm MeV}$ one expects the $\nu_e+e^\pm$ mode to be dominant;
however, ordinary radiative corrections should lead to
$B_\gamma \simeq 10^{-3}$ \cite{mohapatra}. Finally, angular brackets denote
an average over the Fermi-Dirac distribution of neutrino momenta,
\begin{equation}
\langle A\, \rangle \equiv {1\over 1.5\zeta(3) T^3}
\int_0^\infty {A\,dp\,p^2\over e^{E/T} + 1},
\end{equation}
where $T\simeq 4$ MeV is the temperature of the neutrinosphere and
$E = (p^2 + m_\nu^2)^{1/2}$.
To evaluate the fluence of gamma rays we need to know
$F_1$ and $F_2$. The fraction $F_1$ that decay outside the
progenitor is simply $e^{-t_1/\tau_L}$ where $t_1 = R/v = RE/p$
and the ``lab'' lifetime $\tau_L
= \tau E/m_\nu$. Of these, the fraction
whose decay products arrive {\it after}
ten seconds is $e^{-t_2/\tau_L}/e^{-t_1/\tau_L}$ where $t_2 = 10\,{\rm sec}
/(1-v/c)$; thus, $F_2 = 1 - e^{(t_1-t_2)/\tau_L}$.
Figure 12 shows this constraint assuming
a branching ratio $B_\gamma=10^{-3}$.
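
A minimal numerical sketch of this fluence estimate is given below
(Python, with velocities in units of $c$, momenta and temperatures in MeV,
and times in seconds). It sets $F_1F_2$ to zero whenever the two
requirements are incompatible ($t_1>t_2$), and the input values are
illustrative rather than those used to produce Fig.~12.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

ZETA3 = 1.2020569                      # Riemann zeta(3)

def mean_F1F2(m, tau, T=4.0, R_cm=3.0e12, t_win=10.0):
    """Fermi-Dirac average <F1*F2> for mass m (MeV), rest lifetime tau (sec)."""
    R_sec = R_cm / 3.0e10              # light-crossing time of the progenitor

    def integrand(p):
        E = np.sqrt(p**2 + m**2)
        v = p / E                      # velocity in units of c
        tau_lab = tau * E / m          # time-dilated lifetime
        t1 = R_sec / v                 # time needed to exit the progenitor
        t2 = t_win / (1.0 - v)         # latest decay giving < 10 sec delay
        if t2 <= t1:                   # cannot decay outside AND arrive in time
            f1f2 = 0.0
        else:                          # F1*F2 = exp(-t1/tau_L) - exp(-t2/tau_L)
            f1f2 = np.exp(-t1 / tau_lab) - np.exp(-t2 / tau_lab)
        return f1f2 * p**2 / (np.exp(E / T) + 1.0)

    val, _ = quad(integrand, 0.0, 50.0 * T, limit=200)
    return val / (1.5 * ZETA3 * T**3)

def gamma_fluence(m, tau, B_gamma=1.0e-3, f_nu=1.4e10, W_gamma=0.25):
    """Expected fluence f_gamma,10 in photons per cm^2."""
    return f_nu * W_gamma * B_gamma * mean_F1F2(m, tau)

# Illustrative point, to be compared with the SMM bound of 0.9 cm^-2:
print(gamma_fluence(m=10.0, tau=100.0))
\end{verbatim}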
The second constraint comes from observing that if tau
neutrinos decayed within the progenitor supergiant,
the energy deposited (up to about $10^{53}{\,\rm erg}$) would have
``heated up'' the progenitor so much as to conflict
with the observed optical luminosity of SN 1987A (and other
type II supernovae) \cite{mohapatra,schramm}.
We require
\begin{equation}
E_{\rm input} = \langle (1-F_1) \rangle E_\nu \mathrel{\mathpalette\fun <} 10^{47} {\,\rm erg} ,
\end{equation}
where $E_\nu \sim 10^{53}{\,\rm erg}$ is the energy carried off
by a massless neutrino species, and $1-F_1$ is the fraction
of tau neutrinos that decay within
the progenitor. This constraint is mode-independent since decay-produced
photons or $e^\pm$ pairs will equally well ``overheat'' the progenitor.
As Fig.~12 shows, the ``supernova-light'' bound is extremely powerful.
Finally, a note regarding our SN 1987A constraints.
We have assumed that a massive tau-neutrino
species has a Fermi-Dirac distribution with the same
temperature as a massless ($m_\nu \ll 10\,{\rm MeV}$)
neutrino species. This is almost certainly false.
Massive ($m_\nu \mathrel{\mathpalette\fun >} 10\,{\rm MeV}$ or so)
tau neutrinos will drop out of chemical equilibrium
(maintained by pair creation/annihilations and possibly
decays/inverse decays) interior to the usual neutrinosphere
as the Boltzmann factor suppresses annihilation
and pair creation rates relative to scattering rates.
This leads us to believe that we have actually {\it underestimated}
the fluence of massive neutrinos.
While the problem has yet to be treated rigorously,
we are confident that, if anything, our simplified treatment
results in limits that are overly conservative.
Accurate limits await a more detailed analysis \cite{sigl}.
\subsection{Cosmological}
The most stringent cosmological constraint for masses
$0.1\,{\rm MeV} \mathrel{\mathpalette\fun <} m \mathrel{\mathpalette\fun <} 100\,{\rm MeV}$ is the nucleosynthesis bound
discussed in this paper. Nonetheless, it is worthwhile to
mention some of the other cosmological limits since they are
based upon independent arguments. A stable tau neutrino
with mass in the MeV range contributes much more energy density than is
consistent with the age of the Universe.
Such a neutrino must be unstable, with a lifetime short
enough for its decay products to lose most of their energy
to ``red shifting'' \cite{dicus}.
The lifetime limit is mass dependent; a neutrino with a mass of about
$1\,{\rm MeV}$ must have a lifetime shorter than about $10^9\,{\rm sec}$,
and the constraint gets less severe for larger or smaller masses.
There is an even more stringent bound based upon the necessity
of the Universe being matter dominated by a red shift of
about $10^4$ in order to produce the observed large-scale structure
\cite{steigman}. Finally, there are other
nucleosynthesis bounds based upon the dissociation of the light
elements by decay-produced photons or electron-neutrinos \cite{fission}
and by $e^\pm$ pairs produced by the continuing annihilations
of tau neutrinos \cite{josh}.
\section{Summary and Discussion}
We have presented a comprehensive study of the effect of
an unstable tau neutrino on primordial nucleosynthesis.
The effects on the primordial abundances and the mass/lifetime
limits that follow depend crucially upon the decay
mode. In the context of primordial nucleosynthesis
we have identified four generic decay modes that bracket
the full range of possibilities: (1)
all-sterile daughter products; (2) sterile daughter product(s)
+ EM daughter product(s); (3) $\nu_e$ + sterile daughter product(s);
and (4) $\nu_e$ + EM daughter product(s). The excluded
regions of the tau-neutrino mass/lifetime plane for these
four decay modes are shown in Figs.~5 (Dirac) and 6 (Majorana).
In the limit of long lifetime ($\tau_\nu \gg 100\,{\rm sec}$), the
excluded mass range is: $0.3\,{\rm MeV} -33\,{\rm MeV}$ (Dirac) and
$0.4\,{\rm MeV} - 30\,{\rm MeV}$ (Majorana). Together with current
laboratory upper mass limits, $31\,{\rm MeV}$ (ARGUS) and $32.6\,{\rm MeV}$
(CLEO), our results very nearly exclude a long-lived tau neutrino
more massive than about $0.4\,{\rm MeV}$. Moreover, other
astrophysical and laboratory data exclude a tau neutrino
in the $0.3\,{\rm MeV} - 50\,{\rm MeV}$ mass range if its decay product(s)
include a photon or $e^\pm$ pair. Thus, if the mass of the
tau neutrino is in the range $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$, then its decay
products cannot include a photon or an $e^\pm$ pair and its
lifetime must be shorter than a few hundred seconds.
We note that the results of Ref.~\cite{osu} for the all-sterile
decay mode are more restrictive than ours, excluding masses
from about $0.1\,{\rm MeV}$ to about $50\,{\rm MeV}$ for $\tau_\nu \gg
100\,{\rm sec}$. This traces in
almost equal parts to (i) small ($\Delta Y \simeq +0.003$), but significant,
corrections to the $^4$He mass fraction and
(ii) slightly larger relic neutrino abundance.
With regard to the first difference, this illustrates the
sensitivity to the third significant figure of the $^4$He
mass fraction. With regard to the second difference, it is
probably correct that within the assumptions made
the tau-neutrino abundance during nucleosynthesis is
larger than what we used. However, other effects that have
been neglected probably lead to differences in
the tau-neutrino abundance of the
same magnitude. For example, for tau-neutrino masses
around the upper range of excluded masses, $50\,{\rm MeV} -100\,{\rm MeV}$,
finite-temperature corrections, hadronic final states
(e.g., a single pion), and tau-neutrino mixing have not
been included in the annihilation cross section and
are likely to be important at the 10\% level.
So is a tau neutrino with lifetime greater than
a few hundred seconds and mass greater than a fraction
of an $\,{\rm MeV}$ ruled out or not? Unlike a limit based upon
a laboratory experiment, it is impossible to place
standard error flags on an astrophysical or cosmological
bound. This is because of assumptions that
must be made and modeling that must be done. For example,
the precise limits that one derives depend
upon the adopted range of acceptable light-element abundances.
To be specific, in Ref.~\cite{osu} the upper limit of the
excluded mass range drops to around $38\,{\rm MeV}$ and the lower
limit increases to about $0.4\,{\rm MeV}$ when the
primordial $^4$He mass fraction is allowed to be as large as 0.245
(rather than 0.240). {\it In our opinion, a very strong case has been
made against a tau-neutrino mass in the mass range $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$
with lifetime much greater than $100\,{\rm sec}$; together
with the laboratory limits this very nearly excludes a
long-lived tau neutrino of mass greater than $0.4\,{\rm MeV}$.}
Perhaps the most interesting thing found in our study is the fact that
a tau neutrino of mass $1\,{\rm MeV}$ to $10\,{\rm MeV}$ and lifetime
$0.1\,{\rm sec}$ to $10\,{\rm sec}$ that decays to an electron neutrino
and a sterile daughter product can very significantly decrease
the $^4$He mass fraction (to as low as 0.18 or so). It has
long been realized that the standard picture of nucleosynthesis
would be in trouble if the primordial $^4$He mass fraction
were found to be smaller than about 0.23; within the standard
framework we have found one way out: an unstable tau
neutrino.\footnote{Based upon dimensional considerations the
lifetime for the mode $\nu_\tau \rightarrow \nu_e+\phi$
is expected to be $\tau_\nu \sim 8\pi f^2/m_\nu^3$,
where $f$ is the energy scale of the superweak interactions
that mediate the decay. For $\tau_\nu \sim 10\,{\rm sec}$ and
$m_\nu\sim 10\,{\rm MeV}$, $f\sim 10^9\,{\rm GeV}$.}
In principle, the possibility of an unstable tau neutrino
also loosens the primordial-nucleosynthesis
bound to the number of light species which is largely
based on the overproduction of $^4$He. However, an
unstable tau neutrino does not directly affect the important
primordial-nucleosynthesis bound to the
baryon-to-photon ratio (and $\Omega_B$) as this bound involves
the abundances of D, $^3$He, and $^7$Li and not $^4$He.
Finally, we translated our results for the tau neutrino into limits to the
relic abundance of an unstable, hypothetical particle species that
decays into one of the four generic decay modes discussed.
Those very stringent limits are shown in Fig.~11.
\vskip 1.5cm
\noindent We thank Robert Scherrer, David Schramm, Gary Steigman,
and Terry Walker for useful comments. This work was supported in part by the
DOE (at Chicago and Fermilab), by the NASA through
NAGW-2381 (at Fermilab), and GG's NSF predoctoral fellowship.
MST thanks the Aspen Center for Physics for its hospitality
where some of this work was carried out.
\vskip 2 cm
\section{Introduction}
Understanding the structure evolution in amorphous alloys during
thermal and mechanical treatments is important for tuning their
physical and mechanical properties~\cite{Greer16}. It is well
accepted by now that in contrast to crystalline solids where
plasticity is governed by topological line defects, known as
disclinations, the elementary plastic events in amorphous materials
involve collective rearrangements of a few tens of atoms or the
so-called shear transformations~\cite{Spaepen77,Argon79}. In a
driven system, these rearrangements can assemble into shear bands
where flow becomes sharply localized and which act as precursors to
fracture~\cite{Wang15,Zhong16,Zaccone17,Scudino17}. Once a shear
band is formed, the structural integrity can be recovered either by
heating a sample above the glass transition temperature and then
cooling back to the glass phase (resetting the structure) or,
alternatively, via mechanical agitation. For example, it was shown
using atomistic simulations that cracks in nanocrystalline metals
can be completely healed via formation of wedge disclinations during
stress-driven grain boundary migration~\cite{Demkowicz13}. It was
also found experimentally and by means of atomistic simulations that
after steady deformation of bulk metallic glasses, the shear bands
relax during annealing below the glass transition temperature and
the local diffusion coefficient exhibits a nonmonotonic
behavior~\cite{Binkowski16}. In the case of amorphous solids,
however, the effects of periodic loading and initial glass stability
on structural relaxation within the shear band domain, degree of
annealing, and changes in mechanical properties remain to be
understood.
\vskip 0.05in
During the last decade, molecular dynamics simulations were
particularly valuable in elucidating the atomic mechanisms of
structural relaxation, rejuvenation, and yielding in amorphous
materials under periodic loading
conditions~\cite{Lacks04,Priezjev13,
Sastry13,Reichhardt13,Priezjev14,IdoNature15,Priezjev16,Kawasaki16,
Priezjev16a,Sastry17,Priezjev17,OHern17,Priezjev18,Priezjev18a,
NVP18strload,Sastry18,PriMakrho05,PriMakrho09,Sastry19band,
PriezSHALT19,Ido2020,Priez20ba,Peng20,Jana20,Kawasaki20,KawBer20,BhaSastry20,
Priez20alt,Pelletier20,Priez20del}. Remarkably, it was found that in
athermal, disordered solids subjected to oscillatory shear in the
elastic range, the trajectories of atoms after a number of transient
cycles become exactly reversible and fall into the so-called `limit
cycles'~\cite{Reichhardt13,IdoNature15}. On the other hand, in the
presence of thermal fluctuations, the relaxation process generally
continues during thousands of cycles and the decay of the potential
energy becomes progressively slower over
time~\cite{Priezjev18,NVP18strload,PriezSHALT19}. More recently,
it was shown that the critical strain amplitude increases in more
stable athermal glasses~\cite{KawBer20,BhaSastry20}, whereas the
yielding transition can be significantly delayed in mechanically
annealed binary glasses at finite temperature~\cite{Priez20del}. In
general, the formation of a shear band during the yielding
transition is accelerated in more rapidly annealed glasses
periodically loaded at a higher strain amplitude or when the shear
orientation is alternated in two or three spatial
dimensions~\cite{Priezjev17,Priezjev18a,Sastry19band,
Priez20ba,Priez20alt}. Interestingly, after a shear band is formed
during cyclic loading, the glass outside the band remains well
annealed, and upon reducing strain amplitude below yield, the
initial shear band anneals out, which leads to reversible dynamics
in the whole domain~\cite{Sastry19band}. However, despite
extensive efforts, it remains unclear whether mechanical annealing
of a shear band or a crack in metallic glasses depends on the
preparation history, sample size and loading conditions.
\vskip 0.05in
In this paper, the influence of periodic shear deformation in the
elastic range on shear band annealing and mechanical properties of
binary glasses is studied using molecular dynamics simulations. The
system-spanning shear band is initially formed in stable glasses
that were either thermally or mechanically annealed. It will be
shown that small-amplitude oscillatory shear anneals out the shear
band and leads to nearly reversible deformation after a few hundred
cycles at finite temperature. Moreover, upon loading at higher
strain amplitudes, the glasses become increasingly better annealed,
which results in higher yield stress.
\vskip 0.05in
The rest of the paper is outlined as follows. The preparation
procedure, deformation protocol as well as the details of the
simulation model are described in the next section. The time
dependence of the potential energy, mechanical properties, and
spatial organization of atoms with large nonaffine displacements are
presented in section\,\ref{sec:Results}. The results are briefly
summarized in the last section.
\section{Molecular dynamics (MD) simulations}
\label{sec:MD_Model}
In the present study, the amorphous alloy is represented by the
binary (80:20) Lennard-Jones (LJ) mixture originally introduced by
Kob and Andersen (KA) about twenty years ago~\cite{KobAnd95}. In
this model, the interaction between different types of atoms is
strongly non-additive, thus allowing the formation of a disordered
structure upon slow cooling below the glass transition
temperature~\cite{KobAnd95}. More specifically, the pairwise
interaction is modeled via the LJ potential, as follows:
\begin{equation}
V_{\alpha\beta}(r)=4\,\varepsilon_{\alpha\beta}\,\Big[\Big(\frac{\sigma_{\alpha\beta}}{r}\Big)^{12}\!-
\Big(\frac{\sigma_{\alpha\beta}}{r}\Big)^{6}\,\Big],
\label{Eq:LJ_KA}
\end{equation}
with the parameters: $\varepsilon_{AA}=1.0$, $\varepsilon_{AB}=1.5$,
$\varepsilon_{BB}=0.5$, $\sigma_{AA}=1.0$, $\sigma_{AB}=0.8$,
$\sigma_{BB}=0.88$, and $m_{A}=m_{B}$~\cite{KobAnd95}. It should be
mentioned that a similar parametrization was used by Weber and
Stillinger to study the amorphous metal-metalloid alloy
$\text{Ni}_{80}\text{P}_{20}$~\cite{Weber85}. To save computational
time, the LJ potential was truncated at the cutoff radius
$r_{c,\,\alpha\beta}=2.5\,\sigma_{\alpha\beta}$. The total number of
atoms is fixed at $N=60\,000$ throughout the study. For clarity, all
physical quantities are reported in terms of the reduced units of
length, mass, and energy $\sigma=\sigma_{AA}$, $m=m_{A}$, and
$\varepsilon=\varepsilon_{AA}$. Using the LAMMPS parallel code, the
equations of motion were integrated via the velocity Verlet
algorithm with the time step $\Delta t_{MD}=0.005\,\tau$, where
$\tau=\sigma\sqrt{m/\varepsilon}$ is the LJ
time~\cite{Allen87,Lammps}.
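
For reference, the pair potential of Eq.~(\ref{Eq:LJ_KA}) with the
Kob-Andersen parameters listed above can be sketched in a few lines of
Python; the snippet assumes a plain truncation (no energy shift) at
$r_{c,\,\alpha\beta}=2.5\,\sigma_{\alpha\beta}$, which is one possible
reading of the cutoff described in the text.
\begin{verbatim}
# Sketch: Kob-Andersen LJ pair energy V_ab(r), truncated at 2.5*sigma_ab.
EPS = {("A", "A"): 1.0, ("A", "B"): 1.5, ("B", "B"): 0.5}
SIG = {("A", "A"): 1.0, ("A", "B"): 0.8, ("B", "B"): 0.88}

def v_lj(r, a, b):
    key = (a, b) if (a, b) in EPS else (b, a)
    eps, sig = EPS[key], SIG[key]
    if r >= 2.5 * sig:        # beyond the cutoff the pair does not interact
        return 0.0
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

print(v_lj(1.0, "A", "B"))    # attractive region of the A-B pair
\end{verbatim}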
\vskip 0.05in
All simulations were carried out at a constant density
$\rho=\rho_A+\rho_B=1.2\,\sigma^{-3}$ in a periodic box of linear
size $L=36.84\,\sigma$. It was previously found that the computer
glass transition temperature of the KA model at the density
$\rho=1.2\,\sigma^{-3}$ is
$T_c=0.435\,\varepsilon/k_B$~\cite{KobAnd95}. The system temperature
was maintained via the Nos\'{e}-Hoover
thermostat~\cite{Allen87,Lammps}. After thorough equilibration and
gradual annealing at the temperature $T_{LJ}=0.01\,\varepsilon/k_B$,
the system was subjected to periodic shear deformation along the
$xz$ plane as follows:
\begin{equation}
\gamma(t)=\gamma_0\,\text{sin}(2\pi t/T),
\label{Eq:shear}
\end{equation}
where $\gamma_0$ is the strain amplitude and $T=5000\,\tau$ is the
period of oscillation. The corresponding oscillation frequency is
$\omega=2\pi/T=1.26\times10^{-3}\,\tau^{-1}$. Once a shear band was
formed at $\gamma_0=0.080$, the glasses were periodically strained
at the strain amplitudes $\gamma_0=0.030$, $0.040$, $0.050$,
$0.060$, and $0.065$ during 3000 cycles. It was previously found
that in the case of poorly annealed (rapidly cooled) glasses, the
critical value of the strain amplitude at the temperature
$T_{LJ}=0.01\,\varepsilon/k_B$ and density $\rho=1.2\,\sigma^{-3}$
is $\gamma_0\approx0.067$~\cite{Priez20alt}. The typical simulation
during 3000 cycles takes about 80 days using 40 processors in
parallel.
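
The imposed strain history of Eq.~(\ref{Eq:shear}), with the amplitude
switched after the first 200 cycles, can be sketched as follows (Python;
the sampling interval and the post-switch amplitude $\gamma_0=0.060$ are
chosen only for illustration).
\begin{verbatim}
# Sketch: strain schedule gamma(t) = gamma_0(t)*sin(2*pi*t/T) in LJ time units.
import numpy as np

T_PERIOD = 5000.0                                    # oscillation period in tau

def strain(t, gamma0):
    return gamma0 * np.sin(2.0 * np.pi * t / T_PERIOD)

t = np.arange(0.0, 400 * T_PERIOD, 5.0)              # sample every 1000 MD steps
gamma0 = np.where(t < 200 * T_PERIOD, 0.080, 0.060)  # amplitude switch at t = 200 T
gamma = strain(t, gamma0)
\end{verbatim}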
\vskip 0.05in
For the simulation results presented in the next section, the
preparation and the initial loading protocols are the same as the
ones used in the previous MD study on the yielding transition in
stable glasses~\cite{Priez20del}. Briefly, the binary mixture was
first equilibrated at $T_{LJ}=1.0\,\varepsilon/k_B$ and
$\rho=1.2\,\sigma^{-3}$ and then slowly cooled with the rate
$10^{-5}\varepsilon/k_{B}\tau$ to $T_{LJ}=0.30\,\varepsilon/k_B$.
Next, one sample was cooled down to
$T_{LJ}=0.01\,\varepsilon/k_B$ during the time interval $10^4\,\tau$
(see Fig.\,\ref{fig:shapshot}). The other sample was mechanically
annealed at $T_{LJ}=0.30\,\varepsilon/k_B$ via cyclic loading at
$\gamma_0=0.035$ during 600 cycles, and only then cooled to
$T_{LJ}=0.01\,\varepsilon/k_B$ during $10^4\,\tau$. Thus, after
cooling both glasses to $T_{LJ}=0.01\,\varepsilon/k_B$, two glass
samples with different processing history and potential energies
were obtained. In what follows, these samples will be referred to as
\textit{thermally annealed} and \textit{mechanically annealed}
glasses.
\section{Results}
\label{sec:Results}
Amorphous alloys typically undergo physical aging, whereby the system
slowly evolves towards lower energy states, and generally this
process can be accelerated by external cyclic deformation within the
elastic range~\cite{Qiao19}. Thus, the structural relaxation of
disordered solids under periodic loading proceeds via collective,
irreversible rearrangements of
atoms~\cite{Sastry13,Priezjev18,Priezjev18a,Sastry18}, while at
sufficiently low energy levels, mechanical annealing becomes
inefficient~\cite{KawBer20}. The two glass samples considered in the
present study were prepared either via mechanical annealing at a
temperature not far below the glass transition temperature or by
computationally slow cooling from the liquid state. It was
previously shown that small-amplitude periodic shear deformation at
temperatures well below $T_g$ does not lead to further annealing of
these glasses~\cite{Priez20del}. Rather, the results presented
below focus on the annealing process of a shear band, introduced in
these samples by large periodic strain, and subsequent recovery of
their mechanical properties.
\vskip 0.05in
The time dependence of the potential energy at the end of each cycle
is reported in Fig.\,\ref{fig:poten_Quench_SB_heal} for the
\textit{thermally annealed} glass. In this case, the glass was first
subjected to oscillatory shear during 200 cycles with the strain
amplitude $\gamma_0=0.080$ (see the black curve in
Fig.\,\ref{fig:poten_Quench_SB_heal}). The strain amplitude
$\gamma_0=0.080$ is slightly larger than the critical strain
amplitude $\gamma_0\approx0.067$ at $T_{LJ}=0.01\,\varepsilon/k_B$
and $\rho=1.2\,\sigma^{-3}$~\cite{Priez20alt}, and, therefore, the
periodic loading induced the formation of a shear band across the
system after about 20 cycles. As shown in
Fig.\,\ref{fig:poten_Quench_SB_heal}, the process of shear band
formation is associated with a sharp increase in the potential
energy followed by a plateau at $U\approx-8.26\,\varepsilon$ with
pronounced fluctuations due to plastic flow within the band. It was
previously demonstrated that during the plateau period, the periodic
deformation involves two well separated domains with diffusive and
reversible dynamics~\cite{Priez20del}.
\vskip 0.05in
After the shear band became fully developed in the \textit{thermally
annealed} glass, the strain amplitude of periodic deformation was
reduced to values in the range $0.030\leqslant \gamma_0 \leqslant 0.065$ at
$t=200\,T$. The results in Fig.\,\ref{fig:poten_Quench_SB_heal}
indicate that the potential energy of the system is gradually
reduced when $t>200\,T$, and the energy drop increases at higher
strain amplitudes (except for $\gamma_0=0.065$). Notice that the
potential energy levels out at $t\gtrsim 1300\,T$ for
$\gamma_0=0.030$, $0.040$, and $0.050$, while the relaxation process
continues up to $t=3200\,T$ for $\gamma_0=0.060$. These results
imply that the shear band becomes effectively annealed by the
small-amplitude oscillatory shear, leading to nearly reversible
dynamics in the whole sample, as will be illustrated below via the
analysis of nonaffine displacements. By contrast, the deformation
within the shear band remains irreversible at the higher strain
amplitude $\gamma_0=0.065$ (denoted by the fluctuating grey curve in
Fig.\,\ref{fig:poten_Quench_SB_heal}). This observation can be
rationalized by realizing that the strain remains localized within
the shear band, and the effective strain amplitude within the band
is greater than the critical value
$\gamma_0\approx0.067$~\cite{Priez20alt}.
\vskip 0.05in
The potential energy minima for the \textit{mechanically annealed}
glass are presented in Fig.\,\ref{fig:poten_600cyc_SB_heal} for the
indicated strain amplitudes. It should be commented that the
preparation protocol, which included 600 cycles at $\gamma_0=0.035$
and $T_{LJ}=0.30\,\varepsilon/k_B$, produced an atomic configuration
with a relatively deep potential energy level, \textit{i.e.},
$U\approx-8.337\,\varepsilon$. Upon periodic loading at
$\gamma_0=0.080$ and $T_{LJ}=0.01\,\varepsilon/k_B$, the yielding
transition is delayed by about 450 cycles, as shown by the black
curve in Fig.\,\ref{fig:poten_600cyc_SB_heal} (the same data as in
Ref.\,\cite{Priez20del}). Similarly to the case of thermally
annealed glasses, the potential energy in
Fig.\,\ref{fig:poten_600cyc_SB_heal} is gradually reduced when the
strain amplitude is changed from $\gamma_0=0.080$ to the selected
values in the range $0.030\leqslant \gamma_0 \leqslant 0.065$.
Interestingly, the largest decrease in the potential energy at the
strain amplitude $\gamma_0=0.060$ is nearly the same ($\Delta
U\approx 0.03\,\varepsilon$) for both thermally and mechanically
annealed glasses. In addition, it can be commented that in both
cases presented in Figs.\,\ref{fig:poten_Quench_SB_heal} and
\ref{fig:poten_600cyc_SB_heal}, the potential energy remains above
the energy levels of initially stable glasses (before a shear band
is formed) even for loading at the strain amplitude
$\gamma_0=0.060$. The results of a previous MD study on mechanical
annealing of \textit{rapidly quenched} glasses imply that the energy
level $U\approx-8.31\,\varepsilon$ can be reached via cyclic loading
at $T_{LJ}=0.01\,\varepsilon/k_B$ but it might take thousands of
additional cycles~\cite{Priez20alt}.
\vskip 0.05in
While the potential energy within a shear band becomes relatively
large, the energy of the glass outside the band remains largely
unaffected during the yielding transition. As shown above, the
\textit{mechanically annealed} glass is initially more stable (has a
lower potential energy) than the \textit{thermally annealed} glass.
This in turn implies that the boundary conditions for the subyield
loading of the shear band are different in the two cases, and,
therefore, the potential energy change during the relaxation
process, in principle, might also vary. In other words, the
annealing of the shear band by small-amplitude periodic deformation
might be affected by the atomic structure of the adjacent glass.
However, the results in Figs.\,\ref{fig:poten_Quench_SB_heal} and
\ref{fig:poten_600cyc_SB_heal} suggest that the potential energy
change is roughly the same in both cases; although a more careful
analysis might be needed in the future to clarify this point.
\vskip 0.05in
We next report the results of mechanical tests that involve startup
continuous shear deformation in order to probe the effect of
small-amplitude periodic loading on the yield stress. The shear
modulus, $G$, and the peak value of the stress overshoot,
$\sigma_Y$, are plotted in Figs.\,\ref{fig:G_and_Y_thermq} and
\ref{fig:G_and_Y_600cyc} for glasses that were periodically deformed
with the strain amplitudes $\gamma_0=0.030$ and $0.060$. In each
case, the startup deformation was imposed along the $xy$, $xz$, and
$yz$ planes with the constant strain rate
$\dot{\gamma}=10^{-5}\,\tau^{-1}$. The data are somewhat scattered,
since simulations were carried out only for one realization of
disorder, but the trends are evident. First, both $G$ and $\sigma_Y$
are relatively small when shear is applied along the $xz$ plane at
$t=200\,T$ in Fig.\,\ref{fig:G_and_Y_thermq} and at $t=1000\,T$ in
Fig.\,\ref{fig:G_and_Y_600cyc} because of the shear band that was
formed previously at $\gamma_0=0.080$. Second, the shear modulus
and yield stress increase towards plateau levels during the next few
hundred cycles, and their magnitudes are greater for the larger
strain amplitude $\gamma_0=0.060$, since those samples were annealed
to deeper energy states (see Figs.\,\ref{fig:poten_Quench_SB_heal}
and \ref{fig:poten_600cyc_SB_heal}).
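
The shear modulus and the yielding peak reported here can be extracted from
a startup stress-strain curve by simple post-processing. The Python sketch
below is hypothetical (it is not the script used for
Figs.\,\ref{fig:G_and_Y_thermq} and \ref{fig:G_and_Y_600cyc}); the
elastic-fit range and the toy curve are chosen purely for illustration.
\begin{verbatim}
# Sketch: estimate G from a linear fit to the small-strain branch and take
# sigma_Y as the maximum of the stress overshoot.
import numpy as np

def modulus_and_yield(strain, stress, fit_max_strain=0.01):
    mask = strain <= fit_max_strain
    G = np.polyfit(strain[mask], stress[mask], 1)[0]   # slope of elastic branch
    sigma_Y = stress.max()                             # peak of the overshoot
    return G, sigma_Y

# Usage with a synthetic (toy) stress-strain curve:
eps = np.linspace(0.0, 0.2, 2001)
sig = 18.0 * eps * np.exp(-eps / 0.06)
print(modulus_and_yield(eps, sig))
\end{verbatim}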
\vskip 0.05in
The results in Figures\,\ref{fig:G_and_Y_thermq}\,(b) and
\ref{fig:G_and_Y_600cyc}\,(b) show that the yield stress is only
weakly dependent on the number of cycles in glasses that were
periodically strained at the smaller amplitude $\gamma_0=0.030$,
whereas for $\gamma_0=0.060$, the yield stress increases noticeably
and levels out at $\sigma_Y\approx0.9\,\varepsilon\sigma^{-3}$ for
the \textit{mechanically annealed} glass and at a slightly smaller
value for the \textit{thermally annealed} glass. It was previously
shown that the yield stress is slightly larger, \textit{i.e.},
$\sigma_Y\approx1.05\,\varepsilon\sigma^{-3}$, for rapidly quenched
glasses that were mechanically annealed at the strain amplitude
$\gamma_0=0.060$ for similar loading conditions~\cite{PriezSHALT19}.
This discrepancy might arise because in Ref.\,\cite{PriezSHALT19}
the glass was homogeneously annealed starting from the rapidly
quenched state, while in the present study, the potential energy
within the annealed shear-band domain always remains higher than in
the rest of the sample, thus resulting in spatially heterogeneous
structure. On the other hand, it was recently shown that the
presence of an interface between relaxed and rejuvenated domains in
a relatively large sample might impede strain
localization~\cite{Kosiba19}.
\vskip 0.05in
The relative rearrangements of atoms with respect to their neighbors
in a deformed amorphous system can be conveniently quantified via
the so-called nonaffine displacements. By definition, the nonaffine
measure $D^2(t, \Delta t)$ for an atom $i$ is computed via the
transformation matrix $\mathbf{J}_i$ that minimizes the following
expression for a group of neighboring atoms:
\begin{equation}
D^2(t, \Delta t)=\frac{1}{N_i}\sum_{j=1}^{N_i}\Big\{
\mathbf{r}_{j}(t+\Delta t)-\mathbf{r}_{i}(t+\Delta t)-\mathbf{J}_i
\big[ \mathbf{r}_{j}(t) - \mathbf{r}_{i}(t) \big] \Big\}^2,
\label{Eq:D2min}
\end{equation}
where $\Delta t$ is the time interval between two atomic
configurations, and the summation is performed over the nearest
neighbors located within $1.5\,\sigma$ from the position of the
$i$-th atom at $\mathbf{r}_{i}(t)$. The nonaffine quantity defined
by Eq.\,(\ref{Eq:D2min}) was originally introduced by Falk and
Langer in order to accurately detect the localized shear
transformations that involved swift rearrangements of small groups
of atoms in driven disordered solids~\cite{Falk98}. In the last few
years, this method was widely used to study the collective,
irreversible dynamics of atoms in binary glasses subjected to time
periodic~\cite{Priezjev16,Priezjev16a,Priezjev17,Priezjev18,Priezjev18a,
PriezSHALT19,Priez20ba,Peng20,KawBer20,Priez20alt} and startup
continuous~\cite{HorbachJR16,Schall07,Pastewka19,Priez20tfic,Priez19star,Ozawa20,ShiBai20}
shear deformation, tension-compression cyclic
loading~\cite{NVP18strload,Jana20}, prolonged elastostatic
compression~\cite{PriezELAST19,PriezELAST20}, creep~\cite{Eckert21}
and thermal cyclic
loading~\cite{Priez19one,Priez19tcyc,Priez19T2000,Priez19T5000,Guan20}.
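
For a single atom, the minimization in Eq.\,(\ref{Eq:D2min}) reduces to a
linear least-squares problem for the local transformation matrix
$\mathbf{J}_i$. The Python sketch below illustrates this; the neighbor
arrays and the test shear are synthetic examples, and periodic-image
handling is omitted.
\begin{verbatim}
# Sketch: Falk-Langer D^2 for one atom from the relative positions of its
# N_i neighbors at times t and t + Delta t (arrays of shape (N_i, 3)).
import numpy as np

def d2min(rel_old, rel_new):
    # Solve rel_old @ J.T ~ rel_new in the least-squares sense.
    JT, *_ = np.linalg.lstsq(rel_old, rel_new, rcond=None)
    residual = rel_new - rel_old @ JT
    return np.mean(np.sum(residual**2, axis=1))

# A purely affine deformation (here: simple shear) gives D^2 = 0 up to round-off.
rng = np.random.default_rng(0)
old = rng.normal(size=(12, 3))
shear = np.array([[1.0, 0.03, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(d2min(old, old @ shear.T))
\end{verbatim}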
\vskip 0.05in
The representative snapshots of \textit{thermally annealed} glasses
are presented in
Fig.\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
for the strain amplitude $\gamma_0=0.030$ and in
Fig.\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000}
for $\gamma_0=0.060$. For clarity, only atoms with relatively large
nonaffine displacements during one oscillation period are displayed.
Note that the typical cage size at $\rho=1.2\,\sigma^{-3}$ is about
$0.1\,\sigma$~\cite{Priezjev13}, and, therefore, the displacements
of atoms with $D^2(n\,T, T)>0.04\,\sigma^2$ correspond to
cage-breaking events. It can be clearly seen in the panel (a) of
Figures\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
and
\ref{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000},
that the shear band runs along the $yz$ plane right after switching
to the subyield loading regime. As expected, the magnitude of
$D^2(200\,T, T)$ on average decays towards the interfaces. Upon
continued loading, the shear band becomes thinner and eventually
breaks up into isolated clusters whose size is reduced over time.
The coarsening process is significantly slower for the strain
amplitude $\gamma_0=0.060$ (about 1000 cycles) than for
$\gamma_0=0.030$ (about 200 cycles). This trend is consistent with
the decay of the potential energy denoted in
Fig.\,\ref{fig:poten_Quench_SB_heal} by the red and orange curves.
\vskip 0.05in
Similar conclusions can be drawn by visual inspection of consecutive
snapshots of the \textit{mechanically annealed} glass cyclically
loaded at the strain amplitude $\gamma_0=0.030$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100})
and at $\gamma_0=0.060$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}).
It can be observed that the shear band is initially oriented along
the $xy$ plane, which is consistent with a relatively large value of
the yield stress along the $xy$ direction at $t=1000\,T$ in
Fig.\,\ref{fig:G_and_Y_600cyc}. The atomic trajectories become
nearly reversible already after about 10 cycles at the strain
amplitude $\gamma_0=0.030$, as shown in
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100},
while isolated clusters of atoms with large nonaffine displacements
are still present after about 2000 cycles at $\gamma_0=0.060$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}).
Altogether these results indicate that oscillatory shear deformation
with a strain amplitude just below the critical value can be used to
effectively anneal a shear band and make the amorphous material
stronger.
\section{Conclusions}
In summary, the process of shear band annealing in metallic glasses
subjected to small-amplitude periodic shear deformation was examined
using molecular dynamics simulations. The glass was modeled as a
binary mixture with non-additive interaction between atoms of
different types, and the shear band was initially developed in
stable glasses under oscillatory shear above the yielding point. It
was shown that periodic loading in the elastic range results in a
gradual decay of the potential energy over consecutive cycles, and
upon increasing strain amplitude, lower energy states can be
accessed after thousands of cycles. Furthermore, the spatiotemporal
analysis of nonaffine displacements demonstrated that a shear band
becomes thinner and breaks into separate clusters whose size is
reduced upon continued loading. Thus, in a wide range of strain
amplitudes below yield, the cyclic loading leads to nearly
reversible dynamics of atoms at finite temperature. Lastly, both the
shear modulus and yield stress saturate to higher values as the
shear band region becomes better annealed at higher strain
amplitudes.
\section*{Acknowledgments}
Financial support from the National Science Foundation (CNS-1531923)
is gratefully acknowledged. The article was prepared within the
framework of the HSE University Basic Research Program and funded in
part by the Russian Academic Excellence Project `5-100'. The
simulations were performed at Wright State University's Computing
Facility and the Ohio Supercomputer Center. The molecular dynamics
simulations were carried out using the parallel LAMMPS code
developed at Sandia National Laboratories~\cite{Lammps}.
\begin{figure}[t]
\includegraphics[width=9.0cm,angle=0]{system_snapshot_T001.pdf}
\caption{(Color online) A snapshot of the \textit{thermally
annealed} glass at the temperature $T_{LJ}=0.01\,\varepsilon/k_B$.
The system consists of 48\,000 atoms of type \textit{A} (large blue
circles) and 12\,000 atoms of type \textit{B} (small red circles) in
a periodic box of linear size $L=36.84\,\sigma$. Atoms are not shown
to scale. The black arrows indicate the direction of oscillatory
shear deformation along the $xz$ plane. }
\label{fig:shapshot}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{poten_xz_T001_quench_amp080_log_heal_amp030_060.pdf}
\caption{(Color online) The dependence of the potential energy
minima (at zero strain) on the number of cycles for the indicated
values of the strain amplitude. The shear band was formed in the
\textit{thermally annealed} glass during the first 200 cycles at the
strain amplitude $\gamma_0=0.080$ (the black curve). The system
temperature is $T_{LJ}=0.01\,\varepsilon/k_B$ and the oscillation
period is $T=5000\,\tau$. }
\label{fig:poten_Quench_SB_heal}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{poten_xz_T001_600cyc_amp080_log_heal_amp030_060.pdf}
\caption{(Color online) The variation of the potential energy (at
the end of each cycle) as a function of the cycle number for the
selected strain amplitudes. The shear band was introduced in the
\textit{mechanically annealed} glass after 1000 cycles at the strain
amplitude $\gamma_0=0.080$ (the black curve; see text for details).
The time is reported in terms of oscillation periods, \textit{i.e.},
$T=5000\,\tau$. The temperature is $T_{LJ}=0.01\,\varepsilon/k_B$. }
\label{fig:poten_600cyc_SB_heal}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{G_sigY_cycle_thermq.pdf}
\caption{(Color online) The shear modulus $G$ (in units of
$\varepsilon\sigma^{-3}$) and yielding peak $\sigma_Y$ (in units of
$\varepsilon\sigma^{-3}$) as a function of the cycle number for the
\textit{thermally annealed} glass. The startup continuous shear
with the strain rate $\dot{\gamma}=10^{-5}\,\tau^{-1}$ was applied
along the $xy$ plane (circles), $xz$ plane (squares), and $yz$ plane
(diamonds). Before startup deformation, the samples were
periodically deformed with the strain amplitudes $\gamma_0=0.030$
(solid blue) and $\gamma_0=0.060$ (dashed red). The time range is
the same as in Fig.\,\ref{fig:poten_Quench_SB_heal}. }
\label{fig:G_and_Y_thermq}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{G_sigY_cycle_600cyc.pdf}
\caption{(Color online) The shear modulus $G$ (in units of
$\varepsilon\sigma^{-3}$) and yielding peak $\sigma_Y$ (in units of
$\varepsilon\sigma^{-3}$) versus cycle number for the
\textit{mechanically annealed} glass. The startup shear deformation
with the strain rate $\dot{\gamma}=10^{-5}\,\tau^{-1}$ was imposed
along the $xy$ plane (circles), $xz$ plane (squares), and $yz$ plane
(diamonds). Before continuous shear, the samples were cyclically
deformed with the strain amplitudes $\gamma_0=0.030$ (solid blue)
and $\gamma_0=0.060$ (dashed red). The same cycle range as in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. }
\label{fig:G_and_Y_600cyc}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100.pdf}
\caption{(Color online) A series of snapshots of atomic
configurations during periodic shear with the strain amplitude
$\gamma_0=0.030$. The loading conditions are the same as in
Fig.\,\ref{fig:poten_Quench_SB_heal} (the red curve). The nonaffine
measure in Eq.\,(\ref{Eq:D2min}) is (a) $D^2(200\,T,
T)>0.04\,\sigma^2$, (b) $D^2(205\,T, T)>0.04\,\sigma^2$, (c)
$D^2(220\,T, T)>0.04\,\sigma^2$, and (d) $D^2(300\,T,
T)>0.04\,\sigma^2$. The colorcode in the legend denotes the
magnitude of $D^2$. Atoms are not shown to scale. }
\label{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000.pdf}
\caption{(Color online) The position of atoms in the thermally
annealed glass subjected to periodic shear with the strain amplitude
$\gamma_0=0.060$. The corresponding potential energy is denoted by
the orange curve in Fig.\,\ref{fig:poten_Quench_SB_heal}. The
nonaffine measure is (a) $D^2(200\,T, T)>0.04\,\sigma^2$, (b)
$D^2(220\,T, T)>0.04\,\sigma^2$, (c) $D^2(300\,T,
T)>0.04\,\sigma^2$, and (d) $D^2(1200\,T, T)>0.04\,\sigma^2$. The
magnitude of $D^2$ is defined in the legend. }
\label{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100.pdf}
\caption{(Color online) Instantaneous snapshots of the binary glass
periodically sheared with the strain amplitude $\gamma_0=0.030$. The
data correspond to the red curve in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. The nonaffine quantity is (a)
$D^2(1000\,T, T)>0.04\,\sigma^2$, (b) $D^2(1005\,T,
T)>0.04\,\sigma^2$, (c) $D^2(1010\,T, T)>0.04\,\sigma^2$, and (d)
$D^2(1100\,T, T)>0.04\,\sigma^2$. The colorcode denotes the
magnitude of $D^2$. }
\label{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000.pdf}
\caption{(Color online) Atomic positions in the binary glass
cyclically loaded at the strain amplitude $\gamma_0=0.060$. The data
are taken from the selected time intervals along the orange curve in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. The nonaffine quantity is (a)
$D^2(1000\,T, T)>0.04\,\sigma^2$, (b) $D^2(1100\,T,
T)>0.04\,\sigma^2$, (c) $D^2(1200\,T, T)>0.04\,\sigma^2$, and (d)
$D^2(3000\,T, T)>0.04\,\sigma^2$. $D^2$ is defined in the legend. }
\label{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}
\end{figure}
\bibliographystyle{prsty}
\section{Phonetics} \label{sec:intro}
Phonetic variation in speech is a complex and fascinating phenomenon. The sound of our speech is influenced by the communities and groups we belong to, places we come from, the immediate social context of speech, and many physiological factors. There is acoustic variation in speech due to sex and gender specific differences in articulation \citep{Huber1999}, age \citep{Safavi2018}, social class and ethnicity \citep{Clayards}, and individual idiosyncrasies of sound production \citep{Noiray2014VariabilityAcoustics}. This linguistic variation is relevant to many fields of study like anthropology, economics and demography
\citep{Ginsburgh2014}, and has connections to the study of speech production and perception in the human brain. It helps us understand how languages developed in the past, and the evolutionary links that still exist between languages today \citep{Pigoli}. Modelling phonetic variation is also important for many practical applications, like speech recognition and speech synthesis. In this work, we study one source of variation in particular: geographical accent variation.
To describe phonetic variation conveniently, in his seminal work \textit{Accents of English}, \citet{wells_1982} introduced lexical sets, which are groups of words containing vowels that are pronounced the same way within an accent. The \textit{trap} lexical set contains words like trap, cat and man, and the \textit{bath} lexical set contains words like bath, class and grass. In Northern English accents both \textit{trap} and \textit{bath} words use the `short a' vowel /\ae/. In Southern English accents \textit{trap} words use /\ae/ and \textit{bath} words use the `long a' vowel /\textipa{A}/; this is known as the trap-bath split.
The trap-bath split is one of the most well studied geographical accent differences. The geographical accent variation in sounds like these has historically been studied using written transcriptions of speech from surveys and interviews by trained linguists. These were used to construct isogloss maps (see Figure~\ref{fig:isogloss}) to visualise regions having the same dialect. \citet{Upton1996} explain that in reality these isoglosses are not sharp boundaries, and they are drawn to show only the most prominent linguistic variation in a region for the sake of simplicity. The boundaries are also constantly moving and changing over time.
\begin{figure}[h]
\centering
\includegraphics[width=2.5in]{figs/trap-bath-isogloss.jpg}
\caption{Isoglosses for the ``class'' vowel in England. Reproduced with permission from \citet[][p.\ 6--7]{Upton1996}.}
\label{fig:isogloss}
\end{figure}
More recently, advances in statistical methods and technology have allowed accent variation to be modelled by directly using audio recordings of speech. A sound can be represented as a set of smooth curves, and functional data analysis \citep[FDA;][]{Ramsay2005,ferraty:vieu:2006,horvath:2012:book} offers techniques to model variation in these curves. This work demonstrates one such approach, in which we analyse variation in vowel sounds using techniques from FDA and generalised linear models.
This paper has two main contributions. The first contribution is to use functional data analysis to classify vowels by directly using speech recordings: we demonstrate two approaches for classifying \textit{bath} vowels as Northern or Southern. The first approach models variation in formant curves (see Section~\ref{sec:formants}) using a functional linear model. The second approach models variation in mel-frequency cepstral coefficient (MFCC) curves (see Section~\ref{sec:mfcc}) through penalised logistic regression on functional principal components, and it can be used to resynthesise vowel sounds in different accents, allowing us to ``listen to the model''. Both approaches classify accents using the temporal dynamics of the MFCC or formant curves in sounds. These two classifiers were trained using a dataset of labelled audio recordings that was collected for this paper in an experimental setup \citep{Koshy2020_shahin}.
The second contribution is to construct maps that visualise geographic variation in the \textit{bath} vowel that can be attributed to typical Northern and Southern accent differences, using a soap film smoother. For this we use the audio BNC dataset \citep{BNC}, which is a representative sample of accents in Great Britain. The resulting maps show a geographical variation in the vowel similar to what is seen in isogloss maps like Figure~\ref{fig:isogloss}.
The paper is structured as follows. In Section~\ref{sec:preprocessing}, we introduce two ways of representing vowel sounds as multivariate curves. Section~\ref{sec:data} introduces the two datasets used in this analysis, and the preprocessing steps involved. Section~\ref{sec:classify} gives the two models for classifying \textit{bath} vowels, and Section~\ref{sec:maps} presents the maps constructed to visualise geographical accent variation. We conclude with a discussion of the results in Section~\ref{sec:discussion}.
\section{Sound as data objects} \label{sec:preprocessing}
Sound is a longitudinal air pressure wave. Microphones measure the air pressure at fixed rates, for example at 16 kHz (Hz is a unit of frequency representing samples per second). The waveform of the vowel in the word ``class'' in Figure~\ref{fig:waveform} shows this rapidly oscillating air pressure wave as measured by a microphone. This signal can be transformed in several ways to study it; for example as a spectrogram, formants, or mel-frequency cepstral coefficients (MFCCs), see Sections~\ref{sec:spec}, \ref{sec:formants} and \ref{sec:mfcc}.
\begin{figure}[hb]
\centering
\includegraphics[width=3in]{figs/waveform.pdf}
\caption{Sound wave of the vowel from a single ``last'' utterance.}
\label{fig:waveform}
\end{figure}
\subsection{Spectrograms}
\label{sec:spec}
We begin by defining the spectrogram of a sound. A spectrogram is a time-frequency representation of a sound: it reveals how the most prominent frequencies in a sound change over time. To define it precisely, let us denote the sound wave as a time series $\{s(t): t = 1, \ldots, T\}$, where $s(t)$ is the deviation from normal air pressure at time $t$. We can define $s(t)=0$ for $t\le 0$ or $t>T$. Let $w: \mathbb{R} \rightarrow \mathbb{R}$ be a symmetric window function which is non-zero only in the interval $[-\frac{M}{2},\frac{M}{2}]$ for some $M<T$. The Short-Time Fourier Transform of $\{s(t)\}_{t=1}^T$ is computed as
\begin{align*}
\text{STFT}(s)(t, \omega) &= \sum_{u=-\infty}^\infty s(u)w(u-t)\text{exp}(-i\omega u) \\
& = \sum_{u=1}^T s(u)w(u-t)\text{exp}(-i\omega u),
\end{align*}
for $t=1,\ldots,T$, and $\omega \in \{2\pi k/N: k=0, \ldots, N-1\}$ for some $N\ge T$ which is a power of 2. The window width $M$ is often chosen to correspond to a 20 ms interval.
The spectrogram of $\{s(t)\}_{t=1}^T$ is then defined as
\begin{align*}
\text{Spec}(s)(t, \omega) & = |\text{STFT}(s)(t, \omega)|^2.
\end{align*}
At a time point $t$, the spectrogram shows the magnitude of different frequency components $\omega$ in the sound. Figure~\ref{fig:formants} shows spectrograms of recordings of different vowels, with time on the x-axis, frequency on the y-axis, and colour representing the amplitude of each frequency. The dark bands are frequency peaks in the sound, which leads us to the concept of formants.
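For illustration, the spectrogram can be computed directly from this definition. The following minimal R sketch (with an assumed 20 ms Hann window, 50\% frame overlap and zero-padding length, which are example choices rather than settings used later in the paper) does so for a waveform \texttt{s} sampled at 16 kHz:
\begin{verbatim}
spectrogram <- function(s, fs = 16000, win_ms = 20, N = 512) {
  M <- round(fs * win_ms / 1000)                        # window width in samples
  w <- 0.5 - 0.5 * cos(2 * pi * (0:(M - 1)) / (M - 1))  # Hann window
  starts <- seq(1, length(s) - M + 1, by = M %/% 2)     # frames with 50% overlap
  stft <- sapply(starts, function(t0) {
    frame <- s[t0:(t0 + M - 1)] * w                     # windowed segment
    fft(c(frame, rep(0, N - M)))                        # zero-padded DFT
  })
  Mod(stft[1:(N / 2 + 1), ])^2                          # |STFT|^2 up to Nyquist
}
\end{verbatim}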
\begin{figure}[p]
\centering
\includegraphics[width=4in]{figs/Spectrograms_of_syllables_dee_dah_doo.png}
\caption{In these spectrograms of the syllables \textit{dee, dah, doo}, the dark
bands are the formants of each vowel and the overlaid red dotted lines are estimated formant trajectories. The y-axis represents frequency and
darkness represents intensity \citep{kluk2007}. }
\label{fig:formants}
\end{figure}
\subsection{Formants}
\label{sec:formants}
Formants are the strongest frequencies in a vowel sound, observed as high-intensity bands in the spectrogram of the sound. By convention they are numbered in order of increasing frequency, $\text{F}_1, \text{F}_2, \ldots$.
Formants are produced by the resonating cavities and tissues of the vocal tract \citep{Johnson2005}. The resonant frequencies depend on the shape of the vocal tract, which is influenced by factors like rounding of the lips, and height and shape of the tongue (illustrated in Figure~\ref{fig:vocaltract}). The pattern of these frequencies is what distinguishes different vowels. They are particularly important for speech perception because of their connection to the vocal tract itself, and not the vocal cords. Listeners use formants to identify vowels even when they are spoken at different pitches, or when the vowels are whispered and the vocal cords don't vibrate at all \citep{Johnson2005}. One can also sometimes ``hear'' a person smile as they speak, because the act of smiling changes the shapes of the vocal cavities and hence the formants produced \citep{Ponsot2018}.
\begin{figure}[h]
\centering
\includegraphics[width=2in]{figs/vocaltract-cc.pdf}
\caption{This diagram shows how varying the height of the tongue creates different vowels \citep{CC2008_shahin}.}
\label{fig:vocaltract}
\end{figure}
\subsection{Mel-Frequency Cepstral Coefficients}
\label{sec:mfcc}
Mel-frequency cepstral coefficients (MFCCs) are a further transformation of the spectrogram, and are often used in speech recognition and speech synthesis. The way they are constructed is related to how the human auditory system processes acoustic input; in particular, how different frequency ranges are filtered through the cochlea in the inner ear. This filtering is the reason humans can distinguish between low frequencies better than high frequencies. MFCCs roughly correspond to the energy contained in different frequency bands, but are not otherwise easily interpretable.
There are many variants of MFCCs; we use the one from \citet{Erro2011,Erro2014}, which allows for high-fidelity sound resynthesis.
MFCCs are computed in two steps as follows \citep{Tavakoli2019}. First the mel-spectrogram is computed from the spectrogram, using a mel scale filter bank with $F$ filters $(b_{f,k})_{k=0,\ldots,N-1}$, $f=0, \ldots, F$. The mel scale is a perceptual scale of pitches, under which pairs of sounds that are perceptually equidistant in pitch are also equidistant in mel units.
This is unlike the linear Hz scale, in which a pair of low frequencies will sound further apart than an equidistant pair of high frequencies.
The mapping from Hz ($f$) to mels ($m$) is given by $m=2595\, \text{log}_{10}(1+f/700)$, shown in Figure~\ref{fig:melfilter}. The mel-spectrogram is defined as
\begin{align*}
\text{MelSpec}(s)(t, f) &= \sum_{k=0}^{N-1} \text{Spec}(s)(t, 2\pi k/N)b_{f,k}.
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[height=3in]{figs/melhz.pdf}
\caption{Mapping from Hz to mel. A pair of high frequencies on the Hz scale sound more similar to the human ear than an equidistant pair at low frequencies. This is captured by the mel scale.}
\label{fig:melfilter}
\end{figure}
In the second step, we take the inverse Fourier transform of the logarithm of this mel-spectrogram. The first $M$ resulting coefficients are the MFCCs,
\begin{align*}
\text{MFCC}(s)(t, m) &= \frac{1}{F}\sum_{f=0}^{F} \text{log}\left(\text{MelSpec}(s)(t, f)\right) \text{exp} \left( i\frac{2\pi (m-1)f}{F+1} \right).
\end{align*}
At each time point $t$ we have $M$ MFCCs. We use the \texttt{ahocoder} software \citep{Erro2014} to extract MFCCs, which uses $M=40$ at each time point. Thus we represent each vowel sound by 40 MFCC curves.
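To make the two-step construction concrete, a minimal R sketch is given below; the inputs are assumed objects, the code follows the formulas above only up to normalisation and indexing conventions, and it is not the \texttt{ahocoder} implementation. \texttt{Spec} is a spectrogram with frequency bins in rows and time frames in columns, and \texttt{B} is the mel filter bank matrix with one filter per row:
\begin{verbatim}
hz_to_mel <- function(f) 2595 * log10(1 + f / 700)      # Hz -> mel mapping

mfccs <- function(Spec, B, M = 40) {
  MelSpec <- B %*% Spec                                 # mel-spectrogram
  apply(log(MelSpec), 2, function(col) {
    # inverse DFT of the log mel-spectrum; keep the first M coefficients
    Re(fft(col, inverse = TRUE))[1:M] / length(col)
  })                                                    # M x T matrix of MFCCs
}
\end{verbatim}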
Formants are a low-dimensional summary of the original sound which allow interpretation of the vocal tract position. MFCCs retain a lot of information about speech sounds and do not simplify the representation in an immediately interpretable way, but the model with MFCCs allows us to resynthesise \textit{bath} vowels along the /\ae/ to /\textipa{A}/ spectrum. MFCCs and formants therefore have different strengths and limitations for analysis, depending on the goal. In this paper we demonstrate classifiers using both representations.
Regardless of whether we work with vowel formants or MFCCs, we can view the chosen sound representation as a smooth multivariate curve over time, $X(t) \in \mathbb{R}^d$, where $t \in [0,1]$ is normalised time. In practice we assume $X(t)$ is observed with additive noise due to differences in recording devices and background noise in the recording environment.
\section{Data sources} \label{sec:data}
In this section we describe the two data sources used in this paper.
\subsection{North-South Class Vowels} \label{sec:nscv}
The North-South Class Vowels \citep[NSCV;][]{Koshy2020_shahin}
dataset is a collection of 400 speech recordings of the vowels /\ae/ and /\textipa{A}/ that distinguish stereotypical Northern and Southern accents in the \textit{bath} lexical set. The vowels were spoken by a group of 4 native English speakers (100 recordings per speaker).
It was collected in order to have a high-quality labelled dataset of the /\ae/ and /\textipa{A}/ vowel sounds in \textit{bath} words.
The NSCV dataset was collected with ethical approval from the Biomedical and Scientific Research Ethics Committee of the University of Warwick.
The speech recordings were collected in an experimental setup. The speakers were two male and two female adults between the ages of 18 and 55. In order to participate they were required to be native English speakers but were not required to be proficient in Southern and Northern accents.
To elicit the vowels, the speakers were given audio recordings as pronunciation guides, together with example rhyming words such as `cat' for the /\ae/ vowel and `father' for the /\textipa{A}/ vowel. They were allowed to practice using the two vowels in the list of words, before being recorded saying a list of words using both vowels. The words were \textit{class, grass, last, fast}, and \textit{pass}. Each word was repeated 5 times using each vowel, by each speaker. The speech was simultaneously recorded with two different microphones.
The purpose of this dataset is to demonstrate a method of training accent classification models. By using vowels as a proxy for accent, it allows us to train models to distinguish between Northern and Southern accents, to the extent that they differ by this vowel. Using two microphones and having the same speaker producing both vowels allows us to train models that are robust to microphone and speaker effects. Despite the small number of speakers in this dataset, we are still able to classify vowels with high accuracy and resynthesise vowels well. A limitation of the dataset is that the speakers were not required to be native speakers of both Northern and Southern accents or have any phonetic training.
\subsection{British National Corpus} \label{sec:bnc}
The audio edition of the British National Corpus (BNC) is a collection of recordings taken across the UK in the mid 1990s, now publicly available for research \citep{BNC}. A wide range of people had their speech recorded as they went about their daily activities, and the audio recordings were annotated (transcriptions of the conversations, with information about the speakers). From this corpus we analyse utterances of the following words from the \textit{bath} lexical set, which we call the ``class'' words: \textit{class, glass, grass, past, last, brass, blast, ask, cast, fast}, and \textit{pass}.
Among the sound segments in the BNC labelled as a ``class'' word, not all correspond to a true utterance of a ``class'' word by a British speaker, and some are not of good quality. Some sounds were removed from the dataset using the procedure described in Appendix~\ref{app:exploration}.
The resulting dataset contains 3852 recordings from 529 speakers in 124 locations across England, Scotland and Wales. Figure~\ref{fig:obsnum} shows the number of sounds and speakers at each location. Some speakers were recorded at multiple locations, but 94\% of them have all their recording locations within a 10 kilometre radius. 88\% of all speakers only have one recording location in this dataset.
\begin{figure}[t]
\centering
\includegraphics[width=.4\linewidth]{figs/obsnum.pdf}
\includegraphics[width=.4\linewidth]{figs/idnum.pdf}
\caption{Each bubble is centred at a location at which we have observations in the BNC, and its size corresponds to the number of recordings (left plot) and number of speakers (right plot) at each location.}
\label{fig:obsnum}
\end{figure}
This dataset captures a wide range of geographical locations and socio-economic characteristics, and speakers were recorded in their natural environment. It has, however, some limitations for our analysis. For example, we do not know the true origin of a speaker, so unless the metadata shows otherwise, we must assume that speakers' accents are representative of the location where they were recorded. There are very few speech recordings available from the North, especially Scotland. The timestamps used to identify word boundaries are often inaccurate, and the sound quality varies widely between recordings, due to background noise and the different recording devices used.
\subsection{Transforming sounds into data objects} \label{sec:preprocess-steps}
Each speech recording in the BNC and NSCV datasets was stored as a mono-channel 16 kHz \texttt{.wav} file. The raw formants were computed using the \texttt{wrassp} R package \citep{wrassp}. At each single time point the first four formants were computed, and this is done at 200 points per second. A sound of length 1 second is thus represented as a $200\times4$ matrix, where each column corresponds to one formant curve. For each vowel sound, raw MFCCs were extracted using the \texttt{ahocoder} software \citep{Erro2011, Erro2014}, which also computes them at 200 points per second. Hence a sound of length 1 second would be represented as a $200\times40$ matrix, where each column represents one MFCC curve.
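As an illustration, the raw formant tracks for a single recording can be obtained along the following lines; the file name is hypothetical and the call is a sketch rather than our exact extraction script:
\begin{verbatim}
library(wrassp)
fo <- forest("class_01.wav", toFile = FALSE)   # formant tracking with wrassp
F_raw <- fo$fm[, 1:4]                          # first four formant tracks
# raw MFCCs are extracted externally with the ahocoder tool (not shown here)
\end{verbatim}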
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/Preprocessing.pdf}
\caption{Summary of preprocessing steps.}
\label{fig:flowchart}
\end{figure}
We smooth the raw formants and raw MFCCs in order to remove unwanted variation due to noise, and to renormalise the length of the curves by evaluating each smoothed curve at a fixed number of time points \citep{Ramsay2005}.
Assuming a signal plus noise model on the raw formants and raw MFCCs, we smooth and resample them on an equidistant grid of length $T=40$. Since the raw formants exhibit large jumps that are physiologically implausible, we smooth them using robust loess \citep[R function \texttt{loess},][]{Cleveland1979} with smoothing parameter $l=0.4$ and using locally linear regression. The raw MFCCs are less rough, and we smooth them using cubic splines \citep[R function \texttt{smooth.spline},][]{R2020} with knots chosen at each point on the time grid and smoothing parameter chosen by cross-validation for each curve. We have used $T=40$ in this analysis because it captures the main features while not inflating the dataset too much. We do not model vowel duration, which also depends on other factors, such as speech context \citep{Clayards}. Other implementations and smoothing methods could be used here, such as the R package \texttt{mgcv} for smoothing MFCCs with cubic splines, and robust smoothing for formants using the scaled t family.
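For a single raw formant track and a single raw MFCC track, this smoothing and resampling step can be sketched as follows; the objects \texttt{f2\_raw} and \texttt{mfcc\_raw} are placeholders for one raw curve each:
\begin{verbatim}
t_raw  <- seq(0, 1, length.out = length(f2_raw))
t_grid <- seq(0, 1, length.out = 40)                    # common grid, T = 40

fit_f2 <- loess(f2_raw ~ t_raw, span = 0.4, degree = 1,
                family = "symmetric")                   # robust, locally linear
f2_sm  <- predict(fit_f2, newdata = data.frame(t_raw = t_grid))

fit_mf  <- smooth.spline(t_raw, mfcc_raw, all.knots = TRUE, cv = TRUE)
mfcc_sm <- predict(fit_mf, x = t_grid)$y                # cubic smoothing spline
\end{verbatim}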
Finally, we perform an alignment step to reduce misalignments between NSCV curves and BNC curves. This is necessary because the BNC speech recordings often have inaccurate timestamps and this can cause their vowels to be misaligned with the NSCV curves. Since we classify BNC vowels using models trained on NSCV curves, these misalignments can cause inaccuracies in the predictions. We consider the differences in relative timing of the vowel in the sound to be due to a random phase variation; alignment or registration of curves allows us to reduce the effect of this phase variation \citep{Ramsay2005}. We use the approach of \citet{Srivastava2011}, where the Fisher--Rao metric distance between two curves is minimised by applying a nonlinear warping function to one of the curves.
The first MFCC curve (MFCC 1) of each sound contains the volume dynamics. To align NSCV vowels, we first align all NSCV MFCC 1 curves together. These warping functions are then applied to the formant curves and other MFCC curves from the same vowels, since they come from the same underlying sounds. For each BNC vowel, we first align its MFCC 1 curve to the mean aligned NSCV MFCC 1 curve, and then use the obtained warping function to align all the other MFCC curves and formant curves from the same vowel. Alignment was performed using the R package \texttt{fdasrvf} \citep{fdasrvf2020}, and the preprocessing steps are summarised in Figure~\ref{fig:flowchart}.
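A sketch of the pairwise alignment for one BNC vowel is shown below; the function names follow the \texttt{fdasrvf} package interface and the object names are placeholders, so this is illustrative rather than our exact code:
\begin{verbatim}
library(fdasrvf)
al <- pair_align_functions(f1 = mfcc1_ref,   # mean aligned NSCV MFCC 1 curve
                           f2 = mfcc1_bnc,   # MFCC 1 curve of one BNC vowel
                           time = t_grid)
# al$gam is the estimated warping function; the same warping is then applied
# to the vowel's remaining MFCC curves and to its formant curves
\end{verbatim}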
\section{Classifying accents} \label{sec:classify}
In this section, we will present two models for classifying \textit{bath} vowels as Southern or Northern.
\subsection{Modeling formants} \label{sec:formant-model}
Our first task is to build a classifier to classify \textit{bath} vowels as Northern or Southern. The model uses the fact that the first two formants $\text{F}_1$ and $\text{F}_2$ are known to predominantly differentiate vowels, and higher formants do not play as significant a role in discriminating them \citep{Adank, Johnson2005}. It has been suggested that the entire formant trajectories are informative even for stationary vowels like the \textit{bath} vowels, and that vowels should not be considered as static points in the formant space \citep{Johnson2005} (see also the discussion in Section \ref{sec:discussion}). This suggests the use of formant curves as functional covariates when modelling the vowel sounds.
Since the dynamic changes in the formant curves are not drastic, we do not believe there are time-localised effects of the formants, so we use the entire formant curve as a covariate with a roughness penalty. Due to the nested structure of the NSCV corpus with 100 speech recordings from each speaker, we also include a random effect term to account for variation between speakers.
Now we can propose the following functional logistic regression model to classify accents:
\begin{equation}
\mathrm{logit}(p_{ij}) = \beta_0 + \int_{0}^{1}\text{F}_{2ij}(t)\beta_1(t)dt + \gamma_j, \label{eq:loggam}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=4in]{figs/f2.pdf}
\caption{Smoothed and aligned $\text{F}_2$ formant curves for the NSCV vowels. Each curve corresponds to one vowel sound.}
\label{fig:f2}
\end{figure}
where $p_{ij}$ is the probability of sound $i$ from speaker $j$ being Southern, $\text{F}_{2ij}(t)$ is the value of the $\text{F}_2$ curve at time $t$ for sound $i$ from speaker $j$, and $\gamma_j \sim N(0, \sigma_s^2)$ is a random effect for speaker $j$. The functional covariate contributes to the predictor through a linear functional term. The integral is from 0 to 1 since we have normalised the length of all sounds during preprocessing. The function $\beta_1(t)$ is represented with a cubic spline with knots at each time point on the grid, and its ``wiggliness'' is controlled by penalising its second derivative. Model selection was done by comparing the adjusted AIC \citep{Wood} to decide which other terms should be included in the model. Further details from the model selection procedure are given in Appendix \ref{app:modelselection}, where we also consider simpler non-functional models. The model was fitted using the \texttt{mgcv} package in R \citep{Wood2011}.
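A minimal sketch of how such a fit can be set up in \texttt{mgcv}, using its linear functional terms, is given below; the object names (\texttt{F2mat}, the $400\times 40$ matrix of smoothed, aligned $\text{F}_2$ curves; \texttt{south}, the 0/1 label; \texttt{speaker}, a factor of speaker ids) are placeholders and the call is illustrative rather than our exact model-fitting code. The quadrature weights $1/T$ approximate the $dt$ in the integral:
\begin{verbatim}
library(mgcv)
Tmat <- matrix(seq(0, 1, length.out = ncol(F2mat)),
               nrow = nrow(F2mat), ncol = ncol(F2mat), byrow = TRUE)
F2w  <- F2mat / ncol(F2mat)          # covariate matrix times quadrature weights
fit  <- gam(south ~ s(Tmat, by = F2w, bs = "cr", k = 20) + s(speaker, bs = "re"),
            family = binomial, method = "REML",
            data = list(south = south, Tmat = Tmat, F2w = F2w, speaker = speaker))
plot(fit, select = 1)                # estimated coefficient curve beta_1(t)
\end{verbatim}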
The fitted coefficient curve $\hat \beta_1(t)$, shown in Figure~\ref{fig:betahatt}, reveals that the middle section of the $\text{F}_2$ curve is important in distinguishing the vowels. A lower $\text{F}_2$ curve in this region indicates a Northern /\ae/ vowel. From a speech production perspective, this corresponds to the Northern vowel being more ``front'', which indicates that the highest point of the tongue is closer to the front of the mouth, compared to the Southern vowel.
The point estimate for $\beta_0$ is 328.0 (p-value $= 0.267$, 95\% CI $[-250.85, 906.87]$). The variance component explained by the speaker random effects is $\hat{\sigma}_s^2 = 0.006$ (p-value $= 0.776$).
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/betahat.pdf}
\caption{$\hat{\beta_{1}}(t)$ shows that a lower $\text{F}_2$ region towards the middle of the sound indicates a more Northern vowel sound. The dashed lines are 95\% pointwise confidence intervals of the coefficient curve.}
\label{fig:betahatt}
\end{figure}
This model assigns a ``probability of being Southern'' to a given vowel sound, by first aligning the sound to the mean NSCV sound using MFCC 1, and then plugging its formants into \eqref{eq:loggam}. We classify a vowel sound as Southern if its predicted probability of being Southern is higher than $0.5$.
We can estimate the classification accuracy of this model through cross-validation. The model was cross-validated by training it on 3 speakers and testing on the fourth speaker's vowels, and repeating this 4 times by holding out each speaker in the dataset. Using a random split of the data instead would lead to overestimated accuracy, because different utterances by the same speaker cannot be considered independent. The cross-validated accuracy is 96.75\%, and the corresponding confusion matrix is shown in Table~\ref{table:conf}. We can also compare the performance of this model for different classification thresholds, using the ROC curve in Figure~\ref{fig:roc}.
\begin{figure}[h]
\centering
\includegraphics{figs/flm-roc.pdf}
\caption{ROC curve for the functional logistic regression model. The dotted line corresponds to random guessing and the red dot corresponds to using a threshold of 0.5 to classify vowels.}
\label{fig:roc}
\end{figure}
\begin{table}[h]
\caption{Cross-validated confusion matrix for the functional logistic regression.}
\label{table:conf}
\centering
\begin{tabular}{rcrr}
\toprule
{} &\phantom{} &\multicolumn{2}{c}{\textbf{Truth}}\\
\cmidrule{3-4}
&& North & South \\
\textbf{Prediction} \\
North && 196 & 4 \\
South && 4 & 196\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Modeling MFCCs} \label{sec:mfcc-model}
We will now present another approach to classifying vowel sounds, which uses the MFCC curves obtained from each vowel recording. We have 40 smoothed MFCC curves for each sound.
Unlike with formants, we do not have prior knowledge about which curves contain information about the vowel quality. Additionally, since MFCC curves contain information about different parts of the frequency spectrum, they are not independent and the co-variation between curves is important. For example, setting an MFCC curve (or a region of the curve) to a constant value distorts the resulting sound. Hence a multivariate functional regression approach with $\ell_1$ penalty to remove certain curves from the model would not be appropriate, and we need to incorporate information from potentially all the MFCC curves in our model. The problem of concurvity between MFCC curves can also destabilise the resulting coefficient curve estimates in such an approach.
Interpreting the shapes of the curves is also not as useful here since MFCC trajectories do not have a physical interpretation as formants do. We are more interested in the model's ability to resynthesise vowels by capturing as much relevant information about vowel quality as possible. Hence we use functional principal components analysis to capture the co-variation of the MFCC curves. This step essentially generates new features by reparametrising the MFCC curves, which we can then use to fit the classification model.
We select the most informative functional principal components to be in the model through $\ell_1$ penalisation.
\subsubsection{Functional Principal Component Analysis}
Functional principal component analysis \citep[FPCA;][]{Ramsay2005} is an unsupervised learning technique which identifies the different modes of variation in a set of observed smooth curves $\{X_i: [0,1] \rightarrow \mathbb{R},\, i = 1 ,\ldots, n\}$. It is very similar to standard principal component analysis, except that the variables are curves instead of scalar features, and each functional principal component (FPC) is also a curve instead of a vector.
Assuming that the curves $\{X_i\}$ are centred, the $k$th FPC is a smooth curve $\varphi_k:[0,1] \rightarrow \mathbb{R}$ which maximises
\[
\frac{1}{n} \sum_{i=1}^{n} \left( \int \varphi_k(t) X_i(t) dt \right) ^2,
\]
subject to $\int \varphi_k(t)^2 dt = 1$ and $\int\varphi_k(t)\varphi_j(t)dt = 0$ for all $j < k$; there is no constraint for $k=1$. The functional principal component score (FPC score) of curve $i$ with respect to principal component $\varphi_k$ is $s_{ik} = \int \varphi_k(t)X_i(t)dt$.
In multivariate FPCA, each observation is a curve in $\mathbb{R}^M$, and the set of observations is $\{ {\boldsymbol X}_i=(X_i^{(1)}, X_i^{(2)}, \ldots, X_i^{(M)}): [0,1] \rightarrow \mathbb{R}^M,\, i = 1 ,\ldots, n\}$. Amongst the existing variants of multivariate FPCA \citep{chiou2014multivariate,Happ2018}, we use the following one:
assuming that the curves $\{\boldsymbol X_i\}$ are centred, the $k$th FPC is a smooth multivariate curve, defined as ${\boldsymbol \varphi}_k = (\varphi_k^{(1)}, \varphi_k^{(2)}, \ldots, \varphi_k^{(M)}):[0,1] \rightarrow \mathbb{R}^M$ which maximises
\[
\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{M} \left( \int \varphi_k^{(j)}(t) X_i^{(j)}(t) dt\right)^2
\]
subject to $\sum_{j=1}^M \int [ \varphi_k^{(j)}(t) ]^2 dt = 1$ and $\sum_{j=1}^{M} \int\varphi_k^{(j)}(t)\varphi_l^{(j)}(t)dt = 0$ for all $l < k$. The $k$-th FPC score of ${\boldsymbol X}_i$ is defined as $s_{ik} = \sum_{j=1}^M \int \varphi_k^{(j)}(t) X_i^{(j)}(t) dt$.
In our case, the curves $\{ {\boldsymbol X}_i\}$ are the MFCC curves with $M=40$. Each curve $\boldsymbol{X}_i$ is discretised on a grid of $T$ equally spaced time points, yielding a $T \times M$ matrix, which is then flattened by stacking its rows into a vector in $\mathbb{R}^{MT}$. The whole dataset is then represented as an $n \times MT$ matrix, which contains observations as rows. The (discretised) FPCs and their scores can therefore be directly computed using a standard implementation of (non-functional) PCA, such as \texttt{prcomp} in R \citep{R2020}.
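For example, with the smoothed, aligned MFCC curves of the $n=400$ NSCV vowels stored in a list of $T \times M$ matrices (placeholder name \texttt{mfcc\_list}), the FPCs and their scores can be computed as follows:
\begin{verbatim}
X   <- t(sapply(mfcc_list, function(m) as.vector(t(m))))  # n x (M*T), rows stacked
pca <- prcomp(X, center = TRUE, scale. = FALSE)
scores <- pca$x                     # FPC scores s_ik (one row per sound)
fpcs   <- pca$rotation              # columns are the discretised FPCs phi_k
\end{verbatim}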
Before performing the FPCA we centre each MFCC 1 curve at zero, because the average level of MFCC 1 mainly contains differences in the overall volume of the sound, which is influenced by factors other than accent. Centring the curve at zero retains the volume dynamics in the vowel while normalising the overall volume between sounds. Since there are 400 observations in the NSCV training data, we can decompose the MFCC curves into (at most) 400 functional principal components. The first 25 eigenvalues of the FPCs obtained are plotted in Figure~\ref{fig:screeplot}.
\begin{figure}[h]
\centering
\includegraphics[width=4.5in]{figs/mfcc-screeplot.pdf}
\caption{First 25 eigenvalues of the functional principal components of the MFCCs.}
\label{fig:screeplot}
\end{figure}
\subsubsection{$\ell_1$-Penalised Logistic Regression}
$\ell_1$-penalised logistic regression \citep[PLR;][]{Hastie2017} can be used for binary classification problems when we have many covariates (here we have $p=400$ FPC scores that we could include in the model, corresponding to a reparametrisation of the MFCC curves without any loss of information). Through the penalisation and model fitting procedure, a smaller subset of covariates is chosen in the final model.
The model is the same as for the usual logistic regression: if $Y$ is a Bernoulli random variable and $\boldsymbol{X} \in \mathbb{R}^p$ is its covariate vector, the model is
\[
\mathrm{logit}( \mathbb{P}(Y = 1 | \boldsymbol{X} = \boldsymbol{x}) ) = \beta_0 + \boldsymbol{\beta}^\mathsf{T} \boldsymbol{x},
\]
but it is fitted with an added $\ell_1$ penalty on the regression coefficients to deal with high-dimensionality, which encourages sparsity and yields a parsimonious model.
In our setting, $y_i = 1$ if sound $i$ is Southern and $y_i=0$ if it is Northern, and $\boldsymbol{x}_i \in \mathbb{R}^{400}$ is the vector of its 400 FPC scores; PLR is fitted by solving
\begin{equation}
\label{eq:PLR}
(\hat{\beta_0}, \hat{\boldsymbol \beta}) = \arg \max_{\beta_0, {\boldsymbol \beta}} \sum_{i=1}^n \left(y_i (\beta_0 + {\boldsymbol \beta}^\mathsf{T} {\boldsymbol x}_i) - \log(1 + e^{\beta_0+ {\boldsymbol \beta}^\mathsf{T} {\boldsymbol x}_i})\right) - \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert,
\end{equation}
where $\lambda \geq 0$ is a penalty weight.
Notice that the first term in \eqref{eq:PLR} is the usual log-likelihood, and the second term is an $\ell_1$ penalty term. The penalty $\lambda$ is chosen by 10-fold cross-validation.
A new sound with FPC scores vector $\boldsymbol{x_*}$ is assigned a ``probability of being Southern'' of $\mathrm{ilogit}( \hat \beta_0 + \boldsymbol{\hat \beta}^\mathsf{T} \boldsymbol{x_*} )$, where
$\mathrm{ilogit}(\cdot)$ is the inverse logit function. We classify the sound as Southern if $\mathrm{ilogit}( \hat \beta_0 + \boldsymbol{\hat \beta}^\mathsf{T} \boldsymbol{x_*} ) \geq 0.5$.
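For illustration, the PLR fit can be carried out with the \texttt{glmnet} package, named here only as one possible implementation; \texttt{scores} denotes the $400\times 400$ matrix of FPC scores, \texttt{south} the 0/1 label, and \texttt{scores\_new} the scores of new sounds projected onto the same FPCs:
\begin{verbatim}
library(glmnet)
cvfit <- cv.glmnet(scores, south, family = "binomial",
                   alpha = 1, nfolds = 10)             # lasso-penalised logistic
beta_hat <- coef(cvfit, s = "lambda.min")              # sparse coefficient vector
p_new <- predict(cvfit, newx = scores_new,
                 s = "lambda.min", type = "response")  # probability of Southern
label <- ifelse(p_new >= 0.5, "South", "North")
\end{verbatim}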
We can estimate the accuracy of the model by cross-validating using individual speakers as folds, as in the functional linear model of Section~\ref{sec:formant-model}. Within each training set, we first perform the FPCA to obtain the FPCs and their scores. Then we cross-validate the penalised logistic regression model to find the optimal penalty $\lambda$, and retrain on the whole training set with this $\lambda$. Finally, we project the test speaker's sounds onto the FPCs from the training set to obtain the test FPC scores, and use them to classify the vowel of each sound using the predicted probabilities from the trained model. This process is repeated holding out each speaker in turn. The cross-validated accuracy of this model is 95.25\%. The confusion matrix is shown in Table~\ref{table:plrconf}, and the ROC curve is shown in Figure~\ref{fig:plr_roc}.
To fit the full model, we cross-validate on the entire dataset to choose the best $\lambda$, and then refit on the entire dataset using this penalty. The entries of $\hat{\boldsymbol \beta}$ are essentially weights for the corresponding FPCs. By identifying the FPC scores which have nonzero coefficients, we can visualise the weighted linear combination of the corresponding FPCs which distinguishes Northern and Southern vowels. In total 10 FPCs had nonzero weights, and all of the chosen FPCs were within the first 20. A plot of the first 25 coefficient values is given in Figure~\ref{fig:plr_coefs}.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/plr-roc.pdf}
\caption{ROC curve for the MFCC model using penalised logistic regression classifier.}
\label{fig:plr_roc}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/plrmodel-25coefs.pdf}
\caption{The first 25 entries of $\hat{\boldsymbol \beta}$ maximising \eqref{eq:PLR}. Nonzero entries are shown in red. All later entries are zero and are not shown.}
\label{fig:plr_coefs}
\end{figure}
\begin{table}[h]
\caption{Cross-validated confusion matrix for the penalised logistic regression classifier.}
\label{table:plrconf}
\centering
\begin{tabular}{rcrr}
\toprule
{} &\phantom{} &\multicolumn{2}{c}{\textbf{Truth}}\\
\cmidrule{3-4} && North & South \\
\textbf{Prediction} \\
North && 189 & 8 \\
South && 11 & 192 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Resynthesising vowel sounds}
\label{sec:resynthesising}
The combined effect of the functional principal components that are predictive of accent is given by the function
\begin{equation}
\label{eq:mfcc_making_more_southern}
\sum_{k=1}^{400} \hat{\beta}_{k} \hat {\boldsymbol \varphi_k}: [0,1] \to \mathbb{R}^{40}.
\end{equation}
Discretising this function on an equispaced grid of $T$ points yields a $T \times 40$ matrix, which can be visualised (Figure~\ref{fig:contrib}), or interpreted as a set of MFCC curves (Figure~\ref{fig:contrib_first_9}).
This MFCC matrix captures the difference between the /\ae/ and /\textipa{A}/ vowels. Since MFCCs can be used to synthesise speech sounds, we can now make a given \textit{bath} vowel sound more Southern or Northern, through the following procedure:
We first extract the MFCCs for the entire utterance of a \textit{bath} word, as a $T \times 40$ matrix where $T$ is determined by the length of the sound. With manually identified timestamps we find the $T_v$ rows of this matrix which correspond to the vowel in the word. We align MFCC 1 of this vowel to the mean NSCV MFCC 1 curve, to obtain the optimal warping function for the sound. The MFCC matrix in Figure~\ref{fig:contrib} is `unwarped' using the inverse of this warping function, resampled at $T_v$ equidistant time points, and padded with $T - T_v$ rows of zeroes corresponding to the rest of the sound's MFCCs (which we do not change). We can then add multiples of this $T\times40$ matrix to the original sound's MFCC matrix and synthesise the resulting sounds using \texttt{ahodecoder} \citep{Erro2014}. Adding positive multiples of the matrix makes the vowel sound more Southern, while subtracting multiples makes it sound more Northern. In the supplementary material we provide audio files with examples of this: \texttt{blast-StoN.wav} contains the word ``blast'' uttered in a Southern accent and perturbed towards a Northern accent, and \texttt{class-NtoS.wav} contains the word ``class'' uttered in a Northern accent and perturbed towards a Southern accent. Both of these original vowels were new recordings, and not from the NSCV corpus.
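The perturbation step itself reduces to simple matrix arithmetic, sketched below with placeholder names: \texttt{mfcc\_word} is the $T\times 40$ MFCC matrix of the whole word, rows \texttt{v1} to \texttt{v2} cover the vowel, and \texttt{P} is the unwarped, resampled perturbation matrix described above:
\begin{verbatim}
shift_accent <- function(mfcc_word, P, v1, v2, delta) {
  out <- mfcc_word
  out[v1:v2, ] <- out[v1:v2, ] + delta * P   # delta > 0: more Southern,
  out                                        # delta < 0: more Northern
}
# the returned matrix is then written out and synthesised with ahodecoder
\end{verbatim}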
\begin{figure}[h]
\centering
\includegraphics[width=4.5in]{figs/perturb.pdf}
\caption{This image shows the
MFCCs of \eqref{eq:mfcc_making_more_southern} which make a vowel sound more Southern. Each row of the image is an MFCC curve.}
\label{fig:contrib}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/perturb-first9-cv.pdf}
\caption{The first 9 MFCCs from model \eqref{eq:mfcc_making_more_southern}, which correspond to the bottom 9 rows of the matrix in Figure~\ref{fig:contrib}, plotted sequentially. We can see that MFCC 3 and 5 have large contributions. The grey lines are the MFCC curves obtained in each cross-validation fold and thicker black lines are from the final model.}
\label{fig:contrib_first_9}
\end{figure}
\section{Modelling geographic variation} \label{sec:maps}
In this section we demonstrate an approach for visualising the trap--bath split by combining data from the BNC with the trained accent classifiers described in Sections~\ref{sec:formant-model} and~\ref{sec:mfcc-model}. For each BNC speaker we predict the probability of their vowel sound being Southern (using in turn the formant model and the MFCC model), and then smooth the predicted probabilities spatially using a soap film smoother.
The BNC \textit{bath} vowels contain more variation than the NSCV dataset. This is partly because of more natural variation in conversational speech, as well as other factors like poor quality of some recordings and background noise. The BNC recordings also contain whole words and not only the vowel portion of the utterance. The timestamps for word boundaries are often inaccurate and many sounds are either a partial word, or contain parts of other words or speech from other speakers. It is hard to automatically detect the vowel portions within these recordings. We address this issue through the alignment step described in Section \ref{sec:preprocessing} to align each sound to the NSCV using the mean NSCV MFCC 1 curve.
A single representative sound can be constructed for each speaker by taking an average of these aligned formant and MFCC curves from the speaker's utterances. By resynthesising the sound of the average MFCC curves, we can hear that it retains the quality of a \textit{bath} vowel, so we use these average MFCCs and formants as representative of each speaker's vowel sound. For each speaker we obtain two predicted probabilities of their accent being Southern (one based on the formants, and one on the MFCCs), using models of Sections~\ref{sec:formant-model} and \ref{sec:mfcc-model}. Notice that for each speaker, plugging this average sound's formants (MFCCs) into the trained models of Sections~\ref{sec:formant-model} (Section~\ref{sec:mfcc-model}) yields the same predicted logit probability as if we averaged the logit probabilities from each sound's aligned formants (aligned MFCCs). The averaging step used to get speaker-specific probabilities ensures that the model is not unduly influenced by individual speakers who have many recordings at one location, while also reducing the predicted probability uncertainties. Where a speaker has recordings at multiple locations, we attribute their average sound to the location with most recordings.
At each location $(\texttt{lon}, \texttt{lat})$ in Great Britain, we denote by $f(\texttt{lon}, \texttt{lat})$ the logit of the expected probability of a randomly chosen person's accent being Southern.
We will estimate this surface using a spatial Beta regression model:
\begin{eqnarray}
\label{eq:betaReg}
p_{ij} &\stackrel{\text{iid}}{\sim}& \text{Beta}(\mu_i \nu, \nu (1-\mu_i)), \quad j \in \{1, \ldots, n_i\}\\
\mathrm{logit}(\mu_i) &=& f(\texttt{lon}_i, \texttt{lat}_i), \nonumber
\end{eqnarray}
where $p_{ij} \in [0,1]$ is the predicted probability of the $j$-th speaker's accent at location $(\texttt{lon}_i, \texttt{lat}_i)$ being Southern, $j=1,\ldots, n_i$.
The surface $f$ is estimated using a soap film smoother within the geographic boundary of Great Britain.
A single value of $\nu > 0$ is estimated for all observations, as in GLMs. Notice that $\mathrm{ilogit}(f(\texttt{lon}_i,\texttt{lat}_i)) = \mu_i = \mathbb{E}(p_{ij}) \in [0,1]$ represents the expected probability of the accent of a randomly chosen person being Southern at location $(\texttt{lon}_i, \texttt{lat}_i)$.
One may instead consider fitting a linear model directly on the estimated linear predictor scores obtained from the formant or MFCC models. This linear approach would not be robust to the estimated linear predictor taking large values (which is the case with our data), even though the associated probabilities are essentially equal to one (or zero). Our approach alleviates this by smoothing predictions on the probability scale which makes it less influenced by outliers. Link functions other than the logit could also be used in alternative approaches.
Let us now recall the soap film smoother.
The soap film smoother \citep{Wood2008} is a nonparametric solution to spatial smoothing problems, which avoids smoothing across boundaries of a bounded non-convex spatial domain.
We observe data points $\{(x_i, y_i, z_i), i=1,\ldots,n\}$, where $z_i$ are the responses with random noise and $\{(x_i, y_i)\}$ lie in a bounded region $\Omega \subset \mathbb{R}^2$. The objective is to find the function $f:\Omega \rightarrow \mathbb{R}$ which minimises
\[
\sum_{i=1}^n (z_i - f(x_i, y_i))^2 + \lambda \int_{\Omega} \left( \frac{\partial^2f}{\partial x^2} + \frac{\partial^2f}{\partial y^2}\right)^2 dx dy.
\]
The smoothing parameter $\lambda$ is chosen through cross-validation. The soap film smoother is implemented in the R package \texttt{mgcv} \citep{Wood2011}.
In our model \eqref{eq:betaReg}, the predicted Southern accent probabilities $\{p_{ij}\}$ of individual speakers are observations at different locations $\{(\texttt{lon}_i, \texttt{lat}_i)\}$ in Great Britain, and we use the soap film smoother to construct a smooth surface $f(\cdot, \cdot)$ to account for the geographic variation. We can compare the results using accent predictions from the two classification models proposed in the previous section.
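A compact illustrative call for model \eqref{eq:betaReg} in \texttt{mgcv} is sketched below; \texttt{bnd} (boundary loops of the Great Britain coastline), \texttt{knots} (interior knot locations) and \texttt{dat} (one row per speaker, with predicted probability \texttt{p} and recording location) are placeholder objects rather than our exact inputs:
\begin{verbatim}
library(mgcv)
fit_map <- gam(p ~ s(lon, lat, bs = "so", xt = list(bnd = bnd)),
               knots = knots, family = betar(link = "logit"), data = dat)
grid$mu_hat <- predict(fit_map, newdata = grid, type = "response")  # ilogit(f)
\end{verbatim}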
Plots of the fitted response surfaces $\hat \mu(\texttt{lon}, \texttt{lat}) = \mathrm{ilogit}( \hat f(\texttt{lon}, \texttt{lat}) )$ using the formant and the MFCC classification models are given in Figure~\ref{fig:accent_maps}.
Both maps seem to suggest a North against Southeast split, similar to the isogloss map in Figure~\ref{fig:isogloss}. The predicted probabilities are usually not close to 0 or 1, because the BNC contains more variation than we have in the NSCV training data, due for instance to the variation in recording environments, and since not all speakers have a stereotypical Northern or Southern accent.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.49\textwidth}
\centering
\includegraphics[trim=50 0 20 0, clip=TRUE, width=0.9\textwidth]{figs/FLR-map.pdf}
\caption{Map using formants.}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\centering
\includegraphics[trim=50 0 20 0, clip=TRUE, width=0.9\textwidth]{figs/PLR-map.pdf}
\caption{Map using MFCCs.}
\end{subfigure}
\caption{Smoothed predicted probabilities of a vowel sound being Southern, when using the two models of Section~\ref{sec:classify}. Black crosses are recording locations.}
\label{fig:accent_maps}
\end{figure}
To visualise the uncertainty associated with the contours in Figure~\ref{fig:accent_maps}, Figure~\ref{fig:accent_SE_maps} shows the approximate 95\% pointwise confidence intervals for $\mu$. These are computed as $[\mathrm{ilogit}( \hat{f} - 1.96\times \hat{\texttt{se}}(\hat{f})), \mathrm{ilogit}(\hat{f} + 1.96\times \hat{\texttt{se}}(\hat{f}))]$, based on a normal approximation on the link function scale. Notice that the uncertainty for both models is high in Northern England, Scotland and Wales, due to fewer observations in those regions. However, the North-Southeast variation is consistent and Greater London emerges as a region with significantly Southern accents.
\begin{figure}[p]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=0.4\textheight]{figs/FLR-se-maps.pdf}
\caption{Pointwise confidence intervals for $\mu(\cdot, \cdot)$ for the formant model.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=0.4\textheight]{figs/PLR-se-maps.pdf}
\caption{Pointwise confidence intervals for $\mu(\cdot, \cdot)$ for the MFCC model.}
\end{subfigure}
\caption{Contours of the spatially smoothed probabilities, showing the lower and upper bounds of a 95\% pointwise confidence interval for $\mu(\cdot, \cdot)$, constructed using a pointwise Normal approximation on the logit scale.}
\label{fig:accent_SE_maps}
\end{figure}
\section{Discussion} \label{sec:discussion}
We first demonstrated two principled and interpretable approaches to modelling accent variation in speech sounds, using techniques from functional data analysis and generalised additive models. We presented a model that uses formant trajectories to classify \textit{bath} vowel sounds as Northern or Southern based on their similarity to /\ae/ and /\textipa{A}/ vowels, trained on a set of labelled vowels collected in an experimental setup. The same audio dataset was also used in a different model using MFCC curves, by using functional principal components analysis to generate new features from the MFCC curves, and then classifying the sounds using $\ell_1$-penalised logistic regression on the FPC scores. We showed in Section~\ref{sec:resynthesising} how this MFCC model allowed us to resynthesise vowel sounds along a spectrum between /\ae/ and /\textipa{A}/.
These formant and MFCC models were used to predict the probability of a Southern accent for vowels from the audio BNC \citep{BNC}, our second dataset. The predictions were smoothed spatially to visualise the trap--bath split in England, Wales and Scotland, using a spatial beta regression with a soap film smoother. The resulting maps show a North versus South-east difference in accents which we can directly attribute to the variation in the /\ae/ or /\textipa{A}/ vowel quality of BNC sounds.
This analysis demonstrates how we can combine information from a labelled audio dataset such as the NSCV dataset with the unlabelled BNC dataset. Despite the small sample of 4 speakers in the NSCV dataset, it allowed vowel classification models to be trained. From cross-validation it seems that these classification models are highly accurate, a property that we believe would hold under recording conditions (such as background noise level) similar to those of the training data.
However, the classifiers can only distinguish between Northern and Southern BNC \textit{bath} vowels to the extent that they differ by the /\ae/ and /\textipa{A}/ vowels captured in the NSCV training dataset. To produce a more valid characterisation of accent variation, one could use a labelled dataset of speech recordings from a larger corpus of speakers who can produce both accents accurately. Another limitation of this analysis is that we cannot verify the assumption of smooth spatial accent variation since we have no accent labels for BNC sounds. An extension of this work could involve augmenting the BNC by having human listeners manually classify a random sample of BNC vowels as Northern or Southern. These labels could then be used to train accent classifiers directly on BNC vowels, and also to validate the assumption of smooth spatial accent variation.
In phonetics, an ongoing research question has been whether dynamic information about formants is necessary for differentiating between vowels, or whether formant values at the midpoint of the vowel or sampled at fewer time points are sufficient \citep{watson1999, strange1983}. In Appendix \ref{app:modelselection} we have compared the functional formant model \eqref{eq:loggam} to simpler models using $\text{F}_1$ and $\text{F}_2$ formants measured at the middle of the vowel, or at 25\%, 50\% and 75\% of the vowel. Even though the \textit{bath} vowel is a monophthong which does not contain significant vowel transitions, we see that the functional models show slight increases in cross-validated classification accuracy as compared to their non-functional versions, and sampling formants at more time points does not hinder classification. This is due to the regularisation of smooth terms in the generalised additive models used. The functional modelling approach also does not require specific time points for sampling to be chosen in advance, so it can be easily used with other vowels which have different formant trajectories. The MFCC model is another higher-dimensional approach to modelling variation in vowels, as it uses information from more frequency bands. It has a slightly lower accuracy than the functional formant model, but its ability to resynthesise vowel sounds may be desirable for some applications.
We also observe some differences between the predictions of the formant model and the MFCC model. These models agree on vowel classification about 94\% of the time for the NSCV vowels and 73\% of the time for BNC vowels. The disagreements occur when the vowel quality is not clear, for example when formant curves are close to the boundary between the two vowels, or in the BNC, when there is considerable background noise and more variation in conversational speech. Nevertheless, the resulting spatial maps (Figure~\ref{fig:accent_maps} and Figure \ref{fig:accent_SE_maps}) show many similarities. Another way to compare the two models is to resynthesise vowels along a spectrum using the MFCC model, and classify these new vowels using the formant model. Changing the ``Southernness'' of a vowel with the MFCC model does change the corresponding prediction from the formant model, suggesting that similar information about vowel quality is being used by both models. We have added more detail on comparing the models in Appendix \ref{app:comparing-models}.
It is also possible to combine both MFCCs and formants from the same sound in one model. This can be done similarly to the MFCC model \eqref{eq:PLR}, by appending the matrix of formant curves to the matrix of MFCC curves from each sound, and performing FPCA and $\ell_1$-penalised logistic regression as before. The disadvantage of this model is that we can neither interpret it from a speech production perspective (since it contains MFCCs which do not have a physical interpretation), nor use it to resynthesise vowels (since we cannot resynthesise vowels using formants). We have nevertheless trained this model, which has an accuracy of 92.75\%, and the results are in Appendix \ref{app:combined-model}.
The functional approach to modelling accent variation which we have demonstrated can easily be used with a larger corpus with more words or speakers. It can also be applied to other vowels, including diphthongs (vowels which contain a transition, such as in ``house'') to visualise other accent variation in Great Britain, or other geographic regions.
\section*{Supplementary Material}
\textbf{Data and R code:} The R code and preprocessed data used to generate these results can be obtained online at https://doi.org/10.5281/zenodo.4003815.
\noindent
\textbf{Other outputs:} \texttt{nscv.gif} shows animated formant trajectories of the NSCV data. Resynthesised vowels are in \texttt{class-NtoS.wav} (perturbing the vowel in ``class'' from /\ae/ towards /\textipa{A}/), and \texttt{blast-StoN.wav} (perturbing the vowel in ``blast'' from /\textipa{A}/ towards /\ae/).
\section*{Acknowledgements}
We thank the Associate Editor and three referees for their comments that helped to improve the quality of the paper.
\bibliographystyle{agsm}
\citestyle{agsm}
|
\section{Introduction}
A hypothetical pseudoscalar particle called the axion is predicted by the theory
proposed to solve the CP-violation problem in QCD. The most
important parameter determining the axion properties is the energy scale $f_a$
of the so-called U(1) Peccei-Quinn symmetry breaking. It determines both the
axion mass and the strength of its coupling to fermions and gauge bosons
including photons. However, in spite of numerous direct experiments, axions
have not been discovered so far. Meanwhile, these experiments, together with
astrophysical and cosmological limitations, leave a rather narrow band for the
permissible parameters of the invisible axion (e.g.
$10^{-6} eV \leqslant m_a \leqslant 10^{-2} eV$~\citep{ref01,ref02}), which is
also a well-motivated cold dark matter candidate in this mass region
\citep{ref01,ref02}.
A whole family of axion-like particles (ALPs) may exist alongside axions,
with a Lagrangian structure similar to that of the Peccei-Quinn axion but
also with their own distinctive features. The main distinction is that, if
ALPs exist, the relation between their mass and their constant of coupling to
photons must be strongly relaxed compared to that of axions. It should also be
mentioned that the phenomenon of photon-ALP mixing in
the presence of an electromagnetic field not only leads to the classic
neutrino-like photon-ALP oscillations, but also changes the
polarization state of the photons (the $a \gamma \gamma$ coupling acts like a
polarimeter \citep{ref03}) propagating in sufficiently strong magnetic fields. It
is generally assumed that there are light ALPs coupled only to two photons,
although realistic models of ALPs with couplings both to photons and to
matter are not excluded \citep{ref04}. In any case, under certain conditions
they may be considered a well-motivated cold dark matter candidate
\citep{ref01,ref02}, just like axions.
It is interesting to note that photon-ALP mixing in the magnetic fields of
different astrophysical objects, including active galaxies, clusters of
galaxies, intergalactic space and the Milky Way, may be the cause of
remarkable phenomena such as the dimming of stellar luminosity (e.g. supernovae in the
extragalactic magnetic field \citep{ref06,ref07}) and ``light shining through
a wall'' (e.g. light from very distant objects travelling through the Universe
\citep{ref03,ref05}). In the former case the luminosity of an astrophysical
object is dimmed because a fraction of its photons is converted into axions in the
object's magnetic field. In the latter case photons produced by the object are
initially converted into axions in the object's magnetic field, and then, after
passing some distance (the width of the ``wall''), are converted back into
photons in another magnetic field (e.g. in the Milky Way), thus emulating an
effective growth of the photon mean free path in the astrophysical medium
\citep{ref08,ref09}.
For the sake of simplicity let us hereinafter refer to all such particles as
axions if not stated otherwise.
In the present paper we consider the possible existence of the axion mechanism
of Sun luminosity variations\footnote{Let us point out that the axion mechanism of Sun
luminosity used for estimating the axion mass was described for the first time
in 1978
by \cite{ref10}.} based on the ``light shining through a wall'' effect. More
precisely, we attempt to explain the Sun luminosity variations by a ``light
shining through a wall'' effect in which the photons born mainly in the solar
core are first converted into axions via the Primakoff effect \citep{ref11}
in its magnetic field, and are then converted back into photons after passing
the solar radiative zone and reaching the magnetic field of the overshoot
tachocline. We estimate this magnetic field within the framework of the
Ettingshausen-Nernst effect. In addition, we obtain consistent
estimates for the axion mass ($m_a$) and the axion coupling constant to photons
($g_{a \gamma}$), based on this mechanism, and verify their values against the
axion model results and the known experiments including CAST, ADMX, RBF.
\section{Photon-axion conversion and the case of maximal mixing}
Let us recall some results of the photon-axion oscillation theory, which
describes the conversion of a photon into an axion and back in a constant
magnetic field $B$ over a length $L$. It
is easy to show \citep{ref05,Raffelt-Stodolsky1988,ref07,Hochmuth2007} that, for
a negligible photon absorption coefficient
($\Gamma _{\gamma} \to 0$) and axion decay rate ($\Gamma _{a} \to 0$), the
conversion probability is
\begin{equation}
P_{a \rightarrow \gamma} = \left( \Delta_{a \gamma}L \right)^2 \sin ^2 \left( \frac{ \Delta_{osc}L}{2} \right) \Big/ \left( \frac{ \Delta_{osc}L}{2}
\right)^2 \label{eq01}\, ,
\end{equation}
where the oscillation wavenumber $\Delta_{osc}$ is given by
\begin{equation}
\Delta_{osc}^2 = \left( \Delta_{pl} + \Delta_{Q,\perp} - \Delta_{a} \right)^2 + 4 \Delta_{a \gamma} ^2
\label{eq02}
\end{equation}
while the mixing parameter $\Delta _{a \gamma}$, the axion-mass parameter
$\Delta_{a}$, the refraction parameter $\Delta_{pl}$ and the QED dispersion
parameter $\Delta_{Q,\perp}$ may be represented by the following expressions:
\begin{equation}
\Delta _{a \gamma} = \frac{g_{a \gamma} B}{2} = 540 \left( \frac{g_{a \gamma}}{10^{-10} GeV^{-1}} \right) \left( \frac{B}{1 G} \right) ~~ pc^{-1}\, ,
\label{eq03}
\end{equation}
\begin{equation}
\Delta _{a} = \frac{m_a^2}{2 E_a} = 7.8 \cdot 10^{-11} \left( \frac{m_a}{10^{-7} eV} \right)^2 \left( \frac{10^{19} eV}{E_a} \right) ~~ pc^{-1}\, ,
\label{eq04}
\end{equation}
\begin{equation}
\Delta _{pl} = \frac{\omega ^2 _{pl}}{2 E_a} = 1.1 \cdot 10^{-6} \left( \frac{n_e}{10^{11} cm^{-3}} \right) \left( \frac{10^{19} eV}{E_a} \right) ~~ pc^{-1},
\label{eq05}
\end{equation}
\begin{equation}
\Delta _{Q,\perp} = \frac{m_{\gamma, \perp}^2}{2 E_a} .
\label{eq06}
\end{equation}
Here $g_{a \gamma}$ is the constant of axion coupling to photons; $B$ is the
transverse magnetic field; $m_a$ and $E_a$ are the axion mass and energy;
$\omega ^2 _{pl} = 4 \pi \alpha n_e / m_e$ is the effective photon mass in terms
of the plasma frequency if the process does not take place in vacuum, $n_e$ is
the electron density, $\alpha$ is the fine-structure constant, $m_e$ is the
electron mass; $m_{\gamma, \perp}^2$ is the effective mass squared of the
transverse photon, which arises due to its interaction with the external magnetic
field.
The conversion probability (\ref{eq01}) is energy-independent when
$2 \Delta _{a \gamma} \approx \Delta_{osc}$, i.e.
\begin{equation}
P_{a \rightarrow \gamma} \cong \sin^2 \left( \Delta _{a \gamma} L \right)\, ,
\label{eq07}
\end{equation}
or, whenever the oscillatory term in (\ref{eq01}) is small
($\Delta_{osc} L / 2 \to 0$), implying the limiting coherent behavior
\begin{equation}
P_{a \rightarrow \gamma} \cong \left( \frac{g_{a \gamma} B L}{2} \right)^2\, .
\label{eq08}
\end{equation}
It is worth noting that the oscillation length corresponding to (\ref{eq07})
reads
\begin{equation}
L_{osc} = \frac{\pi}{\Delta_{a \gamma}} = \frac{2 \pi}{g_{a \gamma} B} \cong 5.8 \cdot 10^{-3}
\left( \frac{10^{-10} GeV^{-1}}{g_{a \gamma}} \right)
\left( \frac{1G}{B} \right) ~pc
\label{eq13}
\end{equation}
\noindent assuming a purely transverse field. For an appropriate
size $L$ of the region, a complete transition between photons and axions is
possible.
From now on we are interested in the energy-independent case
(\ref{eq07}) or (\ref{eq08}), which plays the key role in determining the parameters
of the hypothesized axion mechanism of Sun luminosity variations (the axion coupling
constant to photons $g_{a \gamma}$, the transverse magnetic field $B$ over the length
$L$, and the axion mass $m_a$).
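As a quick numerical illustration of these scalings, the following minimal Python sketch evaluates (\ref{eq03}), (\ref{eq08}) and (\ref{eq13}) exactly as written above, using the numerical coefficients quoted in the text; the function names and the sample values of $g_{a\gamma}$, $B$ and $L$ are purely illustrative.
\begin{verbatim}
# Minimal sketch: evaluate the mixing parameter, the oscillation length and
# the coherent-limit conversion probability, using the coefficients quoted
# above (units: pc and pc^-1). Sample parameter values are illustrative only.

def delta_a_gamma(g10, B_gauss):
    """Mixing parameter in pc^-1; g10 = g_{a gamma}/(1e-10 GeV^-1)."""
    return 540.0 * g10 * B_gauss

def oscillation_length_pc(g10, B_gauss):
    """Oscillation length L_osc = pi / Delta_{a gamma} in pc."""
    return 5.8e-3 / (g10 * B_gauss)

def conversion_probability_coherent(g10, B_gauss, L_pc):
    """Coherent-limit probability (g B L / 2)^2, valid while it is << 1."""
    return (delta_a_gamma(g10, B_gauss) * L_pc) ** 2

if __name__ == "__main__":
    print(delta_a_gamma(1.0, 1.0))                          # ~540 pc^-1
    print(oscillation_length_pc(1.0, 1.0))                  # ~5.8e-3 pc
    print(conversion_probability_coherent(1.0, 1.0, 1e-4))  # ~3e-3
\end{verbatim}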
\section{Axion mechanism of Sun luminosity variations}
Our hypothesis is that the solar axions which are born in the solar core
\citep{ref01,ref02} through the known Primakoff effect \citep{ref11}, may be
converted back into $\gamma$-quanta in the magnetic field of the solar
tachocline (the base of the solar convective zone). In this case the magnetic field
variations in the tachocline cause variations of the intensity of the converted
$\gamma$-quanta, which in turn cause the variations of the Sun
luminosity known as the active and quiet Sun states. Let us consider this
phenomenon in more detail below.
As we noted above, the expression (\ref{eq01}) for the probability of the
axion-photon oscillations in a transverse magnetic field was obtained for
a medium with quasi-zero refraction, i.e. for a medium with a negligible
photon absorption coefficient ($\Gamma_{\gamma} \to 0$). It means that in order
for the axion-photon oscillations to take place without any significant losses,
a medium with a very low or quasi-zero density is required, which would
suppress the processes of photon absorption almost entirely.
Surprisingly enough, it turns out that such ``transparent'' media can exist,
not only in plasmas in general, but right in the convective zone
of the Sun. Here we mean the so-called magnetic flux tubes, whose
properties are examined below.
\subsection{Ideal photon channeling conditions inside the magnetic flux tubes}
\label{subsec-channeling}
The idea of energy flow channeling along a fanning magnetic field was
first suggested by
\cite{ref12} as an explanation for the
darkness of the umbra of sunspots. It was incorporated into a simple sunspot model by
\cite{ref13}.
\cite{ref14} extended this suggestion to smaller
flux tubes to explain the dark pores and bright faculae as well.
Summarizing the research of the convective zone magnetic fields in the form of
the isolated flux tubes,
\cite{ref15} suggested a simple mathematical model for the behavior of thin
magnetic flux tubes, dealing with the nature of the solar cycle, sunspot
structure, the origin of spicules and the source of mechanical heating in the
solar atmosphere. In this model, the so-called thin tube approximation is used
(see \cite{ref15} and references therein), i.e. the field is conceived to exist
in the form of slender bundles of field lines (flux tubes) embedded in a
field-free fluid (Fig.~\ref{fig01}). Mechanical equilibrium between the tube
and its surroundings is ensured by a reduction of the gas pressure inside the
tube, which compensates the force exerted by the magnetic field. In our
opinion, this is exactly the kind of mechanism
\cite{Parker1955} was thinking about when he wrote about the problem of flux
emergence: ``Once the field has been amplified by the dynamo, it needs to be
released into the convection zone by some mechanism, where it can be
transported to the surface by magnetic buoyancy''~\citep{ref17}.
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{TachoclineFluxTubes-3.pdf}
\end{center}
\caption{(a) Vertical cut through an active region illustrating the connection
between a sunspot at the surface and its origins in the toroidal field layer at
the base of the convection zone. Horizontal fields are stored at the base of the
convection zone (the overshoot tachocline zone) during the cycle. Active
regions form from sections brought up by buoyancy (one shown in the process of
rising). After eruption through the solar surface a nearly potential field is
set up in the atmosphere (broken lines), connecting to the base of the
convective zone via an almost vertical flux tube. A hypothetical small-scale
structure of a sunspot is shown in the inset (adopted from
\cite{ref18}
and
\cite{ref15}).
(b) Detection of emerging sunspot regions in the solar interior~\citep{ref18}.
Acoustic ray paths with lower turning points between 42 and 75 Mm
(1 Mm = 1000 km) crossing a region of emerging flux. For simplicity, only four
out of a total of 31 ray paths used in this study (the time-distance
helioseismology experiment) are shown here. Adopted from~\cite{ref19}.
(c) Emergence and anchoring of stable flux tubes in the overshoot tachocline
zone, and their time evolution in the convective zone. Adopted from \cite{ref20}.
(d) Vector magnetogram of the white light image of a sunspot (taken with SOT on
board the Hinode satellite -- see inset) showing in red the direction of
the magnetic field and its strength (length of the bar). The movie shows the
evolution of the photospheric fields that has led to an X-class flare in the
lower part of the active region. Adopted from~\cite{ref21}.}
\label{fig01}
\end{figure*}
In order to understand magnetic buoyancy, let us consider an isolated
horizontal flux tube in pressure equilibrium with its non-magnetic surroundings
so that
in cgs units
\begin{equation}
p_{ext} = p_{int} + \frac{\vert \vec{B} \vert^2}{8 \pi} ,
\label{eq21}
\end{equation}
\noindent where $p_{int}$ and $p_{ext}$ are the internal and external gas
pressures respectively, and $B$ denotes the uniform field strength in the flux
tube. If the internal and external temperatures are equal, $T_{int} =
T_{ext}$ (thermal equilibrium), then since $p_{ext} > p_{int}$ the gas in the
tube is less dense than its surroundings ($\rho _{ext} > \rho _{int}$), implying
that the tube will rise under the influence of gravity.
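As a rough numerical illustration of this pressure balance (a sketch under the stated assumptions, with illustrative numbers only), the Python snippet below evaluates the magnetic pressure $B^2/8\pi$ and the resulting fractional density deficit $(\rho_{ext}-\rho_{int})/\rho_{ext} = B^2/(8\pi p_{ext})$ for equal internal and external temperatures.
\begin{verbatim}
import math

# Sketch of the pressure balance p_ext = p_int + B^2/(8 pi) in cgs units.
# For T_int = T_ext the fractional density deficit that drives the buoyancy
# equals the ratio of magnetic to external gas pressure.

def magnetic_pressure(B_gauss):
    """Magnetic pressure B^2/(8 pi) in erg/cm^3 for B in gauss."""
    return B_gauss ** 2 / (8.0 * math.pi)

def density_deficit(B_gauss, p_ext):
    """(rho_ext - rho_int)/rho_ext, assuming T_int = T_ext."""
    return magnetic_pressure(B_gauss) / p_ext

if __name__ == "__main__":
    B = 1.0e5        # gauss, illustrative tube field strength
    p_ext = 6.5e13   # erg/cm^3, tachocline gas pressure quoted later in the text
    print(magnetic_pressure(B))       # ~4.0e8 erg/cm^3
    print(density_deficit(B, p_ext))  # ~6e-6
\end{verbatim}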
In spite of the obvious, though ultimately surmountable, difficulties of
applying this model to real problems, it was shown (see~\cite{ref15} and references
therein) that strong buoyancy forces act on magnetic flux tubes of the required
field strength ($10^4 - 10^5 ~G$~\citep{ref23}). Under their influence tubes
either float to the surface as a whole (e.g. Fig.~1 in \citep{ref24}) or they
form loops whose tops break through the surface (e.g. Fig.~1
in~\citep{ref14}) and whose lower parts descend to the bottom of the convective zone,
i.e. to the overshoot tachocline zone. The convective zone, being unstable,
enhances this process~\citep{ref25,ref26}. Small tubes take longer to erupt
through the surface because they feel stronger drag forces. It is interesting
to note here that the phenomenon of the drag force which raises the magnetic
flux tubes to the convective surface with speeds of about 0.3-0.6~km/s was
discovered in direct measurements using the method of time-distance
helioseismology~\citep{ref19}. Detailed calculations of the
process~\citep{ref27} show that even a tube with the size of a very small spot,
if located within the convective zone, will erupt in less than two years. Yet,
according to~\cite{ref27}, horizontal fields surviving for about 11~yr are needed
in the overshoot tachocline zone in order to produce an activity cycle.
A simplified scenario of magnetic flux tube (MFT) birth and space-time
evolution (Fig.~\ref{fig01}a) may be presented as follows. An MFT is born in the
overshoot tachocline zone (Fig.~\ref{fig01}c) and rises up to the convective
zone surface (Fig.~\ref{fig01}b) without separation from the tachocline (the
anchoring effect), where it forms a sunspot (Fig.~\ref{fig01}d) or other
kinds of active solar regions when intersecting the photosphere. Finer details
of MFT physics are expounded in the overviews by
\cite{ref17} and
\cite{ref24}, where certain fundamental questions, which need to be addressed
to understand the basic nature of magnetic activity, are discussed in detail:
How is the magnetic field generated, maintained and dispersed? What are its
properties such as structure, strength, geometry? What are the dynamical
processes associated with magnetic fields? \textbf{What role do magnetic fields
play in energy transport?}
Dwelling on the last extremely important question associated with the energy
transport, let us note that it is known that thin magnetic flux tubes can
support longitudinal (also called sausage), transverse (also called kink),
torsional (also called torsional Alfv\'{e}n), and fluting modes
(e.g.~\cite{ref28,ref29,ref30,ref31,ref32}); for the tube modes supported by
wide magnetic flux tubes, see
\cite{ref31}. Focusing on the longitudinal tube waves known to be an important
heating agent of solar magnetic regions, it is necessary to mention the recent
papers by
\cite{ref33}, which showed that longitudinal flux tube waves are
insufficient to heat the solar transition region and corona, in agreement
with previous studies~\citep{ref34}.
\textbf{In other words, the problem of generation and transport of energy by
magnetic flux tubes remains unsolved in spite of its key role in physics of
various types of solar active regions.}
It is clear that this unsolved problem of energy transport by magnetic flux
tubes is at the same time tied to another unsolved problem, that of the
energy transport and sunspot darkness (see 2.2 in \cite{Rempel2011}). Among the
known concepts playing a noticeable role in the understanding of the
connection between the energy transport and sunspot darkness, let us consider
the theory we regard as the most significant. It is based on the
Parker-Biermann cooling effect \citep{ref41-3,Biermann1941,ref43-3} and
originates from the works of~\cite{Biermann1941} and~\cite{Alfven1942}.
The main point of the Parker-Biermann cooling effect is that the classical
mechanism of magnetic tube buoyancy (e.g. Fig.~\ref{fig04-3}a,
\cite{ref41-3}), emerging as a result of the shear-flow instability
developing in the tachocline, should be supplemented with the following
results of the~\cite{Biermann1941} postulate and the theory developed by
\cite{ref41-3,ref43-3}: the electric conductivity in strongly ionized
plasma may be so high that the magnetic field becomes frozen into the plasma and
causes the split magnetic tube (Fig.~\ref{fig04-3}b,c) to cool inside.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=12cm]{Parker-empty-tubes-6.pdf}
\end{center}
\caption{The possible ways a toroidal magnetic flux tube can develop into a
sunspot.
(a) A rough representation of the form a tube can take after rising to the
surface by magnetic buoyancy (adopted from Fig.~2a in \cite{ref41-3});
(b) demonstrates the ``crowding'' below the photosphere surface because of
cooling (adopted from Fig.~2b in \cite{ref41-3});
(c) demonstrates the tube splitting as a consequence of the inner region
cooling under conditions when the tube is in thermal disequilibrium
with its surroundings and the convective heat transfer is suppressed
\mbox{\citep{Biermann1941}} above $\sim 0.71 R_{Sun}$. This effect, as well as
the mechanism of the appearance of neutral atoms inside the magnetic tubes,
is discussed further in the text (see \mbox{Fig.~\ref{fig-lampochka}a}).
Adopted from Fig.~2c in \cite{ref41-3}.
}
\label{fig04-3}
\end{figure}
Biermann understood that the magnetic field within the sunspots might itself be
a reason for their darkness. Around the sunspots, the heat is transported up to
the surface of the Sun by means of convection (see 2.2.1 in~\cite{Rempel2011}),
while~\cite{Biermann1941} noted that such transport is strongly inhibited by
the nearly vertical magnetic field within the sunspot, thereby providing a
direct explanation for the reduced temperature at the visible surface. Thus,
the sunspot is dark because it is cooler than its surroundings, and it is
cooler because the convection is inhibited underneath.
Still, the missing cause of a very high conductivity in strongly ionized
plasma, which would produce a strong magnetic field ``frozen'' into this
plasma, has been the major flaw of the so-called~\cite{Biermann1941} postulate.
Let us show a solution to the known problem of the Parker-Biermann cooling
effect, which is defined by the nature of the very large poloidal magnetic
fields in the tachocline (determined by the thermomagnetic Ettingshausen-Nernst
effect) and provides the physical basis for the photon channeling conditions
inside the magnetic flux tubes.
\subsubsection{The thermomagnetic Ettingshausen-Nernst effect and poloidal magnetic field in the tachocline}
For the dynamo theories of planetary, stellar and spiral galactic magnetism the
Coriolis force is of crucial importance. However, the assumed large solar
dynamo leads to very large magnetic fields ($\sim 5 \cdot 10^7$ gauss
\citep{Fowler1955,Couvidat2003}), which are not observed on the surface of the Sun. This
requires an explanation of how these fields are screened from reaching the
surface.
As is known~\citep{Schwarzschild1958}, the temperature dependence of the
thermonuclear reaction rate in the region of 10$^7$K goes in proportion to
T$^{4.5}$. This means there is a sharp boundary between a much hotter region
where most of the thermonuclear reactions occur and a cooler region where they
are largely absent~\citep{Winterberg2015}. This boundary between radiative and
convective zones is the tachocline. It is the thermomagnetic
Ettingshausen-Nernst
effect~\citep{Ettingshausen1886,Sondheimer1948,Spitzer1956,Kim1969} which,
through the large temperature gradient in the tachocline between the hotter and cooler
regions, leads to large currents shielding the large magnetic field of the dynamo
\citep{Winterberg2015}.
Subject to a quasi-steady state characterized by a balance of the magnetic
field of the dynamo, in the limit of weak collisions (the collision frequency
much less than the cyclotron frequency of positive ions), a thermomagnetic
current can be generated in a magnetized
plasma~\citep{Spitzer1962,Spitzer2006}. For a fully ionized gas plasma the
thermomagnetic Ettingshausen-Nernst effect leads to a current density given by
(see Eqs.~(5-49) in~\citep{Spitzer1962,Spitzer2006}):
\begin{equation}
\vec{j} _{\perp} = \frac{3 k n_e c}{2 B^2} \vec{B} \times \nabla T
\label{eq06-01}
\end{equation}
\noindent where $n_e$ is the electron number density, $B$ is the magnetic
field, and $T$ is the absolute temperature (K). With $n_e = \left[ Z / (Z+1)
\right] n$, where $n = n_e + n_i$, and $n_i = n_e / Z$ is the ion number
density for a $Z$-times ionized plasma, the following is obtained:
\begin{equation}
\vec{j} _{\perp} = \frac{3 k n c}{2 B^2} \frac{Z}{Z+1} \vec{B} \times \nabla
T\, . \label{eq06-02}
\end{equation}
It exerts a force on the plasma, with the force density $\vec{F}$ given by
\begin{equation}
\vec{F} = \frac{1}{c} \vec{j} _{\perp} \times \vec{B} =
\frac{3 n k}{2 B^2} \frac{Z}{Z+1} \left( \vec{B} \times \nabla T \right)
\times \vec{B}
\label{eq06-03}
\end{equation}
or with $\nabla T$ perpendicular to $\vec{B}$
\begin{equation}
\vec{F} = \frac{3 n k}{2} \frac{Z}{Z+1} \nabla T
\label{eq06-04}
\end{equation}
leading to the magnetic equilibrium condition (see Eqs.~(4-1)
in~\citep{Spitzer1962})
\begin{equation}
\vec{F} = \frac{1}{c} \vec{j} _{\perp} \times \vec{B} = \nabla p
\label{eq06-05}
\end{equation}
with $p = (\rho / m) kT = nkT$. And by equating~(\ref{eq06-04})
and~(\ref{eq06-05}),
\begin{equation}
\frac{3 n k}{2} \frac{Z}{Z+1} \nabla T = nk \nabla T + kT \nabla n
\label{eq06-06}
\end{equation}
\noindent or
\begin{equation}
a \frac{\nabla T}{T} + \frac{\nabla n}{n} = 0,
~~~ where ~~ a = \frac{2 - Z}{2(Z+1)} ,
\label{eq06-06a}
\end{equation}
\noindent we obtain the condition:
\begin{equation}
T ^a n = const .
\label{eq06-07}
\end{equation}
For a singly-ionized plasma with $Z=1$, one has
\begin{equation}
T ^{1/4} n = const .
\label{eq06-08}
\end{equation}
For a doubly-ionized plasma ($Z=2$) one has $n=const$. Finally, in the limit $Z
\rightarrow A$, one has $T^{-1/2}n = const$. Therefore, $n$ does not strongly
depend on $T$, unlike in a plasma of constant pressure, in which $Tn=const$. This
shows that the thermomagnetic currents may change the pressure distribution in
magnetized plasma considerably.
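A minimal numerical sketch of this condition (illustrative only; the function names are ours) makes explicit how weakly $n$ depends on $T$ for different ionization states $Z$.
\begin{verbatim}
# Sketch of the invariant T^a * n = const with a = (2 - Z)/(2 (Z + 1)).

def exponent_a(Z):
    """Exponent a for a Z-times ionized plasma."""
    return (2.0 - Z) / (2.0 * (Z + 1.0))

def density_ratio(T1, T2, Z):
    """n(T2)/n(T1) implied by T^a n = const; an isobaric plasma
    (T n = const) would instead give T1/T2."""
    return (T1 / T2) ** exponent_a(Z)

if __name__ == "__main__":
    print(exponent_a(1))    # 0.25: T^(1/4) n = const
    print(exponent_a(2))    # 0.0 : n = const
    print(exponent_a(1e6))  # ~ -0.5 in the limit of very large Z
    # A tenfold temperature drop changes n by only ~1.8x for Z = 1,
    # versus a factor of 10 at constant pressure.
    print(density_ratio(1e7, 1e6, 1))
\end{verbatim}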
Taking a Cartesian coordinate system with $z$ directed along $\nabla T$, the
magnetic field in the $x$-direction and the Ettingshausen-Nernst current in the
$y$-direction, and supposing a fully ionized hydrogen plasma with $Z=1$ in the
tachocline, one has
\begin{equation}
{j} _{\perp} = {j} _y = - \frac{3 n k c}{4 B} \frac{dT}{dz}.
\label{eq06-09}
\end{equation}
From Maxwell's equation $4 \pi \vec{j}_{\perp}/ c = curl \vec{B}$, one has
\begin{equation}
{j} _y = \frac{c}{4 \pi} \frac{dB}{dz},
\label{eq06-10}
\end{equation}
and thus by equating~(\ref{eq06-09}) and~(\ref{eq06-10}) we obtain:
\begin{equation}
2B \frac{dB}{dz} = -6 \pi k n \frac{dT}{dz}.
\label{eq06-11}
\end{equation}
From~(\ref{eq06-08}) one has
\begin{equation}
n = \frac{n _{OT} T_{OT}^{1/4}}{T^{1/4}},
\label{eq06-12}
\end{equation}
\noindent where the values $n = n_{OT}$ and $T = T_{OT}$ correspond to the
overshoot tachocline. Inserting~(\ref{eq06-12}) into~(\ref{eq06-11}), one finds
\begin{equation}
dB^2 = -\frac{6 \pi k n _{OT} T_{OT}^{1/4}}{T^{1/4}} dT,
\label{eq06-13}
\end{equation}
\noindent and hence, as a result of integration over the limits $[B_{OT},0]$ on
the left-hand side and $[0,T_{OT}]$ on the right-hand side,
\begin{equation}
\frac{B_{OT}^2}{8 \pi} = n _{OT} kT_{OT}
\label{eq06-14}
\end{equation}
which shows that the magnetic field of the thermomagnetic current in the
overshoot tachocline neutralizes the magnetic field of the dynamo reaching the
overshoot tachocline (see Fig.~\ref{fig-R-MagField}).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=12cm]{MagField-Radius-TurckChieze-1.pdf}
\end{center}
\caption{The reconstructed solar magnetic field (in blue) simulation
from~\cite{Couvidat2003}: 10$^3$-10$^4$~Tesla (left), 30-50~Tesla (middle) and
2-3~Tesla (right), with a temperature of $\sim$9~MK, $\sim$2~MK
and~$\sim$200~kK, respectively. The thin lines show the estimated range of
values for each magnetic field component. Internal rotation was not included in
the calculation. An additional axion production at those places can modify both
intensity and shape of the solar axion spectrum (Courtesy Sylvaine
Turck-Chi\`{e}ze (see Fig.~2 in~\cite{Zioutas2007})). The reconstructed solar
magnetic field (in red) simulation from~(\ref{eq06-16}): $4 \cdot 10^3$~T in
tachocline ($\sim0.7 R_{Sun}$).}
\label{fig-R-MagField}
\end{figure}
Hence, it is not hard to understand what forces compress the field into intense
filaments, in opposition to the enormous magnetic pressure
\begin{equation}
\frac{B_{OT}^2}{8 \pi} = p_{ext} \approx 6.5 \cdot 10^{13} \frac{erg}{cm^3} ~~
at ~~ 0.7 R_{Sun},
\label{eq06-15}
\end{equation}
\noindent where the gas pressure $p_{ext}$ at the tachocline of the Sun ($\rho
\approx 0.2 ~g\cdot cm^{-3}$ and $T \approx 2.3 \cdot 10^6 K$ \citep{ref45-3}
at~$0.7 R_{Sun}$) gives rise to a poloidal magnetic field
\begin{equation}
B_{OT} \simeq 4100 T.
\label{eq06-16}
\end{equation}
According to (\ref{eq06-16}), a magnetic flux tube anchored in the tachocline (see
Fig.~\ref{fig-twisted-tube}) has a significant toroidal magnetic field
($\sim$4100~T) within a layer near the base of the convection zone, whose mean
position and thickness are defined by $0.7 R_{Sun}$ and $d \sim 0.05 R_{Sun}$,
respectively. Each of these anchored
magnetic flux tubes forms a pair of sunspots on the surface of the Sun.
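For the reader's convenience, the estimate (\ref{eq06-16}) can be reproduced with the following minimal Python sketch (illustrative only; the tachocline gas pressure used is the value quoted in (\ref{eq06-15})).
\begin{verbatim}
import math

# Sketch of Eqs. (06-14)-(06-16): the field whose magnetic pressure balances
# the gas pressure at the overshoot tachocline, B_OT = sqrt(8 pi p_ext).

def equilibrium_field_gauss(p_ext_cgs):
    """B_OT in gauss for a gas pressure p_ext in erg/cm^3."""
    return math.sqrt(8.0 * math.pi * p_ext_cgs)

if __name__ == "__main__":
    p_ext = 6.5e13                       # erg/cm^3 at ~0.7 R_Sun
    B_gauss = equilibrium_field_gauss(p_ext)
    print(B_gauss)          # ~4.0e7 G
    print(B_gauss * 1e-4)   # ~4.0e3 T, i.e. the ~4100 T of Eq. (06-16)
\end{verbatim}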
Let us now show the theoretical possibility that the sunspot activity correlates
with the variations of the $\gamma$-quanta of axion origin induced by the
magnetic field variations in the overshoot tachocline.
\subsubsection{The Parker-Biermann cooling effect, Rosseland mean opacity and\\ axion-photon oscillations in twisted magnetic tubes}
\label{parker-biermann}
Several local models are known to have been used with great success to
investigate the buoyant transport of twisted magnetic tubes generated through
shear amplification near the tachocline at the base of the convection zone
(e.g.~\cite{Nelson2014}), as well as the structure and evolution of
photospheric active regions (e.g.~\cite{Rempel2011a}).
Because these models assume magnetic flux tubes anchored in the
poloidal field of the tachocline, it is not too hard to show that the magnetic
field $B_{OT}$ reaching $\sim 4100 ~T$ (see~(\ref{eq06-16})) may at the same
time be the reason for the Parker-Biermann cooling effect in the twisted magnetic
tubes (see~Fig.~\ref{fig-twisted-tube}b). The theoretical consequences of such
reasoning about the Parker-Biermann cooling effect are considered below.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=15cm]{MagTube-twisted-05.pdf}
\end{center}
\caption{(a) An isolated magnetic flux tube anchored in the tachocline (adopted
from~\cite{Parker1979}) and (b) a twisted magnetic flux tube
(e.g.~\citep{Stein2012}, Fig.~2 in~\citep{Gold1960}, Fig.~1 and Fig.~2
in~\citep{Sturrock2001}) bursting through the solar photosphere to form a
bipolar region. \textbf{Inset in panel (b)}: topological effect of the magnetic
reconnection in the magnetic tube (see~\cite{Priest2000}), where the
$\Omega$-loop reconnects across its base, pinching off the $\Omega$-loop to
form a free $O$-loop (see Fig.~4 in~\cite{Parker1994}). The buoyancy of the
$O$-loop is limited by the magnetic tube interior with Parker-Biermann
cooling.}
\label{fig-twisted-tube}
\end{figure}
First of all, we suggest that the classic mechanism of magnetic tube buoyancy
(Fig.~\ref{fig-twisted-tube}a), appearing as a result of the shear instability
development in the tachocline, should be supplemented by the rise of the
twisted magnetic tubes in a stratified medium (Fig.~\ref{fig-twisted-tube}b;
see Fig.~1 and Fig.~2 in \citep{Sturrock2001}), where the magnetic field is
produced by dynamo action throughout the convection zone, primarily by
stretching and twisting in the turbulent downflows (see~\citep{Stein2012}).
Second, the twisting of the magnetic tube may not only promote its splitting,
but may also form a cool region under the condition
\begin{equation}
p_{ext} = \frac{B^2}{8\pi}
\label{eq06v2-01}
\end{equation}
\noindent
when the tube (inset in~Fig.~\ref{fig-twisted-tube}b) is in the thermal
disequilibrium with its surroundings and the convective heat transfer is
suppressed \citep{Biermann1941}.
It is interesting to explore how the cool region stretching from the
tachocline to the photosphere, where the magnetic tube is in thermal
non-equilibrium~(\ref{eq06v2-01}) with its surroundings, relates to the
appearance of neutral atoms (e.g. hydrogen) in the upper convection zone
(see Fig.~\ref{fig-lampochka}a in contrast to Fig.~2c in \cite{Parker1955}). In
other words, how does this very cool region prevent the neutral atoms from
penetrating from the upper convection zone down to the base of the convection
zone, i.e. the tachocline?
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=12cm]{Sun-mag_tube-Nernst-10.pdf}
\end{center}
\caption{(a) Topological effects of the magnetic reconnection inside the
magnetic tubes with the ``magnetic steps''. The left panel shows the
temperature and pressure change along the radius of the Sun from the tachocline
to the photosphere \citep{ref45-3}, $L_{MS}$ is the height of the magnetic
shear steps. At $R \sim 0.72~R_{Sun}$ the vertical magnetic field reaches $B_z
\sim 3600$~T, and the magnetic pressure $p_{ext} = B^2 / 8\pi
\simeq 5.21 \cdot 10^{13}~erg/cm^3$ \citep{ref45-3}. The very cool regions
along the entire convective zone caused by the Parker-Biermann cooling effect
have the magnetic pressure (\mbox{\ref{eq06v2-01}}) in the twisted magnetic tubes.
\newline (b) All the axion flux, born via the Primakoff effect (i.e. the
interaction of real thermal photons with the Coulomb field of the solar plasma), comes
from the region $\leq 0.1 R_{Sun}$~\citep{ref36}. Using the angle
$\alpha = 2 \arctan \left( 0.1 R_{Sun} / 0.7 R_{Sun} \right)$ marking the
angular size of this region relative to the tachocline, it is possible to estimate
the flux of the axions distributed over the surface of the Sun. The flux of the
X-rays (of axion origin) is defined by the angle
$\gamma = 2 \arctan \left( 0.5 d_{spot} / 0.3 R_{Sun} \right)$, where
$d_{spot}$ is the diameter of a sunspot on the surface of the Sun (e.g.
$d_{spot} \sim 11000~km$~\citep{Dikpati2008}).}
\label{fig-lampochka}
\end{figure*}
It is essential to find a physical solution to the problem of the solar convective zone which would fit the opacity experiments. The full calculation of solar opacities, which depend on the chemical composition, pressure and temperature of the gas, as well as on the wavelength of the incident light, is a complex endeavour. The problem can be simplified by using a mean opacity averaged over all wavelengths, so that only the dependence on the gas physical properties remains (e.g. \cite{Rogers1994,Ferguson2005,Bailey2009}). The most commonly used is the Rosseland mean opacity $k_R$, defined as:
\begin{equation}
\frac{1}{k_R} = \left. \int \limits_{0}^{\infty} d \nu \frac{1}{k_\nu} \frac{dB_\nu}{dT} \middle/
\int \limits_{0}^{\infty} d \nu \frac{dB_\nu}{dT} \right.
\label{eq06v2-02}
\end{equation}
\noindent
where $dB_\nu / dT$ is the derivative of the Planck function with respect to
temperature, and $k_{\nu}$ is the monochromatic opacity at frequency $\nu$ of the
incident light, i.e. the total extinction coefficient, including stimulated
emission plus scattering. A large value of the opacity indicates strong
absorption from a beam of photons, whereas a small value indicates that the beam
loses very little energy as it passes through the medium.
Note that the Rosseland opacity is a harmonic mean, in which the greatest
contribution comes from the lowest values of opacity, weighted by a function
that depends on the rate at which the blackbody spectrum varies with
temperature (see Eq.~(\ref{eq06v2-02}) and Fig.~\ref{fig-opacity}); the
photons are most efficiently transported through the ``windows'' where $k_\nu$
is the lowest (see Fig.~2 in \cite{Bailey2009}).
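The dominance of such low-opacity windows in the harmonic mean can be illustrated with a toy numerical sketch (our own illustration, with a hypothetical two-valued opacity; it is not based on real opacity tables).
\begin{verbatim}
import numpy as np

# Toy illustration of the Rosseland (harmonic) mean: a narrow low-opacity
# "window" pulls k_R far below the arithmetic mean. The weight dB_nu/dT is
# proportional to x^4 e^x / (e^x - 1)^2 with x = h nu / (k_B T); constant
# prefactors cancel in the ratio. The opacity values here are hypothetical.

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def rosseland_mean(x, k_nu):
    w = x**4 * np.exp(x) / np.expm1(x)**2
    return trapezoid(w, x) / trapezoid(w / k_nu, x)

if __name__ == "__main__":
    x = np.linspace(0.05, 20.0, 4000)
    k_nu = np.full_like(x, 1.0)             # opaque background, 1 cm^2/g
    k_nu[(x > 3.0) & (x < 4.0)] = 1e-3      # narrow transparent window
    print(rosseland_mean(x, np.full_like(x, 1.0)))  # 1.0, no window
    print(rosseland_mean(x, k_nu))                  # << 1, window dominates
\end{verbatim}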
\begin{figure}[tbp!]
\begin{center}
\includegraphics[width=15cm]{rosseland_opacity-01.pdf}
\end{center}
\caption{Rosseland mean opacity $k_R$, in units of $cm^2 g^{-1}$, shown versus
temperature (X-axis) and density (multi-color curves, plotted once per decade),
computed for a hydrogen-helium mixture of solar metallicity with X=0.7 and
Z=0.02. The panel shows curves of $k_R$ versus temperature for several
fixed values of the density, labelled by the value of $\log {\rho}$ (in
$g/cm^3$). Curves that extend from $\log {T} = 3.5$ to 8 are from the Opacity
Project (opacities.osc.edu). Overlapping curves from $\log {T} = 2.7$ to 4.5
are from \cite{Ferguson2005}. The lowest-temperature region (black dotted
curve) shows an estimate of ice-grain and metal-grain opacity from
\cite{Stamatellos2007}. Adapted from \cite{Cranmer2015}.}
\label{fig-opacity}
\end{figure}
Taking the Rosseland mean opacities shown in Fig.~\ref{fig-opacity}, one may
calculate, for example, four consecutive cool ranges within the convective
zone (Fig.~\ref{fig-lampochka}a), where the internal gas pressure $p_{int}$ is
defined by the following values:
\begin{equation}
p_{int} = n k_B T, ~where~
\begin{cases}
T \simeq 10^{3.48} ~K, \\
T \simeq 10^{3.29} ~K, \\
T \simeq 10^{3.20} ~K, \\
T \simeq 10^{3.11} ~K, \\
\end{cases}
\rho = 10^{-7} ~g/cm^3
\label{eq06v2-03}
\end{equation}
Since the inner gas pressure~(\ref{eq06v2-03}) grows towards the tachocline so
that
\begin{align}
p_{int} &(T = 10^{3.48} ~K) \vert _{\leqslant 0.85 R_{Sun}} >
p_{int} (T = 10^{3.29} ~K) \vert _{\leqslant 0.9971 R_{Sun}} > \nonumber \\
& > p_{int} (T = 10^{3.20} ~K) \vert _{\leqslant 0.99994 R_{Sun}} >
p_{int} (T = 10^{3.11} ~K) \vert _{\leqslant R_{Sun}} ,
\label{eq06v2-04}
\end{align}
\noindent
it becomes evident that the neutral atoms appearing in the upper convection
zone ($\geqslant 0.85 R_{Sun}$) cannot descend deep to the base of the
convection zone, i.e. the tachocline (see Fig.~\ref{fig-lampochka}a).
Therefore it is very important to examine the connection between the Rosseland
mean opacity and axion-photon oscillations in twisted magnetic tubes.
Let us consider the qualitative nature of the $\Omega$-loop formation and
growth process, based on the semiphenomenological model of the magnetic
$\Omega$-loops in the convective zone.
\vspace{0.3cm}
\noindent $\bullet$ A high concentration of azimuthal magnetic flux
($B_{OT} \sim 4100$~T, see Fig.~\ref{fig-lampochka}) in the overshoot
tachocline, built up through the development of the shear-flow instability.
An interpretation of such a link is related to the fact that helioseismology
places the principal rotational shear $\partial \omega / \partial r$ of the Sun in the
overshoot layer immediately below the bottom of the convective zone
\citep{Parker1994}. It is also generally believed that the azimuthal magnetic
field of the Sun is produced by the shearing $r \partial \omega / \partial r$
of the poloidal field $B_{OT}$, from which it is generally concluded that the
principal azimuthal magnetic flux resides in the shear layer
\citep{Parker1955,Parker1993}.
\vspace{0.3cm}
\noindent
$\bullet$ If some ``external'' factor of local shear perturbation appears
against the background of the azimuthal magnetic flux concentration, such an
additional local density of the magnetic flux may lead to a magnetic field
strength as high as, e.g., $B_z \sim 3600$~T (see Fig.~\ref{fig-lampochka}a
and \mbox{Fig.~\ref{fig-Bz}b}). Of course, this brings up a question about the
physics behind such an ``external'' factor and the local shear perturbation.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=15cm]{Bz-11.pdf}
\end{center}
\caption{
(a) Normalized external temperature, density and gas pressure as functions of
the solar depth $R/R_{Sun}$. The standard solar model with $He$ diffusion
\citep{ref45-3} was used for $R < 0.95 R_{Sun}$ (solid lines). The dotted
lines mark extrapolated values.
(b) Variation of the magnetic field strength $B_z$ along the emerging
$\Omega$-loop as a function of the solar depth $R / R_{Sun}$ throughout the
convection zone. The solid blue line marks the permitted values for
the same standard solar model with $He$ diffusion \citep{ref45-3} starting at
the theoretical estimate of the magnetic field
$B_{OT} \approx B_z(0) = 4100~T$. The dashed line is the continuation implied
by the existence of the very cool regions inside the magnetic tube.
The red point marks up-to-date observations showing a mean magnetic field
strength at the level of $\sim 0.25~T = 2500 ~G$ \citep{Pevtsov2011,Pevtsov2014}.}
\label{fig-Bz}
\end{figure}
In this regard let us consider the superintense magnetic $\Omega$-loop
formation in the overshoot tachocline through the local shear caused by the
high local concentration of the azimuthal magnetic flux. The buoyant force
acting on the $\Omega$-loop decreases slowly with concentration, so that the vertical
magnetic field of the $\Omega$-loop reaches $B_z \sim 3600$~T at about
$R / R_{Sun} \sim 0.72$ (see Fig.~\ref{fig-lampochka}a and Fig.~\ref{fig-Bz}b).
Because of the magnetic pressure
(see the analogue of \mbox{(\ref{eq06-15})} and Fig.~\ref{fig-lampochka}a)
$p_{ext} = B_{0.72 R_{Sun}}^2 / 8\pi = 5.21\cdot 10^{13}~erg/cm^3$
\citep{ref45-3}, this leads
to a significant cooling of the $\Omega$-loop tube (see
Fig.~\ref{fig-lampochka}a).
In other words, we assume the effect of the $\Omega$-loop cooling to be the
basic effect responsible for the magnetic flux concentration. It arises from
the well known suppression of convective heat transport by a strong magnetic
field~\citep{Biermann1941}. It means that although the principal azimuthal
magnetic flux resides in the shear layer, it predetermines the additional local
shear giving rise to a significant cooling inside the $\Omega$-loop.
Thus, an ultralow pressure is set inside the magnetic tube as a result of the
sharp limitation of the buoyancy of the magnetic steps inside the cool magnetic tube
(Fig.~\ref{fig-lampochka}a). This happens because the buoyancy of the magnetic
flows requires finite \textbf{superadiabaticity} of the convection zone
\citep{ref47-3,ref35-3}; otherwise, expanding according to the magnetic
\textbf{adiabatic} law (with the convection being suppressed by the magnetic
field), the magnetic clusters may become cooler than their surroundings, which
compensates the effect of the magnetic buoyancy of the superintense magnetic O-loop.
Eventually we suppose that the axion mechanism based on the X-ray
channeling along the ``cool'' region of the split magnetic tube
(Fig.~\ref{fig-lampochka}a) effectively supplies the necessary energy flux
to the photosphere while the convective heat transfer is heavily
suppressed.
In this context it is necessary to have a clear view of the energy transport by the X-rays of axion origin, which are the primary transfer mechanism here. The recent improvements in the calculation of the radiative properties of solar matter have helped to resolve several long-standing discrepancies between observations and the predictions of theoretical models (e.g. \cite{Rogers1994,Ferguson2005,Bailey2009}), and now it is possible to calculate the photon mean free path (Rosseland length) for the opacities of Fig.~\ref{fig-opacity}:
\begin{equation}
l_{photon} = \frac{1}{k_R \rho} \sim
\begin{cases}
2 \cdot 10^{10} ~cm ~~ & for ~~ k_R \simeq 5 \cdot 10^{-4} ~cm^2/g, \\
10^{10} ~cm ~~ & for ~~ k_R \simeq 10^{-3} ~cm^2/g, \\
1.5 \cdot 10^{8} ~cm ~~ & for ~~ k_R \simeq 6.7 \cdot 10^{-2} ~cm^2/g, \\
10^{7} ~cm ~~ & for ~~ k_R \simeq 1 ~cm^2/g,
\end{cases}
~~ \rho = 10^{-7} ~g/cm^3
\label{eq06v2-05}
\end{equation}
\noindent
where the Rosseland mean opacity values $k_R$ and density $\rho$ are chosen so
that the very low internal gas pressure $p_{int}$ (see Eq.~(\ref{eq06v2-04}))
along the entire magnetic tube barely affects the external gas pressure
$p_{ext}$ (see (\ref{eq06v2-05}) and Fig.~\ref{fig-opacity}).
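A minimal sketch of (\ref{eq06v2-05}) (ours, for illustration) makes the comparison with the size of the convective zone explicit.
\begin{verbatim}
# Sketch of l_photon = 1/(k_R * rho) for the density assumed inside the tube.

def rosseland_length_cm(k_R, rho=1e-7):
    """Photon mean free path in cm; k_R in cm^2/g, rho in g/cm^3."""
    return 1.0 / (k_R * rho)

if __name__ == "__main__":
    for k_R in (5e-4, 1e-3, 6.7e-2, 1.0):
        print(k_R, rosseland_length_cm(k_R))   # 2e10, 1e10, 1.5e8, 1e7 cm
    # For comparison, 0.3 R_Sun ~ 2.1e10 cm, so in the two most transparent
    # cases a photon crosses the tube interior essentially without scattering.
\end{verbatim}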
Let us now examine the appearance of the X-rays of axion origin induced by the magnetic field variations near the tachocline (Fig.~\ref{fig-lampochka}a) and their impact on the Rosseland length (see~(\ref{eq06v2-05})) inside the cool region of the magnetic tubes.
Let us recall that the magnetic field strength $B_{OT}$ in the overshoot
tachocline of $\sim 4100~T$ (see Fig.~\ref{fig-lampochka}a) and the
Parker-Biermann cooling effect in~(\ref{eq06v2-01}) lead to the corresponding
value of the magnetic field strength $B(z = 0.72 R_{Sun}) \sim 3600 ~T$
(see Fig.~\ref{fig-lampochka}a), which in its turn implies a virtually zero
internal gas pressure of the magnetic tube.
As shown above (see~\cite{Priest2000}), the topological effect of the
magnetic reconnection inside the $\Omega$-loop results in the formation of the
so-called O-loops (Fig.~\ref{fig-twisted-tube} and Fig.~\ref{fig-lampochka}a)
with their buoyancy limited from above by the strong cooling inside the
$\Omega$-loop (Fig.~\ref{fig-lampochka}a). It is possible to derive the value
of the horizontal magnetic field of the magnetic steps at the top of the O-loop:
$\vert B_{MS} \vert \approx \vert B(z = 0.72 R_{Sun}) \vert \sim 3600 ~T$.
So in the case of a large enough Rosseland length (see Eq.~(\ref{eq06v2-05})),
the X-rays of axion origin induced by the horizontal magnetic field in the O-loops
reach the photosphere freely, while in the photosphere itself, where the
Rosseland length is
\begin{equation}
l_{photon} \approx 100 ~km < l \approx 300 \div 400 ~km,
\label{eq06v2-06}
\end{equation}
these photons undergo multiple Compton scattering (see
Section~\ref{subsec-osc-parameters}), producing a typical directional pattern
(Fig.~\ref{fig-lampochka}a).
Aside from the X-rays of axion origin with a mean energy of 4.2~keV, there are
only $h \nu \sim 0.95 ~keV$ X-rays (originating from the tachocline, according
to a theoretical estimate by \cite{Bailey2009}) inside the magnetic tube. Such
X-rays would produce Compton-scattered photons with a mean energy of
$\leqslant 0.95~keV$, which contradicts the known measurements of the photons
with a mean energy of 3-4~keV (see Fig.~4 in \cite{Rieutord2014}). Our suggested
theoretical model thus removes these contradictions by involving the X-rays of
axion origin \textit{plus} the axions of thermal X-ray origin, both
produced in the magnetic field of the O-loops (see Fig.~\ref{fig-lampochka}a and
Fig.~\ref{app-b-fig01} in Appendix~\ref{appendix-luminosity}).
And finally, let us emphasize that we have just shown a theoretical possibility
for the time variation of the sunspot activity to correlate with the flux
of the X-rays of axion origin, the latter being controlled by the magnetic
field variations near the overshoot tachocline. As a result, it may be
concluded that the axion mechanism for solar luminosity variations
based on the lossless X-ray ``channeling'' along the
magnetic tubes allows one to explain the effect of the almost complete suppression of the
convective heat transfer, and thus to understand the known puzzling darkness of
the sunspots \citep{Rempel2011}.
\subsection{Estimation of the solar axion-photon oscillation parameters on the basis of the hadron axion-photon coupling in white dwarf cooling}
\label{subsec-osc-parameters}
It is known \citep{Cadamuro2012} that astrophysics provides a very interesting
clue concerning the evolution of white dwarf stars, which, owing to their small
mass, reduces to a relatively simple cooling process. It is related to the
fact that recently it has been possible to determine their luminosity function
with unprecedented precision \citep{Isern2008}. It seems that if the DFSZ
axion \citep{ref47,Dine1981} has a direct coupling to electrons and a decay
constant $f_a \sim 10^{9} ~GeV$, it provides an additional energy-loss channel
that permits obtaining a cooling rate that fits the white dwarf
luminosity function better than the standard one~\citep{Isern2008}. On the other hand,
the KSVZ axion \citep{ref46,ref46a}, i.e. the hadronic axion (with the mass in
the $meV$ range and $g_{a\gamma \gamma} \sim 10^{-12} ~GeV^{-1}$), would also
help in fitting the data, but in this case a stronger value of
$g_{a\gamma \gamma}$ is required to perturbatively produce an electron coupling
of the required strength (\cite{Cadamuro2012}, Fig.~1 in \cite{Srednicki1985},
Fig.~1 in \cite{Turner1990}, Eq.~82 in \cite{Kim2010}).
Our aim is to estimate the solar axion-photon oscillation parameters based on
the hadronic axion-photon coupling derived from white dwarf cooling (see
\mbox{Appendix~\ref{appendix-wd-cooling}}). The estimate
of the horizontal magnetic field in the O-loop is not related to the
photon-axion conversion in the Sun only, but also to the axions in the model of
white dwarf evolution. Therefore, along with the values of the magnetic field
strength
$B_{MS} \sim 3600 ~T$
and the height of the magnetic shear steps
$L_{MS} \sim 1.28 \cdot 10^4 ~km$
(Fig.~\ref{fig-lampochka}a,b) we use the following parameters of the hadronic
axion (from the White Dwarf area in Fig.~\ref{fig05}a \citep{Irastorza2013,
Carosi2013}):
\begin{figure}[tbp!]
\begin{center}
\begin{minipage}[h]{0.44\linewidth}
\includegraphics[width=7.6cm]{gagamma_ma-limits-2.pdf}
\end{minipage}
\hfill
\begin{minipage}[h]{0.53\linewidth}
\includegraphics[width=8.8cm]{Y-g_agamma-4.pdf}
\end{minipage}
\end{center}
\caption{\textbf{(a)} Summary of astrophysical, cosmological and laboratory
constraints on axions and axion-like particles. Comprehensive axion/ALP
parameter space, highlighting the two main front lines of direct detection
experiments: helioscopes (CAST~\citep{ref58,ref72,CAST2011,Arik2013}) and
haloscopes (ADMX~\citep{ref50} and RBF~\citep{ref51}). The astrophysical bounds
from horizontal branch and massive stars are labeled ``HB''~\citep{ref02} and
``Cepheids''~\citep{Carosi2013} respectively. The QCD motivated models
(KSVZ~\citep{ref46,ref46a} and DFSZ~\citep{ref47,Dine1981}) for axions lay in
the yellow diagonal band. The orange parts of the band correspond to
cosmologically interesting axion models: models in the ``classical axion
window'' possibly composing the totality of DM (labelled ``Axion CDM'') or a
fraction of it (``WIMP-axion CDM''~\citep{Baer2011}). For more generic ALPs,
practically all the allowed space up to the red dash line may contain valid ALP
CDM models~\citep{Arias2012}. The region of axion masses invoked in the WD
cooling anomaly is shown by the blue dash line~\citep{Irastorza2013}. The red
star marks the values of the axion mass $m_a \sim 3.2 \cdot 10^{-2} eV$ and the
axion-photon coupling constant $g_{a\gamma} \sim 4.4 \cdot 10^{-11} GeV^{-1}$
chosen in the present paper on the basis of the suggested relation between the
axion mechanisms of the Sun's and the white dwarf luminosity variations.
\newline
\textbf{(b)} $R$ parameter constraints on $Y$ and $g_{a \gamma}$ (adopted from
\cite{Ayala2014}). The dark purple area delimits the 68\%~C.L. for $Y$ and
$R_{th}$ (see Eq.~(1) in \cite{Ayala2014}). The resulting bound on the axion
($g_{10} = g_{a \gamma \gamma}/(10^{-10} ~GeV^{-1})$) is somewhere between a
rather conservative $0.5 < g_{10} \leqslant 0.8$ and most aggressive $0.35 <
g_{10} \leqslant 0.5$ \citep{Friedland2013}. The red line marks the values of
the axion-photon coupling constant $g_{a \gamma} \sim 4.4 \cdot 10^{-11}
~GeV^{-1}$ chosen in the present paper.
The blue shaded area represents the bounds from Cepheids
observation. The yellow star corresponds to $Y$=0.254 and the bounds from HB
lifetime (yellow dashed line).}
\label{fig05}
\end{figure}
\begin{equation}
g_{a \gamma} \sim 4.4 \cdot 10^{-11} ~ GeV^{-1}, ~~~ m_a \sim 3.2 \cdot 10^{-2} ~eV.
\label{eq3.30}
\end{equation}
The choice of these values is also related to the observed solar luminosity
variations in the X-ray band (see (\ref{eq3.35})). The theoretical
estimate and the
consequences of such a choice are considered below.
As shown above, the $\sim 4100~T$ magnetic field in the overshoot
tachocline and the Parker-Biermann cooling effect in~(\ref{eq06v2-01}) may
produce the O-loops with a horizontal magnetic field of
$\vert B_{MS} \vert \approx \vert B(z = 0.72 R_{Sun}) \vert \sim 3600 ~T$
stretching for about $L_{MS} \sim 1.28 \cdot 10^4 ~km$, embedded in the virtually
zero internal gas pressure of the magnetic tube (see Fig.~\ref{fig-lampochka}a).
It is not hard to use the expression (\ref{eq08})
for the conversion probability\footnote{Hereinafter we use rationalized natural
units to convert the magnetic field units from $Tesla$ to $eV^2$, and the
conversion reads $1\,T = 195\,eV^2$~\citep{Guendelman2009}.}
\begin{equation}
P_{a \rightarrow \gamma} = \frac{1}{4} \left( g_{a \gamma} B_{MS} L_{MS} \right)^2 \sim 1
\label{eq3.31}
\end{equation}
for estimating the axion coupling constant to photons (\ref{eq3.30}).
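The numerical estimate behind (\ref{eq3.31}) can be checked with the short Python sketch below (ours; it only applies the unit conversions $1\,T = 195\,eV^2$ and $\hbar c \approx 197.327\,eV\cdot nm$ to the parameters quoted above).
\begin{verbatim}
# Check of P = (g B L / 2)^2 in natural units, with 1 T = 195 eV^2 and
# 1 cm ~ 5.07e4 eV^-1 (from hbar*c = 197.327 eV nm).

EV2_PER_TESLA = 195.0
INV_EV_PER_CM = 1.0e7 / 197.327     # eV^-1 per cm

def conversion_probability(g_GeV_inv, B_tesla, L_km):
    g = g_GeV_inv * 1e-9            # GeV^-1 -> eV^-1
    B = B_tesla * EV2_PER_TESLA     # T -> eV^2
    L = L_km * 1e5 * INV_EV_PER_CM  # km -> cm -> eV^-1
    return 0.25 * (g * B * L) ** 2

if __name__ == "__main__":
    # g_{a gamma}, B_MS and L_MS as adopted in the text.
    print(conversion_probability(4.4e-11, 3600.0, 1.28e4))   # ~1
\end{verbatim}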
Thus, the hypothesis that the solar
axions born in the core of the Sun may be efficiently converted back into
$\gamma$-quanta in the magnetic field of the magnetic steps of the O-loop
(above the solar overshoot tachocline) is shown to be relevant. Here the variations of the
magnetic field in the solar tachocline are the direct cause of the converted
$\gamma$-quanta intensity variations. The latter in their turn may be the cause
of the overall solar luminosity variations known as the active and quiet Sun phases.
It is easy to show that the theoretical estimate for the part of the axion
luminosity $L_a$ in the total luminosity of the Sun $L_{Sun}$ with respect to
(\ref{eq3.30}) is~\citep{ref58}
\begin{equation}
\frac{L_a}{L_{Sun}} = 1.85 \cdot 10 ^{-3} \left(
\frac{g_{a \gamma}}{10^{-10} GeV^{-1}} \right)^2 \sim 3.6 \cdot 10^{-4} .
\label{eq3.32}
\end{equation}
As opposed to the classic mechanism of the Sun modulation, the
axion mechanism is determined by the magnetic tubes rising to the photosphere,
and not by the magnetic fields above the photosphere. In this case the solar
luminosity modulation is determined by the axion-photon oscillations in the
magnetic steps of the O-loop causing the formation and channeling of the
$\gamma$-quanta inside the almost empty magnetic $\Omega$-tubes (see
Fig.~\ref{fig-twisted-tube} and Fig.~\ref{fig-lampochka}a). When the magnetic
tubes cross the photosphere, they ``open'' (Fig.~\ref{fig-lampochka}a), and the
$\gamma$-quanta are ejected into the photosphere, where their comfortable journey
along the magnetic tubes (without absorption and scattering) ends. As the
calculations by \cite{ref36} show, the further fate of the $\gamma$-quanta
in the photosphere may be described by the Compton scattering, which actually
agrees with the observed solar spectral shape (Fig.~\ref{fig06}b,c).
\begin{figure*}
\begin{center}
\includegraphics[width=14cm]{Sun_total_spectra-13.pdf}
\end{center}
\caption{(a) Reconstructed solar photon spectrum below 10~keV from the active
Sun (red line) and quiet Sun (blue line) from accumulated observations
(spectral bin is 6.1~eV wide). Adopted from~\cite{ref59}.
\newline
(b) Reconstructed solar photon spectrum fit in the active phase of the Sun by
the quasi-invariant soft part of the solar photon spectrum (grey shaded area;
see \mbox{Eq.~(\ref{eq06-34})}) and three spectra (\ref{eq3.33}) degraded to
the Compton scattering for column densities above the initial conversion place
of 16 (adopted from~\cite{ref36}) and 2~$g / cm^2$ (present paper).
\newline
(c) Similar curves for the quiet phase of the Sun (grey shaded area
corresponds to \mbox{Eq.~(\ref{eq06-35})}).
\newline
(d) Cartoon showing the interplay between magnetic field expansion and the EUV
loop. A coalescent flow forming the sunspot drags the magnetic field in the
photosphere near the solar surface into the sunspot. In response, a hot spot of
enhanced upward directed Poynting flux, $S$, forms (red arrow). The expanding
field lines (blue) move upwards and to the side. When they traverse the hot
spot of Poynting flux, the plasma on that field line gets heated and brightens
up. As the field line expands further, it leaves the hot spot and gets darker
again. In consequence a bright coronal EUV loop forms (orange) and remains
rather stable as the successively heated field lines move through (adopted from
\cite{Chen2015}). X-ray emission is the $\gamma$-quanta of axion origin coming
from the magnetic tubes and not related to the magnetic reconnection as
conjectured by e.g. \cite{Shibata2011}.}
\label{fig06}
\end{figure*}
From the axion mechanism point of view it means that the solar spectra during
the active and quiet phases (i.e. during the maximum and minimum solar
activity) differ from each other by the smaller or larger part of the Compton
spectrum, the latter being produced by the $\gamma$-quanta of the axion origin
ejected from the magnetic tubes into the photosphere (see Fig.~4 in
\cite{Chen2015}).
A natural question arises at this point: ``What are the real parts of the
Compton spectrum of axion origin in the active and quiet phases of the Sun,
and do they agree with the experiment?'' Let us perform the
mentioned estimations based on the known experimental results from ROSAT/PSPC,
where the Sun's coronal X-ray spectra and the total luminosity during the
minimum and maximum of the solar coronal activity were obtained~\citep{ref59}.
Apparently, the solar photon spectrum below 10~keV of the active and quiet Sun
(Fig.~\ref{fig06}a) reconstructed from the accumulated ROSAT/PSPC observations
may be described by three Compton spectra for different column densities rather
well (Fig.~\ref{fig06}b,c). This gives grounds for the assumption that the hard
part of the solar spectrum is mainly determined by the axion-photon conversion
efficiency:
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(*)} \simeq
\left( \frac{d \Phi}{dE} \right)^{(*)}_{corona} +
\left( \frac{d \Phi _{\gamma}}{dE} \right)^{(*)}_{axions} ,
\label{eq06-33}
\end{align}
\noindent where $\frac{d \Phi}{dE}$ denotes the observed solar spectra during the
active (red line in Fig.~\ref{fig06}a,b) and quiet (blue line in
Fig.~\ref{fig06}a,c) phases, and $\left( \frac{d \Phi}{dE} \right)_{corona}$
represents the power-law-like theoretical solar spectra
\begin{equation}
\left( \frac{d \Phi}{dE} \right)_{corona} \sim E^{-(1+\alpha)} e^{-E/E_0} ,
\label{eq06-33a}
\end{equation}
\noindent
where a power-law decay with a ``semi-heavy tail'' takes place in practice
\citep{Lu1993} instead of the so-called power laws with heavy tails
\citep{Lu1991,Lu1993} (see e.g. Figs.~3 and~6 in \cite{Uchaikin2013}).
Consequently, the observed corona spectra
($0.25 ~keV < E \leqslant 2.5 ~keV$) (shaded area in Fig.~\ref{fig06}b)
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(active)}_{corona} \sim
5 \cdot 10^{-3} \cdot (E~[keV])^{-3} \cdot \exp{\left(-\frac{E}{1 keV} \right)}
~~for~the~active~Sun
\label{eq06-34}
\end{align}
\noindent and (shaded area in Fig.~\ref{fig06}c)
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(quiet)}_{corona} \sim
1 \cdot 10^{-4} \cdot (E~[keV])^{-3} \cdot \exp{\left(-\frac{E}{0.5 keV} \right)}
~~for~the~quiet~Sun ;
\label{eq06-35}
\end{align}
\noindent $\left( \frac{d \Phi _{\gamma}}{dE} \right)_{axions}$ is the
reconstructed solar photon spectrum fit ($0 ~keV < E < 10 ~keV$) constructed
from three spectra (\ref{eq3.33}) degraded by Compton scattering for
different column densities (see Fig.~\ref{fig06}b,c for the active and quiet
phases of the Sun respectively).
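To make the comparison concrete, the short sketch below (our illustration) evaluates the corona components (\ref{eq06-34}) and (\ref{eq06-35}) over the 0.5--5~keV range; both fall by several orders of magnitude between 0.5 and 3~keV, which is what motivates the approximation (\ref{eq06-35a}) introduced below.
\begin{verbatim}
import math

# Sketch evaluating the corona components: a power law with an exponential
# ("semi-heavy") tail, using the normalizations quoted in the text.

def corona_active(E_keV):
    return 5e-3 * E_keV ** -3 * math.exp(-E_keV / 1.0)

def corona_quiet(E_keV):
    return 1e-4 * E_keV ** -3 * math.exp(-E_keV / 0.5)

if __name__ == "__main__":
    for E in (0.5, 1.0, 2.0, 3.0, 5.0):
        print(E, corona_active(E), corona_quiet(E))
    # Both components fall by several orders of magnitude between 0.5 and
    # 3 keV, so above ~2-3 keV the corona terms are negligible.
\end{verbatim}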
As is known, this class of flare models (Eqs.~(\ref{eq06-34})
and~(\ref{eq06-35})) is based on the recent paradigm in statistical physics
known as self-organized criticality
\citep{Bak1987,Bak1988,Bak1989,Bak1996,Aschwanden2011}. The basic idea is that
the flares are a result of an ``avalanche'' of small-scale magnetic
reconnection events cascading \citep{Lu1993,Charbonneau2001,Aschwanden2014}
through the highly intense coronal magnetic structure \citep{Shibata2011} driven
at the critical state by the accidental photospheric movements of its magnetic
footprints. Such models thus provide a natural and computationally convenient
basis for the study of Parker hypothesis of the coronal heating by nanoflares
\citep{Parker1988}.
Another significant fact, which discriminates the theory from practice, or rather
gives a true understanding of the measurements against a given theory, should be
recalled here (e.g. (\ref{eq06-33a}); see Eq.~(5) in \cite{Lu1993}). The
nature of power laws is related to the strong connection between consecutive
events (this applies also to ``catastrophes''), which in turn gives rise to
a spatial nonlocality related to the appropriate structure of the medium (see
page 45 in \cite{Uchaikin2013}). As a result, the ``chain reaction'', i.e. the
avalanche-like growth of a perturbation involving more and more resources,
leads to heavy-tailed distributions. On the other hand, obviously, no
natural event may be characterized by infinite values of the mean and
variance. Therefore, power laws like (\ref{eq06-33a}) are approximate and
cannot hold for very large arguments. It means that the power-law decay
of the probability density rather corresponds to the average asymptotics, and
``semi-heavy tails'' must be observed in practice instead.
In this regard we suppose that the use of power-law distributions
with semi-heavy tails leads to a soft attenuation of the observed corona
spectra (which are not visible above $E > 2 \div 3 ~keV$), and thus to a close
coincidence between the observed solar spectra and the $\gamma$-spectra of axion
origin (Fig.~\ref{fig06}). That is,
\begin{equation}
\left( \frac{d \Phi}{dE} \right)^{(*)} \simeq
\left( \frac{d \Phi _{\gamma}}{dE} \right)^{(*)}_{axions}
~~~ \text{for energies} ~~ E > 2 \div 3 ~keV.
\label{eq06-35a}
\end{equation}
It means that the physics of the formation and ejection of the $\gamma$-quanta
above $2 \div 3 ~keV$ through the sunspots into the corona is not related to the
magnetic reconnection theory of e.g. \cite{Shibata2011} (Fig.~\ref{fig06}d),
and may be of axion origin.
With this in mind, let us consider the part of the differential solar axion
flux at the Earth~\citep{ref58}
\begin{align}
\frac{d \Phi _a}{dE} = 6.02 \cdot 10^{10} \left( \frac{g_{a\gamma}}{10^{-10} GeV^{-1}} \right)^2 E^{2.481} \exp \left( - \frac{E}{1.205} \right) ~~cm^{-2}
s^{-1} keV^{-1} ,
\label{eq3.33}
\end{align}
\noindent which characterizes the differential $\gamma$-spectrum of the axion
origin $d \Phi _{\gamma} / dE$
(see $[ d \Phi _{\gamma} / dE ]_{axions}$ in (\ref{eq06-33}) and
(\ref{eq06-35a}))
\begin{align}
\frac{d \Phi _{\gamma}}{dE} \cong P_{\gamma} \frac{d \Phi _{a}}{dE}
~~ cm^{-2} s^{-1} keV^{-1} \approx
6.1 \cdot 10^{-3} P_{\gamma} \frac{d \Phi _{a}}{dE}
~ ph\cdot cm^{-2} s^{-1} bin^{-1}
\label{eq3.34}
\end{align}
\noindent
where the spectral bin width is 6.1~eV (see Fig.~\ref{fig06}a);
the probability $P_{\gamma}$ describing the relative portion of $\gamma$-quanta
(of axion origin) channeling along the magnetic tubes may be defined, according
to~\cite{ref59}, from the observed solar luminosity variations in the X-ray
band, recorded in ROSAT/PSPC experiments (Fig.~\ref{fig06}):
$\left(L_{corona}^X \right) _{min} \approx 2.7
\cdot 10^{26} ~erg/s$ at minimum and
$\left( L_{corona}^X \right) _{max} \approx 4.7 \cdot 10^{27} ~erg/s$
at maximum,
\begin{equation}
P_{\gamma} = P_{a \rightarrow \gamma} \cdot \dfrac{\Omega \cdot (0.5 d_{spot})^2}
{(\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2} \cdot \Lambda_a
\approx 3.4 \cdot 10^{-3},
\label{eq3.35}
\end{equation}
\noindent directly following from the geometry of the system
(Fig.~\ref{fig-lampochka}b), where the conversion probability
$P_{a \rightarrow \gamma} \sim 1$ (\ref{eq3.31});
\begin{equation}
\Omega = (I_{\gamma ~CZ} / I_0) \cdot (I_{\gamma ~photo} / I_{\gamma ~CZ})
\cdot (I_{\gamma ~corona} / I_{\gamma ~photo}) \approx 0.23
\end{equation}
\noindent
is the total relative intensity of
$\gamma$-quanta, where $(I_{\gamma ~CZ} / I_0) \sim 1$ is the relative
intensity of $\gamma$-quanta ``channeling'' through the
magnetic tubes in the convective zone,
$I_{\gamma ~photo} / I_{\gamma ~CZ} = \exp {[-(\mu l)_{photo}]} \sim 0.23$ (see
Eq.~\ref{eq06-43}) is the relative intensity of the Compton-scattered
$\gamma$-quanta in the solar photosphere, and $I_{\gamma ~corona} / I_{\gamma
~photo} = \exp {[-(\mu l)_{corona}]} \approx 1$ (see Eq.~\ref{eq06-44})
is the relative intensity of the Compton-scattered $\gamma$-quanta in the solar
corona;
$d_{spot}$ is the measured diameter of the sunspot
(umbra) \citep{Dikpati2008,Gough2010}. Its size determines the relative portion
of the axions hitting the sunspot area. Further,
\begin{equation}
\dfrac{(0.5 d_{spot})^2}{(\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2} \cong 0.034,
\end{equation}
\noindent where
\begin{equation}
0.5 d_{spot} = \left[ \frac{1}{\pi} \left(
\frac{\langle sunspot ~area \rangle _{max}}{\left\langle N_{spot} \right\rangle _{max}}
\right) \right] ^{(1/2)} \cong 5500~km,
\end{equation}
\noindent and the value $\Lambda_a$ characterizes the portion of the
axion flux going through the total $(2\left\langle N_{spot}
\right\rangle_{max})$ sunspots on the photosphere:
\begin{equation}
\Lambda_a = \dfrac{\left( sunspot\ axion\ flux \right)}{(1/3)\left( total\ axion\ flux \right)} \approx
\dfrac{2 \left\langle N_{spot} \right\rangle _{max} (\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2}{(4/3) R_{Sun} ^2} \sim 0.42 ,
\label{eq3.36}
\end{equation}
\noindent where $\left\langle N_{spot} \right\rangle _{max} \approx 150$ is the
average maximal sunspot number, and
$\langle sunspot ~area \rangle _{max} \approx 7.5 \cdot 10^9 ~km^2$
($\approx 2470 ~ppm$ of the visible
hemisphere~\citep{Dikpati2008,Gough2010}) is the maximal total sunspot area for
cycle 22, as experimentally observed by the Japanese X-ray telescope Yohkoh (1991)~\citep{ref36}.
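For the reader's convenience, the chain of estimates (\ref{eq3.33})--(\ref{eq3.36})
can be reproduced numerically. The short Python sketch below is illustrative only:
it assumes the quoted intermediate values ($P_{a \rightarrow \gamma} \sim 1$,
$\Omega \approx 0.23$, the geometric factor $\approx 0.034$ and
$\Lambda_a \approx 0.42$) and the coupling $g_{a\gamma} \sim 4.4 \cdot 10^{-11}~GeV^{-1}$
adopted in the Conclusions; all variable names are ours.
\begin{verbatim}
import numpy as np

g10 = 4.4e-11 / 1e-10            # g_{a gamma} in units of 1e-10 GeV^-1

def axion_flux(E_keV):
    # Differential solar axion flux at the Earth, Eq. (3.33),
    # in cm^-2 s^-1 keV^-1
    return 6.02e10 * g10**2 * E_keV**2.481 * np.exp(-E_keV / 1.205)

# Channeling probability, Eq. (3.35): conversion probability times the
# relative gamma intensity, the sunspot geometric factor and the
# sunspot coverage factor
P_a_to_gamma = 1.0
Omega        = 0.23     # (I_CZ/I_0)(I_photo/I_CZ)(I_corona/I_photo)
geom_factor  = 0.034    # (0.5 d_spot)^2 / (tan(alpha/2) 0.3 R_Sun)^2
Lambda_a     = 0.42     # fraction of the axion flux through the sunspots
P_gamma = P_a_to_gamma * Omega * geom_factor * Lambda_a
print("P_gamma ~ %.1e" % P_gamma)          # ~3.3e-3, cf. Eq. (3.35)

# Gamma-spectrum of axion origin, Eq. (3.34), rebinned to 6.1 eV bins
bin_keV = 6.1e-3
for E in (3.0, 5.0, 8.0):                  # keV, above the 2-3 keV threshold
    dPhi = P_gamma * axion_flux(E)         # cm^-2 s^-1 keV^-1
    print("E = %.0f keV: %.2e /cm2/s/keV, %.2e ph/cm2/s/bin"
          % (E, dPhi, dPhi * bin_keV))
\end{verbatim}
Running this sketch gives $P_{\gamma} \approx 3.3 \cdot 10^{-3}$, consistent with
(\ref{eq3.35}) within the rounding of the quoted factors.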
On the other hand, from the known observations (see~\cite{ref59} and
Appendix~\ref{appendix-luminosity})
\begin{equation}
\frac{(L_{corona}^X)_{max}}{L_{Sun}} \cong 1.22 \cdot 10^{-6},
\label{eq3.37}
\end{equation}
\noindent where $L_{Sun} = 3.8418 \cdot 10^{33} ~erg/s$ is the solar
luminosity~\citep{ref63}. Using the theoretical estimate (\ref{eq3.32}) of the
axion contribution to the solar luminosity, one can see that the value obtained
in (\ref{eq3.35}) is in good agreement with the observations (\ref{eq3.37}):
\begin{equation}
P_{\gamma} = \left. \frac{(L_{corona}^X)_{max}}{L_{Sun}} \middle/
\frac{L_a}{L_{Sun}} \sim 3.4 \cdot 10^{-3} \right. ,
\label{eq3.38}
\end{equation}
\noindent which is derived independently of the geometric estimate (\ref{eq3.35}).
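The same number can be cross-checked against the luminosity ratio
(\ref{eq3.37})--(\ref{eq3.38}). The fraction $L_a / L_{Sun}$ is fixed by
(\ref{eq3.32}), which is not reproduced in this section, so in the illustrative
sketch below it is only inferred from the quoted values.
\begin{verbatim}
L_X_max = 4.7e27            # erg/s, (L_corona^X)_max
L_Sun   = 3.8418e33         # erg/s, solar luminosity
ratio   = L_X_max / L_Sun
print("L_X_max / L_Sun ~ %.2e" % ratio)     # ~1.22e-6, Eq. (3.37)

P_gamma = 3.4e-3                            # geometric estimate, Eq. (3.35)
print("implied L_a / L_Sun ~ %.1e" % (ratio / P_gamma))   # ~3.6e-4
\end{verbatim}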
In other words, if the hadronic axions found in the Sun are the same particles
found in the white dwarfs, with the known strength of the axion coupling to
photons (see (\ref{eq3.30}) and Fig.~\ref{fig05}a,b),
it is quite natural that the independent observations give the same estimate of
the probability $P_{\gamma}$ (see (\ref{eq3.35}) and (\ref{eq3.38})). The
consequences of the choice (\ref{eq3.30}) are thus fixed by independent
inputs: the measured average sunspot radius and sunspot
number~\citep{Dikpati2008,Gough2010}, the model estimates of the horizontal
magnetic field and of the height $L_{MS}$ of the magnetic steps (see
Fig.~\ref{fig-lampochka}), the hard part of the solar photon spectrum, which is
mainly determined by the axion-photon conversion efficiency, and the
theoretical estimate of the fraction of the axion luminosity $L_a$ in the total
luminosity of the Sun $L_{Sun}$ (\ref{eq3.38}).
\section{Axion mechanism of the solar Equator -- Poles effect}
The axion mechanism of Sun luminosity variations is largely validated by the experimental
X-ray images of the Sun in the quiet (Fig.~\ref{fig-Yohkoh}a) and active
(Fig.~\ref{fig-Yohkoh}b) phases~\citep{ref36}, which clearly reveal the
so-called Solar Equator -- Poles effect (Fig.~\ref{fig-Yohkoh}b).
\begin{figure*}
\centerline{\includegraphics[width=12cm]{Sun_X-ray_image_spectrum-3.pdf}}
\caption{\textbf{Top:} Solar images at photon energies from 250~eV up to a few
keV from the Japanese X-ray telescope Yohkoh (1991-2001) (adopted
from~\cite{ref36}). The following is shown:
\newline
(a) a composite of 49 of the quietest solar periods during the solar minimum in 1996;
\newline
(b) solar X-ray activity during the last maximum of the 11-year solar cycle.
Most of the solar X-ray activity (right) occurs within a wide band of
$\pm 45^{\circ}$ in latitude and is homogeneous in longitude. Note that
$\sim$95\% of the solar magnetic activity is confined to this band.
\newline
\textbf{Bottom:} (c) Axion mechanism of solar irradiance variations
above $2 \div 3 ~keV$, which is independent of the cascade reconnection
processes in the corona (see the shaded areas and Fig.~\ref{fig06}b,c,d);
the red and blue curves characterize the irradiance increment in the
active and quiet phases of the Sun, respectively;
\newline
(d) schematic picture of the radial propagation of the axions inside the Sun.
Blue lines on the Sun designate the magnetic field. Near the tachocline
(Fig.~\ref{fig-lampochka}a) the axions are converted into $\gamma$-quanta,
which form the experimentally observed Solar photon spectrum after passing the
photosphere (Fig.~\ref{fig06}). Solar axions that move towards the poles (blue
cones) and in the equatorial plane (blue band) are not converted by the
Primakoff effect (inset: diagram of the inverse coherent process). The
variations of the solar axions may be observed at the Earth by special
detectors like the new generation CAST-helioscopes~\citep{ref68}. }
\label{fig-Yohkoh}
\end{figure*}
The essence of this effect is the following. It is known that axions
may be transformed into $\gamma$-quanta by the inverse Primakoff effect only in a
transverse magnetic field. Therefore the axions that travel towards the
poles (blue cones in Fig.~\ref{fig-Yohkoh}b) or along the equator (the blue band in
Fig.~\ref{fig-Yohkoh}b) are not transformed into $\gamma$-quanta by the inverse
Primakoff effect, since there the magnetic field vector is almost collinear with the
axion momentum. The observed nontrivial X-ray distribution in the
active phase of the Sun may thus be easily and naturally described within the
framework of the axion mechanism of the solar luminosity variations.
As described in Section~\ref{subsec-channeling}, the photons of axion origin
travel through the convective zone along the magnetic flux tubes, up to the
photosphere. In the photosphere they are Compton-scattered, which results in a
substantial deviation from the initial propagation directions of the axions
(Fig.~\ref{fig07a}).
\begin{figure*}
\centerline{\includegraphics[width=15cm]{axion-channaling-scattering-Yohkoh-3.pdf}}
\caption{The formation of the high X-ray intensity bands on the Yohkoh
matrix. \label{fig07a}}
\end{figure*}
Let us make a simple estimate of the Compton scattering efficiency in terms of
the X-ray photon mean free path (MFP) in the photosphere:
\begin{equation}
l_{\mu} = (\mu)^{-1} = \left( \sigma_c \cdot n_e \right)^{-1} ,
\label{eq-compt-01}
\end{equation}
\noindent where
$\mu$ is the total linear attenuation coefficient
(cm$^{-1}$),
$\sigma_c = \sigma_0 = 8 \pi r_0^2 / 3$ is the total Compton cross-section in the
low-energy (Thomson) limit \citep{ref81,ref82}, $n_e$ is the electron density in the
photosphere, and $r_0 = 2.8\cdot10^{-13}~cm$ is the so-called classical
electron radius.
Taking the widely used value of the matter density in the solar
photosphere, $\rho \sim 10^{-7} ~g/cm^3$, and supposing that it consists of
hydrogen only (for the sake of the estimate), we obtain
\begin{equation}
n_e \approx \frac{\rho}{m_H} \approx 6 \cdot 10^{16} ~ electron / cm^3\, ,
\label{eq-compt-02}
\end{equation}
which yields the MFP of the photon \citep{ref81,ref82}
\begin{align}
l_{\mu} = \left( 7 \cdot 10^{-25} ~cm^2 \cdot 6 \cdot 10^{16} ~electron/cm^3 \right)^{-1}
\approx 2.4 \cdot 10^7 ~cm = 240 ~km .
\label{eq-compt-03}
\end{align}
Since this value is smaller than the thickness of the solar photosphere
($l_{photo} \sim 300 \div 400 ~km$),
the Compton scattering is efficient enough
to redirect the photons of axion origin, which can then be detected at the Earth
(see \mbox{Fig.~\ref{fig07a}} and \mbox{Fig.~\ref{fig06}}, adopted from
\mbox{\cite{ref59}}):
\begin{equation}
\frac{I_{\gamma ~photo}}{I_{\gamma ~CZ}}
= \exp {\left[ - (\mu l)_{photo} \right]} \sim 0.23,
\label{eq06-43}
\end{equation}
\noindent which follows from the particular case of Compton scattering, namely the
Thomson differential and total cross sections for unpolarized photons
\citep{Griffiths1995}.
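As a sanity check, the estimate (\ref{eq-compt-01})--(\ref{eq06-43}) can be
reproduced with a few lines of Python; the photosphere thickness used below is an
assumed mid-range value within the quoted $300 \div 400~km$ interval.
\begin{verbatim}
import numpy as np

r0      = 2.8e-13                 # cm, classical electron radius
sigma_T = 8 * np.pi * r0**2 / 3   # ~6.6e-25 cm^2, Thomson cross section
rho     = 1e-7                    # g/cm^3, photospheric matter density
m_H     = 1.67e-24                # g, hydrogen mass
n_e     = rho / m_H               # ~6e16 electrons/cm^3

l_mu = 1.0 / (sigma_T * n_e)      # photon mean free path, cm
print("l_mu ~ %.0f km" % (l_mu / 1e5))                     # ~250 km

l_photo = 350e5                   # cm, assumed photosphere thickness
print("I_photo / I_CZ ~ %.2f" % np.exp(-l_photo / l_mu))   # ~0.23-0.25
\end{verbatim}
With the rounded cross section $7 \cdot 10^{-25}~cm^2$ used in the text, the same
arithmetic gives $l_{\mu} \approx 240~km$ and
$I_{\gamma ~photo}/I_{\gamma ~CZ} \approx 0.23$.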
And finally, taking into account that in the chromosphere $l_{chromo} \sim 2 \cdot 10^3 ~km$ and
$n_e \sim 10^{13} ~electron/cm^3$ (i.e. $l_{\mu} \sim 1.4 \cdot 10^6 ~km$), while in the corona
$l_{corona} \sim 10^5 ~km$ and $n_e < 10^{11} ~electron/cm^3$ (i.e. $l_{\mu} >
1.4 \cdot 10^8 ~km$) (Fig.~12.9 in \cite{Aschwanden2004}), one may calculate
the relative intensity of the $\gamma$-quanta surviving Compton scattering in the
chromosphere and the solar corona:
\begin{equation}
\frac{I_{\gamma ~corona}}{I_{\gamma ~photo}}
= \frac{I_{\gamma ~chromo}}{I_{\gamma ~photo}} \cdot
\frac{I_{\gamma ~corona}}{I_{\gamma ~chromo}} =
\exp {\left[ - (\mu l)_{chromo} \right]} \cdot
\exp {\left[ - (\mu l)_{corona} \right]} \approx 1 ,
\label{eq06-44}
\end{equation}
\noindent which enters the
total relative intensity of $\gamma$-quanta $\Omega$ (see Eq.~(\ref{eq3.35})).
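The transparency of the outer layers can be checked in the same way; the sketch
below uses the column parameters quoted above (and the upper bound on $n_e$ for
the corona), so the resulting optical depths are rough estimates only.
\begin{verbatim}
import numpy as np

sigma_T = 6.65e-25                     # cm^2, Thomson cross section

# chromosphere: path ~2e3 km, n_e ~ 1e13 cm^-3
tau_chromo = sigma_T * 1e13 * 2e3 * 1e5
# corona: path ~1e5 km, n_e < 1e11 cm^-3 (upper bound)
tau_corona = sigma_T * 1e11 * 1e5 * 1e5

print("tau_chromo ~ %.1e, tau_corona ~ %.1e" % (tau_chromo, tau_corona))
print("I_corona / I_photo ~ %.3f" % np.exp(-(tau_chromo + tau_corona)))
\end{verbatim}
Both optical depths are of the order of $10^{-3}$, so the combined transmission
is indeed $\approx 1$, as stated in (\ref{eq06-44}).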
A brief summary is appropriate here. Coronal activity is a
collection of plasma processes arising from the passage through the corona of
magnetic fields generated from below by the solar dynamo in cycles of approximately
11 years (Fig.~\ref{fig-Yohkoh}). This global process, culminating in the
reversal of the solar magnetic dipole at the end of each cycle, involves the
turbulent dissipation of the magnetic energy, the flares and the heating of the
corona. Turbulent, highly dissipative, as well as largely ideal MHD
processes each play their distinct role, liberating comparable amounts of the
energy stored in the magnetic fields.
This mechanism is illustrated
in Fig.~\ref{fig06}d. When the magnetic flux erupts through the photosphere, it
forms a pair of sunspots, pushing the magnetic field up and aside. The magnetic
field inside the sunspots is very strong and the convection is suppressed;
therefore the coalescence of the magnetic field is also suppressed. When a
magnetic field line crosses the region of high Poynting flux, the energy is
deposited along this line in the form of plasma heating, which makes such a line
visible in the EUV band for a short time. While the magnetic field is being
pushed to the sides, the next field line crosses the region of high Poynting flux
and flares up at the same position as the previous one, and so on. This creates
the illusion of a static flaring loop, while the magnetic field is in fact
moving. It is interesting to note that \cite{Chen2015} expect future
investigations to show to what extent this scenario also holds for X-ray
emission (see Supplementary Section~3 in \cite{Chen2015}).
In this context it is very important to consider the experimental observations
of solar X-ray jets (e.g. by the Yohkoh and Hinode solar space
missions), which show, for example, a gigantic coronal jet ejected from a
compact active region in a coronal hole \citep{Shibata1994} and tiny
chromospheric anemone jets \citep{Shibata2007}.
These jets are believed to be indirect proof of small-scale ubiquitous
reconnection in the solar atmosphere and may play an important role in heating
it, as conjectured by Parker \citep{Parker1988,Zhang2015,Sterling2015}.
Our main supposition here is that, in contrast to the EUV images (see the orange line in
Fig.~\ref{fig06}d) and the coronal X-rays below $\sim 2 \div 3~keV$, the hard X-ray
emission above $\sim 3~keV$ consists in fact of $\gamma$-quanta of axion origin,
born inside the magnetic tubes (see the sunspot in Fig.~\ref{fig06}d), and is not
related to the mentioned indirect evidence (see e.g. Fig.~42 and Fig.~47 in
\cite{Shibata2011}) of coronal jets generated by the solar dynamo in cycles
of approximately 11 years (Fig.~\ref{fig-Yohkoh}). It will be interesting to
see whether the proposed picture is ultimately confirmed, modified, or rejected
by future observations and by theoretical work pinning down the underlying physical
ideas.
Taking into account the directional patterns of the resulting radiation, as well
as the fact that the maximum of the axion-originated X-ray radiation is
situated near 30--40 degrees of latitude (because of the solar magnetic field
configuration), the mechanism of the formation of the high X-ray intensity bands on
the Yohkoh matrix becomes obvious. The widening of these bands near the
edges of the image is discussed in detail in Appendix~\ref{appendix-widening}.
\section{Summary and Conclusions}
In this paper we present a self-consistent model of the axion mechanism of
the Sun's luminosity variations, in the framework of which we estimate the values of the axion
mass ($m_a \sim 3.2 \cdot 10^{-2} ~eV$) and of the axion coupling constant to
photons ($g_{a \gamma} \sim 4.4 \cdot 10^{-11} ~GeV^{-1}$). A good
correspondence between the solar axion-photon oscillation parameters and the
hadronic axion-photon coupling derived from white dwarf cooling (see
Fig.~\ref{fig05}) is demonstrated.
One of the key ideas behind the axion mechanism of Sun luminosity variations is the effect
of $\gamma$-quanta channeling along the magnetic flux tubes (waveguides inside
the cool region) above the base of the solar convective zone
(Figs.~\ref{fig04-3}, \ref{fig-twisted-tube} and~\ref{fig-lampochka}). The low
refraction (i.e. the high transparency) of the thin magnetic flux tubes is
achieved due to the ultrahigh magnetic pressure (Fig.~\ref{fig-lampochka}a)
induced by the magnetic field of about 4100~T (see Eq.~(\ref{eq06-16}) and
Fig.~\ref{fig-lampochka}a). It may thus be concluded that the axion mechanism of
Sun luminosity variations, based on the lossless $\gamma$-quanta channeling along the
magnetic tubes, explains the partial suppression of the
convective heat transfer and thus helps to understand the known puzzling darkness of
the sunspots (see Sect.~2.2.1 in \cite{Rempel2011}).
It is shown that the axion mechanism of luminosity variations (in which the
variations are produced by adding the intensity variations of the $\gamma$-quanta of
axion origin to the coronal part of the solar spectrum,
Fig.~\ref{fig-Yohkoh}c) easily explains the physics of the so-called Solar
Equator -- Poles effect, observed in the form of the anomalous X-ray
distribution over the surface of the active Sun recorded by the Japanese X-ray
telescope Yohkoh (Fig.~\ref{fig-Yohkoh}, top).
The essence of this effect consists in the following: the axions that move towards
the poles (blue cones in Fig.~\ref{fig-Yohkoh}, bottom) or along the equator (blue
band in Fig.~\ref{fig-Yohkoh}, bottom) are not transformed into
$\gamma$-quanta by the inverse Primakoff effect, because the magnetic field
vector is almost collinear with the axion momentum in these regions (see the
inset in Fig.~\ref{fig-Yohkoh}, bottom). Therefore the anomalous X-ray
distribution over the surface of the active Sun is a kind of ``photo'' of the
regions where the axion momentum is orthogonal to the magnetic field vector
in the solar overshoot tachocline. The solar Equator -- Poles effect is not
observed during the quiet phase of the Sun because of the weakness of the magnetic
field in the overshoot tachocline: the luminosity increment of
axion origin is extremely small in the quiet phase as compared to the active
phase of the Sun.
In this sense, the experimental observation of the solar Equator -- Poles
effect is the most striking evidence of the axion mechanism of Sun luminosity
variations. It is hard to imagine another model or consideration which would
explain such an anomalous X-ray radiation distribution over the active Sun surface
just as well (compare Fig.~\ref{fig-Yohkoh}a,b with Fig.~\ref{app-b-fig01}a).
And, finally, let us emphasize one essential and rather painful point of the
present paper. It is related to the key problem of the axion mechanism of the solar
luminosity variations and can be stated rather simply: ``Is the conversion of axions
into $\gamma$-quanta by the Primakoff effect really possible in the magnetic
steps of an O-loop near the solar overshoot tachocline?'' This question is
directly connected to the problem of the existence of hollow magnetic flux tubes
in the convective zone of the Sun, which are supposed to connect the tachocline
with the photosphere. So either a more general theory of the Sun or the
experiment has to answer the question of whether there are waveguides, in
the form of hollow magnetic flux tubes perfectly transparent to $\gamma$-quanta,
in the cool region of the convective zone of the Sun, or our
model of the axion mechanism of Sun luminosity variations is built on simply guessed
rules of calculation which do not reflect the real nature of things.
\section*{Acknowledgements}
\noindent The work of M. Eingorn was supported by NSF CREST award HRD-1345219
and NASA grant NNX09AV07A.